Test Report: Docker_Linux_containerd 19651

f000a69778791892f7d89fef6358d7150d12a198:2024-09-16:36236

Failed tests (56/306)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 300.9
31 TestAddons/serial/GCPAuth/Namespaces 0
33 TestAddons/parallel/Registry 14.3
34 TestAddons/parallel/Ingress 1.88
36 TestAddons/parallel/MetricsServer 367.04
37 TestAddons/parallel/HelmTiller 89
39 TestAddons/parallel/CSI 361.86
42 TestAddons/parallel/LocalPath 0
46 TestCertOptions 27.96
68 TestFunctional/serial/KubeContext 1.73
69 TestFunctional/serial/KubectlGetPods 1.74
82 TestFunctional/serial/ComponentHealth 1.98
85 TestFunctional/serial/InvalidService 0
88 TestFunctional/parallel/DashboardCmd 4.02
95 TestFunctional/parallel/ServiceCmdConnect 3.43
97 TestFunctional/parallel/PersistentVolumeClaim 90.67
101 TestFunctional/parallel/MySQL 2.16
107 TestFunctional/parallel/NodeLabels 2.07
112 TestFunctional/parallel/ServiceCmd/DeployApp 0
113 TestFunctional/parallel/ServiceCmd/List 0.34
114 TestFunctional/parallel/ServiceCmd/JSONOutput 0.38
115 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
116 TestFunctional/parallel/ServiceCmd/Format 0.32
117 TestFunctional/parallel/ServiceCmd/URL 0.32
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 104.48
138 TestFunctional/parallel/MountCmd/any-port 2.3
162 TestMultiControlPlane/serial/NodeLabels 2.04
167 TestMultiControlPlane/serial/RestartSecondaryNode 17.38
170 TestMultiControlPlane/serial/DeleteSecondaryNode 11.42
173 TestMultiControlPlane/serial/RestartCluster 68.91
229 TestMultiNode/serial/MultiNodeLabels 2
233 TestMultiNode/serial/StartAfterStop 10.56
235 TestMultiNode/serial/DeleteNode 7.78
237 TestMultiNode/serial/RestartMultiNode 53.87
251 TestKubernetesUpgrade 322.37
297 TestStartStop/group/no-preload/serial/DeployApp 3.67
298 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.68
302 TestStartStop/group/old-k8s-version/serial/DeployApp 3.8
303 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.57
306 TestStartStop/group/old-k8s-version/serial/SecondStart 377.16
309 TestStartStop/group/embed-certs/serial/DeployApp 3.77
310 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.6
316 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 3.5
317 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.59
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 7.23
338 TestNetworkPlugins/group/auto/NetCatPod 1800.31
340 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 7.39
345 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 7.03
351 TestNetworkPlugins/group/kindnet/NetCatPod 1800.3
353 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 7.34
359 TestNetworkPlugins/group/calico/NetCatPod 1800.29
361 TestNetworkPlugins/group/enable-default-cni/NetCatPod 1800.29
365 TestNetworkPlugins/group/flannel/NetCatPod 1800.31
369 TestNetworkPlugins/group/bridge/NetCatPod 1800.3
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 1800.29
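
The Volcano log below shows the failure mode that recurs throughout this report: every kubectl invocation dies in under a millisecond with "fork/exec /usr/local/bin/kubectl: exec format error". On Linux that error is errno ENOEXEC, raised when the kernel refuses to load a binary whose format does not match the host, typically a kubectl built for a different CPU architecture (or a truncated download) installed at that path; the test never reaches the cluster at all. Below is a minimal Go sketch of how this class of error surfaces through os/exec, the same mechanism the harness uses to shell out; the version-probe command is illustrative, not part of the test suite.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	// If /usr/local/bin/kubectl was built for a different architecture,
	// fork/exec fails with ENOEXEC before the program ever runs; Go
	// renders that errno as "exec format error".
	err := exec.Command("/usr/local/bin/kubectl", "version", "--client").Run()
	if errors.Is(err, syscall.ENOEXEC) {
		fmt.Println("kubectl is the wrong binary format for this host:", err)
	} else if err != nil {
		fmt.Println("kubectl failed for another reason:", err)
	}
}

The sub-millisecond durations recorded against each kubectl call in the log below (for example 382.276µs) are consistent with failing at exec time rather than inside kubectl itself.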
TestAddons/serial/Volcano (300.9s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 14.273105ms
addons_test.go:897: volcano-scheduler stabilized in 14.386237ms
addons_test.go:905: volcano-admission stabilized in 14.418757ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-jtz7f" [4098242c-564d-48c1-85bb-1a269db97aa8] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003629484s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-rcfsk" [5919f54b-8406-43d8-bb45-66bd740958e6] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004061306s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-hdpdb" [82d3745f-6f40-4fda-ba64-265d6b361879] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002859979s
addons_test.go:932: (dbg) Run:  kubectl --context addons-191972 delete -n volcano-system job volcano-admission-init
addons_test.go:932: (dbg) Non-zero exit: kubectl --context addons-191972 delete -n volcano-system job volcano-admission-init: fork/exec /usr/local/bin/kubectl: exec format error (382.276µs)
addons_test.go:934: vcjob creation with kubectl --context addons-191972 delete -n volcano-system job volcano-admission-init failed: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:938: (dbg) Run:  kubectl --context addons-191972 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Non-zero exit: kubectl --context addons-191972 create -f testdata/vcjob.yaml: fork/exec /usr/local/bin/kubectl: exec format error (284.626µs)
addons_test.go:940: vcjob creation with kubectl --context addons-191972 create -f testdata/vcjob.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:946: (dbg) Run:  kubectl --context addons-191972 get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context addons-191972 get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (240.975µs)
addons_test.go:946: (dbg) Run:  kubectl --context addons-191972 get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context addons-191972 get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (477.823µs)
addons_test.go:946: (dbg) Run:  kubectl --context addons-191972 get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context addons-191972 get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (417.9µs)
addons_test.go:946: (dbg) Run:  kubectl --context addons-191972 get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context addons-191972 get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (397.413µs)
addons_test.go:946: (dbg) Run:  kubectl --context addons-191972 get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context addons-191972 get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (414.656µs)
addons_test.go:946: (dbg) Run:  kubectl --context addons-191972 get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context addons-191972 get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (410.911µs)
addons_test.go:946: (dbg) Run:  kubectl --context addons-191972 get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context addons-191972 get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (418.65µs)
addons_test.go:946: (dbg) Run:  kubectl --context addons-191972 get vcjob -n my-volcano
addons_test.go:946: (dbg) Non-zero exit: kubectl --context addons-191972 get vcjob -n my-volcano: fork/exec /usr/local/bin/kubectl: exec format error (430.329µs)
addons_test.go:960: failed checking volcano: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-191972 -n addons-191972
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-16 10:32:28.763759804 +0000 UTC m=+615.746016871
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-191972
helpers_test.go:235: (dbg) docker inspect addons-191972:
-- stdout --
	[
	    {
	        "Id": "49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd",
	        "Created": "2024-09-16T10:23:37.048894749Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 13416,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:23:37.183215602Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/hosts",
	        "LogPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd-json.log",
	        "Name": "/addons-191972",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-191972:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-191972",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-191972",
	                "Source": "/var/lib/docker/volumes/addons-191972/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-191972",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-191972",
	                "name.minikube.sigs.k8s.io": "addons-191972",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b247e3d2e57f223fa64fb9fece255c3b6a0f61eb064ba71e6e8c51f7e6b8590a",
	            "SandboxKey": "/var/run/docker/netns/b247e3d2e57f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-191972": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "aac8db9a46c7b7c219b85113240d1d4a2ee20d1c156fb7315fdf6aa5e797f6a8",
	                    "EndpointID": "ab683490c93590fb0411cd607b8ad8f3100f7ae01f11dd3e855f6321d940faae",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-191972",
	                        "49285aed0ac6"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
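
Rather than parsing this JSON wholesale, the harness reads individual fields with Go templates; the same log later shows calls such as docker container inspect addons-191972 --format={{.State.Status}}. Below is a minimal sketch of that technique, assuming only that the docker CLI is on PATH; the containerStatus helper is a hypothetical name, not minikube code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus asks the docker CLI to render a single field of the
// inspect document through a Go template instead of returning the full
// JSON dump shown above.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Status}}", name).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerStatus("addons-191972")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println(status) // the dump above reports "running"
}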
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-191972 -n addons-191972
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-191972 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-191972 logs -n 25: (1.203092834s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-297488              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p download-only-297488              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-024449              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-024449              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-297488              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-024449              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | download-docker-065822 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | download-docker-065822               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-065822            | download-docker-065822 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | binary-mirror-727123   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | binary-mirror-727123                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34779               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-727123              | binary-mirror-727123   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | enable dashboard -p                  | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| start   | -p addons-191972 --wait=true         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:27 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:15.015457   12653 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:15.015610   12653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:15.015623   12653 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:15.015629   12653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:15.015835   12653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:23:15.016423   12653 out.go:352] Setting JSON to false
	I0916 10:23:15.017221   12653 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":339,"bootTime":1726481856,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:15.017316   12653 start.go:139] virtualization: kvm guest
	I0916 10:23:15.019468   12653 out.go:177] * [addons-191972] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:23:15.020856   12653 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:15.020860   12653 notify.go:220] Checking for updates...
	I0916 10:23:15.023158   12653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:15.024282   12653 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:23:15.025336   12653 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:23:15.026362   12653 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:15.027468   12653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:15.028714   12653 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:15.049632   12653 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:23:15.049710   12653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:15.095467   12653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:15.085826834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:15.095614   12653 docker.go:318] overlay module found
	I0916 10:23:15.097552   12653 out.go:177] * Using the docker driver based on user configuration
	I0916 10:23:15.098917   12653 start.go:297] selected driver: docker
	I0916 10:23:15.098932   12653 start.go:901] validating driver "docker" against <nil>
	I0916 10:23:15.098957   12653 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:15.099817   12653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:15.144749   12653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:15.136589077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:15.144922   12653 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:15.145171   12653 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:15.147081   12653 out.go:177] * Using Docker driver with root privileges
	I0916 10:23:15.148504   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:15.148563   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:15.148575   12653 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:15.148632   12653 start.go:340] cluster config:
	{Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:15.149981   12653 out.go:177] * Starting "addons-191972" primary control-plane node in "addons-191972" cluster
	I0916 10:23:15.151239   12653 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:23:15.152375   12653 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:23:15.153439   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:15.153479   12653 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:23:15.153492   12653 cache.go:56] Caching tarball of preloaded images
	I0916 10:23:15.153495   12653 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:23:15.153601   12653 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:23:15.153613   12653 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:23:15.153950   12653 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json ...
	I0916 10:23:15.153974   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json: {Name:mk77e04db13eac753d69895eba14a3f7223b28d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:15.169560   12653 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:23:15.169666   12653 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:23:15.169681   12653 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:23:15.169685   12653 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:23:15.169694   12653 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:23:15.169701   12653 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:23:27.861517   12653 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:23:27.861553   12653 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:23:27.861589   12653 start.go:360] acquireMachinesLock for addons-191972: {Name:mk1204ee6335c794af5ff39cd93a214e3c1d654b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:23:27.861691   12653 start.go:364] duration metric: took 80.959µs to acquireMachinesLock for "addons-191972"
	I0916 10:23:27.861720   12653 start.go:93] Provisioning new machine with config: &{Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:23:27.861797   12653 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:23:27.864363   12653 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0916 10:23:27.864609   12653 start.go:159] libmachine.API.Create for "addons-191972" (driver="docker")
	I0916 10:23:27.864644   12653 client.go:168] LocalClient.Create starting
	I0916 10:23:27.864787   12653 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:23:28.100386   12653 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:23:28.472961   12653 cli_runner.go:164] Run: docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:23:28.488573   12653 cli_runner.go:211] docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:23:28.488653   12653 network_create.go:284] running [docker network inspect addons-191972] to gather additional debugging logs...
	I0916 10:23:28.488675   12653 cli_runner.go:164] Run: docker network inspect addons-191972
	W0916 10:23:28.503724   12653 cli_runner.go:211] docker network inspect addons-191972 returned with exit code 1
	I0916 10:23:28.503773   12653 network_create.go:287] error running [docker network inspect addons-191972]: docker network inspect addons-191972: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-191972 not found
	I0916 10:23:28.503790   12653 network_create.go:289] output of [docker network inspect addons-191972]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-191972 not found
	
	** /stderr **
	I0916 10:23:28.503874   12653 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:28.520445   12653 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ac6790}
	I0916 10:23:28.520486   12653 network_create.go:124] attempt to create docker network addons-191972 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 10:23:28.520531   12653 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-191972 addons-191972
	I0916 10:23:28.578324   12653 network_create.go:108] docker network addons-191972 192.168.49.0/24 created
	I0916 10:23:28.578353   12653 kic.go:121] calculated static IP "192.168.49.2" for the "addons-191972" container
	I0916 10:23:28.578405   12653 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:23:28.593459   12653 cli_runner.go:164] Run: docker volume create addons-191972 --label name.minikube.sigs.k8s.io=addons-191972 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:23:28.611104   12653 oci.go:103] Successfully created a docker volume addons-191972
	I0916 10:23:28.611189   12653 cli_runner.go:164] Run: docker run --rm --name addons-191972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --entrypoint /usr/bin/test -v addons-191972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:23:32.566442   12653 cli_runner.go:217] Completed: docker run --rm --name addons-191972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --entrypoint /usr/bin/test -v addons-191972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (3.955205965s)
	I0916 10:23:32.566475   12653 oci.go:107] Successfully prepared a docker volume addons-191972
	I0916 10:23:32.566499   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:32.566524   12653 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:23:32.566588   12653 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-191972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:23:36.989473   12653 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-191972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.422844639s)
	I0916 10:23:36.989499   12653 kic.go:203] duration metric: took 4.422974303s to extract preloaded images to volume ...
	W0916 10:23:36.989616   12653 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:23:36.989704   12653 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:23:37.034645   12653 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-191972 --name addons-191972 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-191972 --network addons-191972 --ip 192.168.49.2 --volume addons-191972:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:23:37.351088   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Running}}
	I0916 10:23:37.369798   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.389505   12653 cli_runner.go:164] Run: docker exec addons-191972 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:23:37.432507   12653 oci.go:144] the created container "addons-191972" has a running status.
	I0916 10:23:37.432542   12653 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa...
	I0916 10:23:37.512853   12653 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:23:37.532177   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.549342   12653 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:23:37.549361   12653 kic_runner.go:114] Args: [docker exec --privileged addons-191972 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:23:37.594990   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.611429   12653 machine.go:93] provisionDockerMachine start ...
	I0916 10:23:37.611513   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:37.628951   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:37.629230   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:37.629249   12653 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:23:37.630101   12653 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54456->127.0.0.1:32768: read: connection reset by peer
	I0916 10:23:40.759062   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-191972
	
	I0916 10:23:40.759087   12653 ubuntu.go:169] provisioning hostname "addons-191972"
	I0916 10:23:40.759139   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:40.776123   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:40.776294   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:40.776306   12653 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-191972 && echo "addons-191972" | sudo tee /etc/hostname
	I0916 10:23:40.917999   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-191972
	
	I0916 10:23:40.918073   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:40.934369   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:40.934536   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:40.934552   12653 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-191972' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-191972/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-191972' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:23:41.063670   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:23:41.063696   12653 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:23:41.063755   12653 ubuntu.go:177] setting up certificates
	I0916 10:23:41.063769   12653 provision.go:84] configureAuth start
	I0916 10:23:41.063821   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.080185   12653 provision.go:143] copyHostCerts
	I0916 10:23:41.080289   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:23:41.080452   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:23:41.080539   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:23:41.080607   12653 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.addons-191972 san=[127.0.0.1 192.168.49.2 addons-191972 localhost minikube]
	I0916 10:23:41.189624   12653 provision.go:177] copyRemoteCerts
	I0916 10:23:41.189685   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:23:41.189718   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.206072   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
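
Every SSH step here first resolves which host port Docker mapped to the container's 22/tcp (Port:32768 above). That lookup can be reproduced with a short Go sketch around the same `docker container inspect` template; the container name is this run's profile, and error handling is trimmed:

// portlookup.go - resolve the host port Docker mapped to the container's 22/tcp.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "addons-191972").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("ssh -p %s docker@127.0.0.1\n", strings.TrimSpace(string(out)))
}
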
	I0916 10:23:41.299940   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:23:41.321259   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:23:41.342100   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:23:41.362764   12653 provision.go:87] duration metric: took 298.977855ms to configureAuth
	I0916 10:23:41.362793   12653 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:23:41.362955   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:41.362966   12653 machine.go:96] duration metric: took 3.751519266s to provisionDockerMachine
	I0916 10:23:41.362991   12653 client.go:171] duration metric: took 13.498318264s to LocalClient.Create
	I0916 10:23:41.363014   12653 start.go:167] duration metric: took 13.498406844s to libmachine.API.Create "addons-191972"
	I0916 10:23:41.363024   12653 start.go:293] postStartSetup for "addons-191972" (driver="docker")
	I0916 10:23:41.363035   12653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:41.363112   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:41.363159   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.379631   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.472315   12653 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:23:41.475416   12653 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:41.475455   12653 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:41.475469   12653 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:41.475477   12653 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:23:41.475490   12653 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:23:41.475562   12653 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:23:41.475593   12653 start.go:296] duration metric: took 112.560003ms for postStartSetup
	I0916 10:23:41.475953   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.491831   12653 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json ...
	I0916 10:23:41.492098   12653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:23:41.492159   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.508709   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.604422   12653 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
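
The two df probes capture root-volume pressure (percent used, then free gigabytes) just before the machine lock is released. A compact sketch of the same probe, with field positions matching the awk programs above:

// diskprobe.go - report usage (%) and free GiB for /var, like the df|awk pair above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func df(args ...string) []string {
	out, err := exec.Command("df", args...).Output()
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
	return strings.Fields(lines[1]) // NR==2 in the awk programs
}

func main() {
	fmt.Println("used:", df("-h", "/var")[4])  // $5: Use%
	fmt.Println("free:", df("-BG", "/var")[3]) // $4: Avail, in 1G blocks
}
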
	I0916 10:23:41.608355   12653 start.go:128] duration metric: took 13.746544864s to createHost
	I0916 10:23:41.608378   12653 start.go:83] releasing machines lock for "addons-191972", held for 13.74667303s
	I0916 10:23:41.608449   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.624552   12653 ssh_runner.go:195] Run: cat /version.json
	I0916 10:23:41.624594   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.624666   12653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:23:41.624742   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.640830   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.641558   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.811513   12653 ssh_runner.go:195] Run: systemctl --version
	I0916 10:23:41.816090   12653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:41.820031   12653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:23:41.841966   12653 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:23:41.842040   12653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:41.867614   12653 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
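
Disabling the stray bridge/podman CNI configs is only a rename to a `.mk_disabled` suffix, so the runtime's CNI plugin stops loading them before kindnet is installed later. A sketch of that rename pass (directory as in the log; a simplification of the find/mv pipeline above):

// cnidisable.go - rename bridge/podman CNI configs out of the runtime's view.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, p := range matches {
		base := filepath.Base(p)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already disabled
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Println("disabled", p)
		}
	}
}
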
	I0916 10:23:41.867637   12653 start.go:495] detecting cgroup driver to use...
	I0916 10:23:41.867665   12653 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:41.867707   12653 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:23:41.878761   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:23:41.889209   12653 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:23:41.889272   12653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:23:41.901658   12653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:23:41.914376   12653 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:23:41.989625   12653 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:23:42.064036   12653 docker.go:233] disabling docker service ...
	I0916 10:23:42.064087   12653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:23:42.082378   12653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:23:42.092694   12653 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:23:42.163431   12653 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:23:42.235566   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:23:42.245920   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:42.260071   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:23:42.268844   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:23:42.277914   12653 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:23:42.277973   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:23:42.287090   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:42.295426   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:23:42.303716   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:42.312468   12653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:42.320449   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:23:42.328970   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:23:42.337386   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:23:42.345791   12653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:42.352855   12653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:23:42.359971   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:42.438798   12653 ssh_runner.go:195] Run: sudo systemctl restart containerd
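
Since the host reports the cgroupfs cgroup driver, the sed pass above forces `SystemdCgroup = false` in /etc/containerd/config.toml so runc and the kubelet agree on one driver; a mismatch here typically surfaces as crash-looping pods. The single edit, done in Go instead of sed (a sketch; the daemon-reload and restart that must follow are in the log):

// cgroupdriver.go - force SystemdCgroup = false in containerd's config.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
	// a `systemctl restart containerd` must follow, as the log shows
}
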
	I0916 10:23:42.548862   12653 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:23:42.548940   12653 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:23:42.552403   12653 start.go:563] Will wait 60s for crictl version
	I0916 10:23:42.552460   12653 ssh_runner.go:195] Run: which crictl
	I0916 10:23:42.555471   12653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:23:42.586679   12653 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:23:42.586752   12653 ssh_runner.go:195] Run: containerd --version
	I0916 10:23:42.608454   12653 ssh_runner.go:195] Run: containerd --version
	I0916 10:23:42.632432   12653 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:23:42.633762   12653 cli_runner.go:164] Run: docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:42.650400   12653 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:42.653892   12653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:23:42.664053   12653 kubeadm.go:883] updating cluster {Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:42.664154   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:42.664195   12653 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:42.695688   12653 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:23:42.695710   12653 containerd.go:534] Images already preloaded, skipping extraction
	I0916 10:23:42.695778   12653 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:42.727148   12653 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:23:42.727166   12653 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:23:42.727174   12653 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0916 10:23:42.727255   12653 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-191972 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:23:42.727302   12653 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:23:42.757474   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:42.757493   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:42.757502   12653 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:42.757520   12653 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-191972 NodeName:addons-191972 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:42.757633   12653 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-191972"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
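
minikube renders the multi-document kubeadm.yaml above from Go templates before copying it to /var/tmp/minikube. A toy rendering sketch in the same spirit; the template below is heavily cut down and illustrative only, not minikube's real template:

// kubeadmcfg.go - toy rendering of a kubeadm config from a Go template.
package main

import (
	"os"
	"text/template"
)

// Illustrative fragment of an InitConfiguration; not minikube's template.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "{{.Name}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	err := t.Execute(os.Stdout, struct {
		NodeIP, Name string
		Port         int
	}{NodeIP: "192.168.49.2", Name: "addons-191972", Port: 8443})
	if err != nil {
		panic(err)
	}
}
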
	I0916 10:23:42.757684   12653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:42.765604   12653 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:23:42.765672   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:42.773363   12653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:23:42.789280   12653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:42.805100   12653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0916 10:23:42.820420   12653 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:23:42.823264   12653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:23:42.832700   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:42.907069   12653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:42.919246   12653 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972 for IP: 192.168.49.2
	I0916 10:23:42.919266   12653 certs.go:194] generating shared ca certs ...
	I0916 10:23:42.919279   12653 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:42.919399   12653 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:23:43.054784   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt ...
	I0916 10:23:43.054815   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt: {Name:mkf05eaa3032985e939bd1a93aa36a6d50242974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.055008   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key ...
	I0916 10:23:43.055031   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key: {Name:mk4cf19316dad04ab708c5c17e172ec92fc35230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.055134   12653 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:23:43.268289   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt ...
	I0916 10:23:43.268318   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt: {Name:mk68da284b9ad8d396a1f11e7cfb94cc6f208c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.268510   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key ...
	I0916 10:23:43.268532   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key: {Name:mkdf8c5da2a6d70c9ece2277843ebe69f9105c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.268626   12653 certs.go:256] generating profile certs ...
	I0916 10:23:43.268694   12653 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key
	I0916 10:23:43.268720   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt with IP's: []
	I0916 10:23:43.341520   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt ...
	I0916 10:23:43.341551   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: {Name:mke3c2895145f9c692cb1e6451d9766499ccc877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.341738   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key ...
	I0916 10:23:43.341755   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key: {Name:mkd6237ae8ebf429452ae0c60cea457b1f9cff1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.341855   12653 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369
	I0916 10:23:43.341882   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 10:23:43.403750   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 ...
	I0916 10:23:43.403775   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369: {Name:mk72db26b8519849abdf811ed93be5caeac2267d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.403951   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369 ...
	I0916 10:23:43.403973   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369: {Name:mk4b11dab0a085e395344dc35616a0c16f298191 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.404065   12653 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt
	I0916 10:23:43.404155   12653 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key
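
Note the SAN list on the apiserver certificate: 10.96.0.1 is the first usable address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster `kubernetes` Service IP, so pods dialing the API through that Service still get a verifiable certificate. Deriving that address from the CIDR (a sketch):

// firstsvcip.go - derive the in-cluster kubernetes Service IP from the service CIDR.
package main

import (
	"fmt"
	"net"
)

func main() {
	_, cidr, err := net.ParseCIDR("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	ip := cidr.IP.To4()
	ip[3]++ // first usable host address in the range
	fmt.Println(ip) // 10.96.0.1, matching the SAN list above
}
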
	I0916 10:23:43.404230   12653 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key
	I0916 10:23:43.404250   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt with IP's: []
	I0916 10:23:43.488130   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt ...
	I0916 10:23:43.488160   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt: {Name:mk11d8f9c437e5586897185f4551df7594041471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.488342   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key ...
	I0916 10:23:43.488360   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key: {Name:mk18734ee357c50ce0ff509ffb1c7e42743fa1b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.488577   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:23:43.488617   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:23:43.488652   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:43.488682   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:23:43.489279   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:43.511557   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:23:43.532934   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:43.553377   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:43.575078   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:23:43.595868   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:43.616905   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:43.637839   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:43.658915   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:43.680485   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:23:43.696295   12653 ssh_runner.go:195] Run: openssl version
	I0916 10:23:43.701282   12653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:43.709681   12653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.712715   12653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.712762   12653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.718832   12653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
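
The b5213941.0 link name is OpenSSL's subject-name hash of minikubeCA.pem, the c_rehash convention that lets TLS clients find a CA by hashed filename under /etc/ssl/certs. Reproducing the link creation (paths as in the log; the hash itself is delegated to openssl):

// carehash.go - link a CA into /etc/ssl/certs under its OpenSSL subject hash.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", pem)
}
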
	I0916 10:23:43.727190   12653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:43.730247   12653 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:23:43.730290   12653 kubeadm.go:392] StartCluster: {Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:43.730356   12653 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:23:43.730405   12653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:23:43.761830   12653 cri.go:89] found id: ""
	I0916 10:23:43.761893   12653 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:43.770086   12653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:43.778465   12653 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:23:43.778522   12653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:43.786355   12653 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:43.786373   12653 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:43.786419   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:43.794471   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:43.794519   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:43.802487   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:43.810401   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:43.810451   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:43.817541   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:43.824799   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:43.824842   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:43.832032   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:43.839239   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:43.839298   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:23:43.847649   12653 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:23:43.880192   12653 kubeadm.go:310] W0916 10:23:43.879583    1109 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:43.880773   12653 kubeadm.go:310] W0916 10:23:43.880291    1109 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:43.896580   12653 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:23:43.944226   12653 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:23:52.227261   12653 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:52.227338   12653 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:52.227418   12653 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:23:52.227466   12653 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:23:52.227501   12653 kubeadm.go:310] OS: Linux
	I0916 10:23:52.227541   12653 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:23:52.227584   12653 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:23:52.227625   12653 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:23:52.227670   12653 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:23:52.227711   12653 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:23:52.227786   12653 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:23:52.227872   12653 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:23:52.227947   12653 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:23:52.227994   12653 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:23:52.228098   12653 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:52.228218   12653 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:52.228360   12653 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:23:52.228491   12653 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:52.230143   12653 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:52.230239   12653 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:52.230328   12653 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:52.230422   12653 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:52.230504   12653 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:52.230596   12653 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:52.230685   12653 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:52.230768   12653 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:52.230910   12653 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-191972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:52.230984   12653 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:52.231130   12653 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-191972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:52.231228   12653 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:52.231331   12653 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:52.231395   12653 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:52.231471   12653 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:52.231543   12653 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:52.231622   12653 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:52.231683   12653 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:52.231759   12653 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:52.231871   12653 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:52.231979   12653 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:52.232069   12653 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:52.233407   12653 out.go:235]   - Booting up control plane ...
	I0916 10:23:52.233500   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:52.233589   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:52.233654   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:52.233747   12653 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:52.233846   12653 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:52.233895   12653 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:52.234011   12653 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:52.234102   12653 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:52.234155   12653 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.63037ms
	I0916 10:23:52.234224   12653 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:52.234282   12653 kubeadm.go:310] [api-check] The API server is healthy after 4.501222011s
	I0916 10:23:52.234402   12653 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:52.234544   12653 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:52.234625   12653 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:52.234780   12653 kubeadm.go:310] [mark-control-plane] Marking the node addons-191972 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:52.234830   12653 kubeadm.go:310] [bootstrap-token] Using token: fe3fo6.40ynbll2pbwpp3it
	I0916 10:23:52.236918   12653 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:52.237043   12653 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:52.237118   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:52.237261   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:52.237418   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:52.237547   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:52.237659   12653 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:52.237791   12653 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:52.237856   12653 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:52.237898   12653 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:52.237904   12653 kubeadm.go:310] 
	I0916 10:23:52.237963   12653 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:52.237971   12653 kubeadm.go:310] 
	I0916 10:23:52.238040   12653 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:52.238046   12653 kubeadm.go:310] 
	I0916 10:23:52.238070   12653 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:52.238123   12653 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:52.238167   12653 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:52.238173   12653 kubeadm.go:310] 
	I0916 10:23:52.238218   12653 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:52.238223   12653 kubeadm.go:310] 
	I0916 10:23:52.238268   12653 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:52.238274   12653 kubeadm.go:310] 
	I0916 10:23:52.238329   12653 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:52.238418   12653 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:52.238507   12653 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:52.238515   12653 kubeadm.go:310] 
	I0916 10:23:52.238598   12653 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:52.238681   12653 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:52.238690   12653 kubeadm.go:310] 
	I0916 10:23:52.238801   12653 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fe3fo6.40ynbll2pbwpp3it \
	I0916 10:23:52.238908   12653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 10:23:52.238933   12653 kubeadm.go:310] 	--control-plane 
	I0916 10:23:52.238939   12653 kubeadm.go:310] 
	I0916 10:23:52.239012   12653 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:52.239020   12653 kubeadm.go:310] 
	I0916 10:23:52.239095   12653 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fe3fo6.40ynbll2pbwpp3it \
	I0916 10:23:52.239199   12653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
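
The --discovery-token-ca-cert-hash printed by kubeadm is a public-key pin: a SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA, which joining nodes verify before trusting anything the control plane serves. Recomputing it from ca.crt (a sketch; the cert path is the one used throughout this log):

// capin.go - recompute kubeadm's sha256 public-key pin from the cluster CA.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
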
	I0916 10:23:52.239210   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:52.239215   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:52.240733   12653 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:23:52.241980   12653 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:23:52.245609   12653 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:23:52.245625   12653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:23:52.261912   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:23:52.447057   12653 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:52.447144   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.447165   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-191972 minikube.k8s.io/updated_at=2024_09_16T10_23_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-191972 minikube.k8s.io/primary=true
	I0916 10:23:52.543497   12653 ops.go:34] apiserver oom_adj: -16
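
The -16 read back from /proc confirms the OOM-score protection on kube-apiserver: under memory pressure the kernel will prefer killing workload pods over the control plane. The same probe as a Go sketch:

// oomadj.go - read kube-apiserver's oom_adj, like the /proc probe above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "-o", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kube-apiserver pid %s oom_adj %s", pid, adj)
}
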
	I0916 10:23:52.543643   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:53.044491   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:53.543770   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:54.044061   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:54.544691   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:55.044249   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:55.543918   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:56.043679   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:56.543717   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:57.044619   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:57.107839   12653 kubeadm.go:1113] duration metric: took 4.660750668s to wait for elevateKubeSystemPrivileges
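
The burst of `kubectl get sa default` calls above is a 500ms poll: ServiceAccount creation is asynchronous after init, so minikube waits for the default ServiceAccount to exist before treating privilege elevation as done. A sketch of that wait loop (plain kubectl on PATH stands in for the pinned binary, and the 2-minute timeout is an assumption, not from the log):

// sawait.go - poll until the default ServiceAccount exists, as the log does.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // timeout is an assumption
	for time.Now().Before(deadline) {
		err := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Println("default ServiceAccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence above
	}
	panic("timed out waiting for the default ServiceAccount")
}
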
	I0916 10:23:57.107871   12653 kubeadm.go:394] duration metric: took 13.37758355s to StartCluster
	I0916 10:23:57.107890   12653 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:57.107998   12653 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:23:57.108383   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:57.108581   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:57.108610   12653 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:23:57.108666   12653 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:23:57.108789   12653 addons.go:69] Setting yakd=true in profile "addons-191972"
	I0916 10:23:57.108813   12653 addons.go:234] Setting addon yakd=true in "addons-191972"
	I0916 10:23:57.108830   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:57.108844   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.108885   12653 addons.go:69] Setting inspektor-gadget=true in profile "addons-191972"
	I0916 10:23:57.108900   12653 addons.go:234] Setting addon inspektor-gadget=true in "addons-191972"
	I0916 10:23:57.108928   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109000   12653 addons.go:69] Setting gcp-auth=true in profile "addons-191972"
	I0916 10:23:57.109025   12653 mustload.go:65] Loading cluster: addons-191972
	I0916 10:23:57.109143   12653 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-191972"
	I0916 10:23:57.109187   12653 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-191972"
	I0916 10:23:57.109185   12653 addons.go:69] Setting default-storageclass=true in profile "addons-191972"
	I0916 10:23:57.109211   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109225   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:57.109232   12653 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-191972"
	I0916 10:23:57.109216   12653 addons.go:69] Setting cloud-spanner=true in profile "addons-191972"
	I0916 10:23:57.109259   12653 addons.go:69] Setting storage-provisioner=true in profile "addons-191972"
	I0916 10:23:57.109265   12653 addons.go:234] Setting addon cloud-spanner=true in "addons-191972"
	I0916 10:23:57.109274   12653 addons.go:234] Setting addon storage-provisioner=true in "addons-191972"
	I0916 10:23:57.109308   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109323   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109407   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109485   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109507   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109547   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109684   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109757   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109825   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110167   12653 addons.go:69] Setting ingress-dns=true in profile "addons-191972"
	I0916 10:23:57.110372   12653 addons.go:234] Setting addon ingress-dns=true in "addons-191972"
	I0916 10:23:57.110546   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111202   12653 addons.go:69] Setting helm-tiller=true in profile "addons-191972"
	I0916 10:23:57.111255   12653 addons.go:234] Setting addon helm-tiller=true in "addons-191972"
	I0916 10:23:57.111282   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111445   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.111484   12653 addons.go:69] Setting ingress=true in profile "addons-191972"
	I0916 10:23:57.111498   12653 addons.go:234] Setting addon ingress=true in "addons-191972"
	I0916 10:23:57.111527   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111731   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110913   12653 addons.go:69] Setting metrics-server=true in profile "addons-191972"
	I0916 10:23:57.111983   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.111987   12653 addons.go:234] Setting addon metrics-server=true in "addons-191972"
	I0916 10:23:57.112171   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110926   12653 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-191972"
	I0916 10:23:57.113223   12653 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-191972"
	I0916 10:23:57.113258   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.113700   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.115817   12653 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:57.116675   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110938   12653 addons.go:69] Setting registry=true in profile "addons-191972"
	I0916 10:23:57.116963   12653 addons.go:234] Setting addon registry=true in "addons-191972"
	I0916 10:23:57.117093   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110938   12653 addons.go:69] Setting volcano=true in profile "addons-191972"
	I0916 10:23:57.117245   12653 addons.go:234] Setting addon volcano=true in "addons-191972"
	I0916 10:23:57.117313   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110949   12653 addons.go:69] Setting volumesnapshots=true in profile "addons-191972"
	I0916 10:23:57.117350   12653 addons.go:234] Setting addon volumesnapshots=true in "addons-191972"
	I0916 10:23:57.117397   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.117799   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.117919   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.118954   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:57.110924   12653 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-191972"
	I0916 10:23:57.120855   12653 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-191972"
	I0916 10:23:57.121186   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.148826   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.156121   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:57.158094   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:23:57.160078   12653 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:57.160230   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:57.163394   12653 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:57.163405   12653 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:57.163428   12653 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:57.163491   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.163933   12653 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:57.163952   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:23:57.163999   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.166339   12653 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:57.166352   12653 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:57.166505   12653 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:57.166525   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:57.166591   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176509   12653 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:57.176539   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:57.176597   12653 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:57.176613   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:57.176614   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176667   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176871   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.184510   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:57.184923   12653 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:23:57.187620   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:57.187908   12653 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:57.187925   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:23:57.188005   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.190192   12653 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:57.190888   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:57.191984   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:57.192004   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:57.192062   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.192462   12653 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-191972"
	I0916 10:23:57.192519   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.193186   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.195485   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:57.196395   12653 addons.go:234] Setting addon default-storageclass=true in "addons-191972"
	I0916 10:23:57.196441   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.197033   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.200024   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:57.200756   12653 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:57.202388   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:57.202409   12653 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:57.202572   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.204739   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:57.206967   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:57.217725   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:57.217900   12653 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:57.219581   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:57.219714   12653 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:57.219798   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.219620   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:57.220511   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:57.221727   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.235796   12653 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:57.237579   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:57.239326   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:57.239350   12653 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:57.239411   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.239514   12653 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:57.241480   12653 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:57.241502   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:57.241555   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.243883   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.255850   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.256610   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.261965   12653 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 10:23:57.263559   12653 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 10:23:57.265255   12653 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 10:23:57.266412   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.267838   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.268005   12653 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:57.268022   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 10:23:57.268074   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.269050   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.276483   12653 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:57.276507   12653 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:57.276573   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.283025   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.284257   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:23:57.288880   12653 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:57.290776   12653 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:57.292419   12653 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:57.292444   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:57.292510   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.295145   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.295780   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.297628   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.298120   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.300416   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.306147   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.311231   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.314549   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	W0916 10:23:57.324739   12653 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 10:23:57.324769   12653 retry.go:31] will retry after 374.435778ms: ssh: handshake failed: EOF
	W0916 10:23:57.325602   12653 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 10:23:57.325619   12653 retry.go:31] will retry after 150.651165ms: ssh: handshake failed: EOF
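Note: the two handshake EOFs above are transient dial failures, most likely because sshd inside the freshly created container was not yet accepting connections; sshutil logs them as warnings and schedules short randomized retries (150ms and 374ms here), after which every subsequent dial in this run succeeds.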
	I0916 10:23:57.330682   12653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:57.629690   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:57.729822   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:57.730227   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:57.742355   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:57.824974   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:57.842831   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:57.842917   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:57.843332   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:57.921972   12653 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:57.922058   12653 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:57.922011   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:57.922034   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:57.922195   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:57.929874   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:57.929901   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:57.941141   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:57.941166   12653 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:23:58.138273   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:58.138369   12653 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:58.222261   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:58.222352   12653 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:58.229572   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:58.229660   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:58.232627   12653 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:58.232698   12653 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:58.322393   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:58.322420   12653 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:58.339998   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:58.435282   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:58.435313   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:58.435591   12653 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.15128486s)
	I0916 10:23:58.435618   12653 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
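Note: the sed pipeline that just completed rewrites the coredns ConfigMap in place, inserting a log directive before the errors plugin and splicing a hosts block ahead of forward, so cluster workloads can resolve host.minikube.internal to 192.168.49.1 (the Docker network gateway on the docker driver). The spliced-in Corefile fragment is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }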
	I0916 10:23:58.436958   12653 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.1062474s)
	I0916 10:23:58.437947   12653 node_ready.go:35] waiting up to 6m0s for node "addons-191972" to be "Ready" ...
	I0916 10:23:58.441471   12653 node_ready.go:49] node "addons-191972" has status "Ready":"True"
	I0916 10:23:58.441502   12653 node_ready.go:38] duration metric: took 3.529013ms for node "addons-191972" to be "Ready" ...
	I0916 10:23:58.441514   12653 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:23:58.442873   12653 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:58.442897   12653 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:58.534045   12653 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:58.540468   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:58.540496   12653 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:58.642810   12653 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:58.642885   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:58.728521   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:58.728554   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:58.840472   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:58.921026   12653 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:58.921059   12653 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:58.936525   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:58.936552   12653 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:58.939212   12653 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-191972" context rescaled to 1 replicas
	I0916 10:23:59.131614   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:59.224079   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:59.224104   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:59.230203   12653 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:59.230238   12653 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:59.423686   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:59.430144   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:59.430176   12653 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:59.433784   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:59.433810   12653 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:59.542608   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:59.542635   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:59.630644   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:59.630734   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:59.840282   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:59.927613   12653 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:59.927705   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:24:00.030859   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:24:00.030936   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:24:00.034479   12653 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:24:00.034549   12653 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:24:00.038488   12653 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-2l862" not found
	I0916 10:24:00.038522   12653 pod_ready.go:82] duration metric: took 1.504385632s for pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace to be "Ready" ...
	E0916 10:24:00.038535   12653 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-2l862" not found
	I0916 10:24:00.038552   12653 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace to be "Ready" ...
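Note: the not-found error above is expected rather than a failure. The coredns deployment was rescaled from 2 replicas to 1 at 10:23:58 (the kapi.go:214 line), which evidently deleted coredns-7c65d6cfc9-2l862 mid-wait; the waiter records the error, skips that pod, and continues with the surviving replica, coredns-7c65d6cfc9-9rccl.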
	I0916 10:24:00.333635   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:24:00.339910   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:24:00.339994   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:24:00.627234   12653 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:24:00.627262   12653 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:24:00.929780   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:24:00.929809   12653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:24:01.128973   12653 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:24:01.129062   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:24:01.334031   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:24:01.334116   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:24:01.525220   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:24:02.022039   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:24:02.022114   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:24:02.136463   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:02.532736   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:24:02.532829   12653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:24:02.738986   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:24:04.426813   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:24:04.426903   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:24:04.456284   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:24:04.624938   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:04.638370   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.008571899s)
	I0916 10:24:04.638414   12653 addons.go:475] Verifying addon ingress=true in "addons-191972"
	I0916 10:24:04.638488   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.908226437s)
	I0916 10:24:04.638570   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.908717103s)
	I0916 10:24:04.638623   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.896188028s)
	I0916 10:24:04.638699   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.81369606s)
	I0916 10:24:04.638718   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.795359026s)
	I0916 10:24:04.638742   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.716592394s)
	I0916 10:24:04.641681   12653 out.go:177] * Verifying ingress addon...
	I0916 10:24:04.644857   12653 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0916 10:24:04.722084   12653 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
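Note: the "object has been modified" error above is the API server's optimistic-concurrency response (HTTP 409 Conflict), returned because the update to the local-path StorageClass carried a stale resourceVersion while something else was mutating the object. The standard remedy is to re-read and re-apply; a minimal client-go sketch of that pattern, with an illustrative helper name (this is not minikube's actual code):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markNonDefault clears the default-class annotation on a StorageClass,
    // re-reading the object and retrying whenever the server returns a
    // 409 Conflict, so a concurrent writer cannot wedge the update.
    func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err // a Conflict here triggers another Get+Update round
        })
    }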
	I0916 10:24:04.723574   12653 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:24:04.723598   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.841083   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:24:04.932849   12653 addons.go:234] Setting addon gcp-auth=true in "addons-191972"
	I0916 10:24:04.932903   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:24:04.933372   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:24:04.957393   12653 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:24:04.957464   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:24:04.975728   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:24:05.150342   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.650366   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.149809   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.649391   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.834167   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.494119031s)
	I0916 10:24:06.834259   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.993750099s)
	I0916 10:24:06.834355   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.702687859s)
	I0916 10:24:06.834379   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.410662864s)
	I0916 10:24:06.834381   12653 addons.go:475] Verifying addon metrics-server=true in "addons-191972"
	I0916 10:24:06.834394   12653 addons.go:475] Verifying addon registry=true in "addons-191972"
	I0916 10:24:06.834447   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.994082306s)
	I0916 10:24:06.834595   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.500877662s)
	W0916 10:24:06.834635   12653 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:24:06.834660   12653 retry.go:31] will retry after 180.492463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
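Note: the "no matches for kind VolumeSnapshotClass" failure is the usual CRD-establishment race: the CRDs and the custom resources that depend on them were sent in a single kubectl apply, and the REST mapping for the new kinds was not yet registered when the VolumeSnapshotClass object was submitted. The log shows the recovery below: the same bundle is re-applied with --force at 10:24:07 and completes at 10:24:08, by which time the CRDs created on the first pass are established. A client that wants to avoid the retry can wait for the Established condition before creating custom resources; a hedged sketch using the apiextensions clientset (the helper name and polling interval are illustrative):

    package main

    import (
        "context"
        "time"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitForCRD polls until the named CRD reports Established=True, at
    // which point objects of its kinds can be applied without the
    // "resource mapping not found" error seen above.
    func waitForCRD(ctx context.Context, cs apiextensionsclient.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // CRD not visible yet; keep polling
                }
                for _, cond := range crd.Status.Conditions {
                    if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }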
	I0916 10:24:06.834694   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.309367322s)
	I0916 10:24:06.836029   12653 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-191972 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:24:06.836032   12653 out.go:177] * Verifying registry addon...
	I0916 10:24:06.838577   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:24:06.842659   12653 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:24:06.842681   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.016329   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:24:07.122253   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:07.229433   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.346049   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.428384   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.689342475s)
	I0916 10:24:07.428423   12653 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-191972"
	I0916 10:24:07.428557   12653 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.471115449s)
	I0916 10:24:07.430137   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:24:07.430140   12653 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:24:07.432142   12653 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:24:07.433350   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:24:07.433452   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:24:07.433472   12653 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:24:07.446890   12653 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:24:07.446929   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.523198   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:24:07.523247   12653 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:24:07.543809   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:07.543877   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:24:07.627288   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:07.649744   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.842799   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.943700   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.149515   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.343117   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.438263   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.651360   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.739263   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.722876496s)
	I0916 10:24:08.739377   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.111993041s)
	I0916 10:24:08.740565   12653 addons.go:475] Verifying addon gcp-auth=true in "addons-191972"
	I0916 10:24:08.742658   12653 out.go:177] * Verifying gcp-auth addon...
	I0916 10:24:08.744959   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:24:08.752275   12653 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:08.842486   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.937942   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.148485   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.342745   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.444884   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.544117   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:09.649057   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.850158   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.951607   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.149384   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.342403   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.437953   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.648926   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.842555   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.938628   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.149265   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.341824   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.438269   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.544664   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:11.649663   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.842706   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.938382   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.149747   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.341485   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.438115   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.649444   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.842417   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.937808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.149247   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.342184   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.443397   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.544742   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:13.649342   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.842433   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.938156   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.148884   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.342230   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.437378   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.648929   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.841404   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.938373   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.148947   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.342062   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:15.437442   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.544833   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:15.649729   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.875330   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.063181   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.148410   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.342704   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.437759   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.649599   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.842196   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.937322   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.148973   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.342240   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.438331   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.649287   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.842346   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.937786   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.044459   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:18.148462   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.342098   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.438245   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.650618   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.842115   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.937393   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.148210   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.342331   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.437753   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.649206   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.841659   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.937929   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.149095   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.341559   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.437389   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.543697   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:20.649389   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.841724   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.939911   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.148803   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.341867   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.437743   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.649220   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.841636   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.937733   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.148853   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.341623   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.438291   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.544155   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:22.648789   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.842117   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.937569   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.148605   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.342228   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.437946   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.648725   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.848611   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.937702   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.148830   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.341472   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.437746   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.648857   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.841524   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.937579   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.043875   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:25.148986   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.341729   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.438614   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.648859   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.842571   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.937660   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.148067   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.342525   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.442495   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.649368   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.841986   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.937210   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.044290   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:27.148266   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.341925   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.437369   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.648710   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.842271   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.937289   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.149389   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.341712   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.437988   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.649507   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.841935   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.937651   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.148305   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.341758   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.437230   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.544648   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:29.648789   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.842453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.937780   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.149144   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.341971   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.436935   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.648826   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.842241   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.937301   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.148532   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.342364   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.438028   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.649021   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.842529   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.938084   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.044452   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:32.148477   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.342165   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.437629   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.649007   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.841446   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.937583   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.148965   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.341801   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.437144   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.649484   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.842344   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.937348   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.148522   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.342404   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.438126   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.543640   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:34.649404   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.842417   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.937940   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.149191   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.341955   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.437296   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.649499   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.841951   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.937835   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.148878   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.342396   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.437451   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.648935   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.841429   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.937515   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.043652   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:37.148879   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.341650   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.438917   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.648863   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.843665   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.937755   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.148476   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.342129   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.437617   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.648850   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.842096   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.937210   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.044295   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:39.148546   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.342070   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.437434   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.649394   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.850992   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.937068   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.148412   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.342026   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.438818   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.648424   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.842673   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.937959   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.149077   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.341573   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.437823   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.544866   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:41.649385   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.842400   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.942736   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.148726   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.342124   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.438550   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.649404   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.841927   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.937808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.149523   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.341957   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.437318   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.545247   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:43.648618   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.842970   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.938236   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.149170   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.342180   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.437399   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.649533   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.842942   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.937846   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.149581   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.342185   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.437873   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.649109   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.842031   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.937050   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.043865   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:46.149131   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.342272   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.437555   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.649645   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.850195   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.951731   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.044952   12653 pod_ready.go:93] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.044977   12653 pod_ready.go:82] duration metric: took 47.006412913s for pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.044991   12653 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.048830   12653 pod_ready.go:93] pod "etcd-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.048847   12653 pod_ready.go:82] duration metric: took 3.848159ms for pod "etcd-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.048861   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.052536   12653 pod_ready.go:93] pod "kube-apiserver-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.052558   12653 pod_ready.go:82] duration metric: took 3.691187ms for pod "kube-apiserver-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.052566   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.056167   12653 pod_ready.go:93] pod "kube-controller-manager-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.056192   12653 pod_ready.go:82] duration metric: took 3.620465ms for pod "kube-controller-manager-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.056201   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fnr7f" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.060021   12653 pod_ready.go:93] pod "kube-proxy-fnr7f" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.060038   12653 pod_ready.go:82] duration metric: took 3.830746ms for pod "kube-proxy-fnr7f" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.060046   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.149672   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.342533   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.437808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.441161   12653 pod_ready.go:93] pod "kube-scheduler-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.441181   12653 pod_ready.go:82] duration metric: took 381.129532ms for pod "kube-scheduler-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.441188   12653 pod_ready.go:39] duration metric: took 48.999654984s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
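
The pod_ready waits above poll each pod's Ready condition on a roughly two-second interval until it reports "True". Below is a minimal client-go sketch of that kind of check; it is illustrative only (not minikube's actual pod_ready.go), and the kubeconfig path, namespace, and pod name are assumptions taken from this log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a standard kubeconfig; minikube resolves this per profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-7c65d6cfc9-9rccl", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // matches the ~2s cadence visible above
	}
}
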
	I0916 10:24:47.441205   12653 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:24:47.441254   12653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:24:47.453909   12653 api_server.go:72] duration metric: took 50.345260117s to wait for apiserver process to appear ...
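
The apiserver process check above is a plain pgrep run over SSH by ssh_runner. A hedged local equivalent with os/exec follows (the pattern string is copied from the log line; inside a real minikube node this goes through the same sudo/SSH plumbing shown above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -f matches against the full command line, -x requires an exact
	// match of that line, and -n picks the newest matching process.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s", out)
}
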
	I0916 10:24:47.453935   12653 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:24:47.453960   12653 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:24:47.458673   12653 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:24:47.459648   12653 api_server.go:141] control plane version: v1.31.1
	I0916 10:24:47.459673   12653 api_server.go:131] duration metric: took 5.729621ms to wait for apiserver health ...
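
The healthz probe above is an HTTPS GET against the apiserver that counts as healthy on a 200 response with body "ok", exactly as logged. A self-contained sketch; certificate verification is skipped here purely to keep the example short, whereas minikube trusts the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: a real client should verify the apiserver
		// certificate against the cluster CA rather than skip it.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}
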
	I0916 10:24:47.459683   12653 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:24:47.648237   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.648583   12653 system_pods.go:59] 19 kube-system pods found
	I0916 10:24:47.648620   12653 system_pods.go:61] "coredns-7c65d6cfc9-9rccl" [f2ffddc5-3995-4d5a-8059-297b3859f9c5] Running
	I0916 10:24:47.648634   12653 system_pods.go:61] "csi-hostpath-attacher-0" [07b91be6-2f2b-441c-80ee-f832e9ee2e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:24:47.648642   12653 system_pods.go:61] "csi-hostpath-resizer-0" [311c306b-accd-4242-8415-81199a4ce054] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:24:47.648653   12653 system_pods.go:61] "csi-hostpathplugin-qdnbn" [8f5408a2-a7eb-4f76-8ce0-b885fb4b47e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:24:47.648667   12653 system_pods.go:61] "etcd-addons-191972" [81af20b7-9b19-4723-9b92-0ded3d775cd3] Running
	I0916 10:24:47.648673   12653 system_pods.go:61] "kindnet-rxp8k" [02b143c0-bbb4-4f94-8448-9c3c4f248a87] Running
	I0916 10:24:47.648678   12653 system_pods.go:61] "kube-apiserver-addons-191972" [1aabf917-f381-4e69-8524-954958c99b7e] Running
	I0916 10:24:47.648684   12653 system_pods.go:61] "kube-controller-manager-addons-191972" [ee796e67-bd06-4d93-9d20-aabcbb395ba2] Running
	I0916 10:24:47.648690   12653 system_pods.go:61] "kube-ingress-dns-minikube" [0f28fa0b-84f1-4215-aa41-4596ab4cef8b] Running
	I0916 10:24:47.648696   12653 system_pods.go:61] "kube-proxy-fnr7f" [a9e53f94-30ad-4178-b9e2-3ba4354a5adf] Running
	I0916 10:24:47.648700   12653 system_pods.go:61] "kube-scheduler-addons-191972" [32bbb72d-e291-4c84-8709-6066cf10b0cf] Running
	I0916 10:24:47.648709   12653 system_pods.go:61] "metrics-server-84c5f94fbc-s7654" [14e280ea-8ba8-4805-844c-aeff8fb18ce0] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:24:47.648719   12653 system_pods.go:61] "nvidia-device-plugin-daemonset-vpb85" [14ca6c72-b73b-4254-910a-0b876ca73f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:24:47.648732   12653 system_pods.go:61] "registry-66c9cd494c-vsbgv" [bd70fcec-e032-4dbd-902c-a139ac179bbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:24:47.648740   12653 system_pods.go:61] "registry-proxy-6vsnj" [05d6014b-9706-4d7a-a816-dbc7f557cd15] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:24:47.648749   12653 system_pods.go:61] "snapshot-controller-56fcc65765-4g9w6" [ad55cf42-58df-4d10-aae8-7626a86e7e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:47.648760   12653 system_pods.go:61] "snapshot-controller-56fcc65765-htkmc" [70a5c810-f514-4b23-b3a3-7a4cc46c35e2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:47.648766   12653 system_pods.go:61] "storage-provisioner" [ca9a25b7-8324-4ea1-a525-24f8c308baea] Running
	I0916 10:24:47.648777   12653 system_pods.go:61] "tiller-deploy-b48cc5f79-ddkxz" [dfe534c4-9e29-4907-b8cc-1dd12fc52f45] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:24:47.648789   12653 system_pods.go:74] duration metric: took 189.097544ms to wait for pod list to return data ...
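
The 19-pod inventory above comes from a single List call against kube-system followed by a per-pod look at phase and container readiness. A reduced sketch of that listing (same assumed kubeconfig setup as the earlier sketch; the output format only approximates system_pods.go):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Phase alone separates the Running entries from the Pending ones above.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}
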
	I0916 10:24:47.648801   12653 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:24:47.841018   12653 default_sa.go:45] found service account: "default"
	I0916 10:24:47.841043   12653 default_sa.go:55] duration metric: took 192.233696ms for default service account to be created ...
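
The default service account is created asynchronously after its namespace, which is why default_sa.go waits for it rather than asserting it. A sketch of such a lookup loop (client setup assumed as before; the half-second retry interval is an assumption, not minikube's actual backoff):

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for {
		sa, err := client.CoreV1().ServiceAccounts("default").Get(
			context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Printf("found service account: %q\n", sa.Name)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
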
	I0916 10:24:47.841053   12653 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:24:47.841394   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.937402   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.049475   12653 system_pods.go:86] 19 kube-system pods found
	I0916 10:24:48.049509   12653 system_pods.go:89] "coredns-7c65d6cfc9-9rccl" [f2ffddc5-3995-4d5a-8059-297b3859f9c5] Running
	I0916 10:24:48.049523   12653 system_pods.go:89] "csi-hostpath-attacher-0" [07b91be6-2f2b-441c-80ee-f832e9ee2e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:24:48.049533   12653 system_pods.go:89] "csi-hostpath-resizer-0" [311c306b-accd-4242-8415-81199a4ce054] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:24:48.049541   12653 system_pods.go:89] "csi-hostpathplugin-qdnbn" [8f5408a2-a7eb-4f76-8ce0-b885fb4b47e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:24:48.049546   12653 system_pods.go:89] "etcd-addons-191972" [81af20b7-9b19-4723-9b92-0ded3d775cd3] Running
	I0916 10:24:48.049550   12653 system_pods.go:89] "kindnet-rxp8k" [02b143c0-bbb4-4f94-8448-9c3c4f248a87] Running
	I0916 10:24:48.049554   12653 system_pods.go:89] "kube-apiserver-addons-191972" [1aabf917-f381-4e69-8524-954958c99b7e] Running
	I0916 10:24:48.049560   12653 system_pods.go:89] "kube-controller-manager-addons-191972" [ee796e67-bd06-4d93-9d20-aabcbb395ba2] Running
	I0916 10:24:48.049569   12653 system_pods.go:89] "kube-ingress-dns-minikube" [0f28fa0b-84f1-4215-aa41-4596ab4cef8b] Running
	I0916 10:24:48.049572   12653 system_pods.go:89] "kube-proxy-fnr7f" [a9e53f94-30ad-4178-b9e2-3ba4354a5adf] Running
	I0916 10:24:48.049576   12653 system_pods.go:89] "kube-scheduler-addons-191972" [32bbb72d-e291-4c84-8709-6066cf10b0cf] Running
	I0916 10:24:48.049579   12653 system_pods.go:89] "metrics-server-84c5f94fbc-s7654" [14e280ea-8ba8-4805-844c-aeff8fb18ce0] Running
	I0916 10:24:48.049587   12653 system_pods.go:89] "nvidia-device-plugin-daemonset-vpb85" [14ca6c72-b73b-4254-910a-0b876ca73f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:24:48.049595   12653 system_pods.go:89] "registry-66c9cd494c-vsbgv" [bd70fcec-e032-4dbd-902c-a139ac179bbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:24:48.049600   12653 system_pods.go:89] "registry-proxy-6vsnj" [05d6014b-9706-4d7a-a816-dbc7f557cd15] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:24:48.049605   12653 system_pods.go:89] "snapshot-controller-56fcc65765-4g9w6" [ad55cf42-58df-4d10-aae8-7626a86e7e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:48.049613   12653 system_pods.go:89] "snapshot-controller-56fcc65765-htkmc" [70a5c810-f514-4b23-b3a3-7a4cc46c35e2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:48.049618   12653 system_pods.go:89] "storage-provisioner" [ca9a25b7-8324-4ea1-a525-24f8c308baea] Running
	I0916 10:24:48.049625   12653 system_pods.go:89] "tiller-deploy-b48cc5f79-ddkxz" [dfe534c4-9e29-4907-b8cc-1dd12fc52f45] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:24:48.049634   12653 system_pods.go:126] duration metric: took 208.573497ms to wait for k8s-apps to be running ...
	I0916 10:24:48.049644   12653 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:24:48.049682   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:24:48.060846   12653 system_svc.go:56] duration metric: took 11.19263ms WaitForService to wait for kubelet
	I0916 10:24:48.060871   12653 kubeadm.go:582] duration metric: took 50.952228588s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
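
The kubelet check just above goes through systemd rather than the API server: with is-active --quiet, the exit code alone carries the answer. An os/exec sketch, simplified to the single kubelet unit (the extra "service" token in the logged command is left as-is above and not reproduced here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means the unit is active; --quiet suppresses stdout.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
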
	I0916 10:24:48.060890   12653 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:24:48.148219   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.242671   12653 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:24:48.242705   12653 node_conditions.go:123] node cpu capacity is 8
	I0916 10:24:48.242718   12653 node_conditions.go:105] duration metric: took 181.823571ms to run NodePressure ...
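
The NodePressure step reads capacity straight off the Node object; the ephemeral-storage and CPU figures logged above are Status.Capacity values. A sketch of that read (node name taken from this log; client setup assumed as in the earlier sketches):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "addons-191972", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node storage ephemeral capacity is %s\n", node.Status.Capacity.StorageEphemeral())
	fmt.Printf("node cpu capacity is %s\n", node.Status.Capacity.Cpu())
}
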
	I0916 10:24:48.242730   12653 start.go:241] waiting for startup goroutines ...
	I0916 10:24:48.342074   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.437253   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.650425   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.850814   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.937328   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.149694   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.342609   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.438289   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.649584   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.842847   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.936933   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.149348   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.342164   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.438163   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.649197   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.853453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.938034   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.148940   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.341925   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.437207   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.649501   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.841516   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.937843   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.148973   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.341463   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.437548   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.649904   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.842395   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.938876   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.150346   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.342226   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.437852   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.650214   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.841999   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.938041   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.149543   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.342470   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.438196   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.649301   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.842219   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.937405   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.148757   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.342352   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.437453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.649467   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.842884   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.938335   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.149527   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.342461   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.438207   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.649107   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.841744   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.938316   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.150214   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.342941   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.438321   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.650060   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.841776   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.937801   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.148724   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.342609   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.437714   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.648506   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.842214   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.937202   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.149022   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.341924   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.437205   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.649919   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.842721   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.943895   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.148461   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.342965   12653 kapi.go:107] duration metric: took 53.504381408s to wait for kubernetes.io/minikube-addons=registry ...
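
Each kapi.go line above appears to be one iteration of a label-selector List in kube-system, repeated until every matching pod leaves Pending (the logged "Pending" is the pod phase, as the registry wait completing here suggests). A reduced sketch of a single iteration (selector copied from the log; client setup assumed as before):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "kubernetes.io/minikube-addons=registry",
	})
	if err != nil {
		panic(err)
	}
	allRunning := len(pods.Items) > 0
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			allRunning = false // still waiting, as in the log lines above
		}
	}
	fmt.Println("all registry pods Running:", allRunning)
}
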
	I0916 10:25:00.438324   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.649093   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... the same two "waiting for pod" messages for csi-hostpath-driver and ingress-nginx repeat on a ~500ms poll, state Pending, until 10:25:27 ...]
	I0916 10:25:27.648728   12653 kapi.go:107] duration metric: took 1m23.003864669s to wait for app.kubernetes.io/name=ingress-nginx ...
	[... "waiting for pod" messages for csi-hostpath-driver repeat on a ~500ms poll, state Pending, until 10:25:31 ...]
	I0916 10:25:31.437781   12653 kapi.go:107] duration metric: took 1m24.004430138s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
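The repeated kapi.go:96 lines above come from a simple label-selector poll: list the pods matching the selector every ~500ms until they are all Running, then report the elapsed time. The following is a minimal sketch of that pattern using client-go, not minikube's actual kapi.go implementation; the function name and package are illustrative.

	package kapi

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls pods matching selector in ns until all are
	// Running, printing a "waiting for pod" line each round, like kapi.go:96.
	func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		start := time.Now()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			running := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if running {
				// matches the kapi.go:107 "duration metric" lines above
				fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // overall deadline, e.g. the test's 6m budget
			case <-time.After(500 * time.Millisecond):
			}
		}
	}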
	I0916 10:26:53.748019   12653 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:26:53.748042   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... "waiting for pod" messages for gcp-auth repeat on a ~500ms poll, state Pending, until 10:27:29 ...]
	I0916 10:27:29.748597   12653 kapi.go:107] duration metric: took 3m21.003635946s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:27:29.750701   12653 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-191972 cluster.
	I0916 10:27:29.752412   12653 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:27:29.754028   12653 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
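For reference, the two remediations described in the messages above as commands. This is a sketch: "mypod" is a placeholder, and the label value "true" is an assumption to verify against the gcp-auth addon docs; the --refresh flag is the one the message itself names.

	# opt a single pod out of credential mounting
	kubectl label pod mypod gcp-auth-skip-secret=true
	# re-mount credentials into pods that existed before the addon came up
	minikube addons enable gcp-auth --refresh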
	I0916 10:27:29.756074   12653 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner-rancher, volcano, helm-tiller, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0916 10:27:29.757930   12653 addons.go:510] duration metric: took 3m32.649258168s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner-rancher volcano helm-tiller metrics-server inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0916 10:27:29.758012   12653 start.go:246] waiting for cluster config update ...
	I0916 10:27:29.758039   12653 start.go:255] writing updated cluster config ...
	I0916 10:27:29.758383   12653 ssh_runner.go:195] Run: rm -f paused
	I0916 10:27:29.765351   12653 out.go:177] * Done! kubectl is now configured to use "addons-191972" cluster and "default" namespace by default
	E0916 10:27:29.767004   12653 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
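An "exec format error" from fork/exec usually means the binary at that path does not match the host architecture (amd64, per the node labels below) or is truncated. A quick check, assuming shell access to the CI host:

	file /usr/local/bin/kubectl   # expect: ELF 64-bit LSB executable, x86-64
	uname -m                      # should print x86_64 on this runner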
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	85bcbbfdfc074       195d612ae7722       2 minutes ago       Exited              gadget                                   6                   d4feba9de8c25       gadget-rwwbs
	cfade64badb92       db2fc13d44d50       5 minutes ago       Running             gcp-auth                                 0                   99d0fe27850b3       gcp-auth-89d5ffd79-6r2td
	df81f1fc28725       a876393c9504b       6 minutes ago       Running             admission                                0                   0aa4b1d0acb5a       volcano-admission-77d7d48b68-rcfsk
	9dd4a83ba6d70       6041e92ec449f       6 minutes ago       Running             volcano-scheduler                        1                   9564c1c96bcee       volcano-scheduler-576bc46687-jtz7f
	72101e37ab665       738351fd438f0       6 minutes ago       Running             csi-snapshotter                          0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	da8f6a34306e1       931dbfd16f87c       6 minutes ago       Running             csi-provisioner                          0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	1649420a66573       e899260153aed       7 minutes ago       Running             liveness-probe                           0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	e0e474b6d95e5       e255e073c508c       7 minutes ago       Running             hostpath                                 0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	d5fc898fd874b       a80c8fd6e5229       7 minutes ago       Running             controller                               0                   30db636a12234       ingress-nginx-controller-bc57996ff-lpb7q
	06d43e898075b       88ef14a257f42       7 minutes ago       Running             node-driver-registrar                    0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	39c5183f27011       ce263a8653f9c       7 minutes ago       Exited              patch                                    0                   589d98ccee909       ingress-nginx-admission-patch-8f8nz
	a8bb0086c52b5       6041e92ec449f       7 minutes ago       Exited              volcano-scheduler                        0                   9564c1c96bcee       volcano-scheduler-576bc46687-jtz7f
	c87d3f3268f2d       159abe21a6880       7 minutes ago       Running             nvidia-device-plugin-ctr                 0                   4d5298be39c95       nvidia-device-plugin-daemonset-vpb85
	ddf31d8b68bc1       a876393c9504b       7 minutes ago       Exited              main                                     0                   b49978f431ab4       volcano-admission-init-57gk4
	06cf11b7a83f9       ce263a8653f9c       7 minutes ago       Exited              create                                   0                   6301c91177942       ingress-nginx-admission-create-5rjsx
	1cd468b4437bd       a1ed5895ba635       7 minutes ago       Running             csi-external-health-monitor-controller   0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	79266075c79ff       59cbb42146a37       7 minutes ago       Running             csi-attacher                             0                   a4c401b363464       csi-hostpath-attacher-0
	c65d9de60c2d0       aa61ee9c70bc4       7 minutes ago       Running             volume-snapshot-controller               0                   dba5883c9dc9b       snapshot-controller-56fcc65765-4g9w6
	0c025c1b7dd4c       19a639eda60f0       7 minutes ago       Running             csi-resizer                              0                   176615116e8de       csi-hostpath-resizer-0
	c7d7b6bb58927       96e410111f023       7 minutes ago       Running             volcano-controllers                      0                   84cb34271a61b       volcano-controllers-56675bb4d5-hdpdb
	b2d8c858e6464       75ef5b734af47       7 minutes ago       Running             registry                                 0                   67e2abd040a93       registry-66c9cd494c-vsbgv
	6819af68287c4       aa61ee9c70bc4       7 minutes ago       Running             volume-snapshot-controller               0                   bb404cbffba4e       snapshot-controller-56fcc65765-htkmc
	89cfd63e70df2       3f39089e90831       7 minutes ago       Running             tiller                                   0                   79bab02e559b8       tiller-deploy-b48cc5f79-ddkxz
	4c991c61b822b       c7e3a3eeaf5ed       7 minutes ago       Running             yakd                                     0                   b4f1dc70e1041       yakd-dashboard-67d98fc6b-gsg67
	7aa17b075bc66       38c5e506fa551       7 minutes ago       Running             registry-proxy                           0                   a76629f8ed521       registry-proxy-6vsnj
	576d6c9483015       48d9cfaaf3904       7 minutes ago       Running             metrics-server                           0                   debbe4f662687       metrics-server-84c5f94fbc-s7654
	3c2ba113f3a92       c69fa2e9cbf5f       7 minutes ago       Running             coredns                                  0                   e557eec597dbb       coredns-7c65d6cfc9-9rccl
	4b7eae4464585       5d78bb8f226e8       7 minutes ago       Running             cloud-spanner-emulator                   0                   d087511b13dbf       cloud-spanner-emulator-769b77f747-8tnxp
	74825d98cba88       e16d1e3a10667       7 minutes ago       Running             local-path-provisioner                   0                   1e611781a41cb       local-path-provisioner-86d989889c-w6mf9
	dfe8c0b03e5c3       30dd67412fdea       8 minutes ago       Running             minikube-ingress-dns                     0                   6682d7fdc0949       kube-ingress-dns-minikube
	62a4b8c25074d       6e38f40d628db       8 minutes ago       Running             storage-provisioner                      0                   54247c11bac23       storage-provisioner
	4c4482bfa98cf       12968670680f4       8 minutes ago       Running             kindnet-cni                              0                   48c4106711b6e       kindnet-rxp8k
	d9d3353287790       60c005f310ff3       8 minutes ago       Running             kube-proxy                               0                   b70e27ed4bc15       kube-proxy-fnr7f
	6e4dbd39a8ef5       175ffd71cce3d       8 minutes ago       Running             kube-controller-manager                  0                   f593f7267aeda       kube-controller-manager-addons-191972
	c76b948fbd083       6bab7719df100       8 minutes ago       Running             kube-apiserver                           0                   a7eb33c199dbc       kube-apiserver-addons-191972
	0539bdd901d4a       9aa1fad941575       8 minutes ago       Running             kube-scheduler                           0                   3aba8d618e3fa       kube-scheduler-addons-191972
	92c65a04535dd       2e96e5913fc06       8 minutes ago       Running             etcd                                     0                   84fc0865b25fe       etcd-addons-191972
	
	
	==> containerd <==
	Sep 16 10:27:51 addons-191972 containerd[858]: time="2024-09-16T10:27:51.527869707Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d37e5e006a1b9fd8200f3ae3dbeabf8b8bc403e894738fe714d548dbb16d939\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 16 10:27:51 addons-191972 containerd[858]: time="2024-09-16T10:27:51.527959384Z" level=info msg="RemovePodSandbox \"8d37e5e006a1b9fd8200f3ae3dbeabf8b8bc403e894738fe714d548dbb16d939\" returns successfully"
	Sep 16 10:30:22 addons-191972 containerd[858]: time="2024-09-16T10:30:22.472786510Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	Sep 16 10:30:22 addons-191972 containerd[858]: time="2024-09-16T10:30:22.782235554Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:30:22 addons-191972 containerd[858]: time="2024-09-16T10:30:22.783046431Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 16 10:30:22 addons-191972 containerd[858]: time="2024-09-16T10:30:22.787250224Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:195d612ae7722fdfec0d582d74fde7db062c1655b60737ceedb14cd627d0d601\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"76810004\" in 314.394692ms"
	Sep 16 10:30:22 addons-191972 containerd[858]: time="2024-09-16T10:30:22.787449262Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:195d612ae7722fdfec0d582d74fde7db062c1655b60737ceedb14cd627d0d601\""
	Sep 16 10:30:22 addons-191972 containerd[858]: time="2024-09-16T10:30:22.789725094Z" level=info msg="CreateContainer within sandbox \"d4feba9de8c251ddcedb6bf5e748a13f7a0bf0cb99f6be81820752487f60aa7e\" for container &ContainerMetadata{Name:gadget,Attempt:6,}"
	Sep 16 10:30:22 addons-191972 containerd[858]: time="2024-09-16T10:30:22.800134969Z" level=info msg="CreateContainer within sandbox \"d4feba9de8c251ddcedb6bf5e748a13f7a0bf0cb99f6be81820752487f60aa7e\" for &ContainerMetadata{Name:gadget,Attempt:6,} returns container id \"85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09\""
	Sep 16 10:30:22 addons-191972 containerd[858]: time="2024-09-16T10:30:22.800754490Z" level=info msg="StartContainer for \"85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09\""
	Sep 16 10:30:22 addons-191972 containerd[858]: time="2024-09-16T10:30:22.843283404Z" level=info msg="StartContainer for \"85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09\" returns successfully"
	Sep 16 10:30:24 addons-191972 containerd[858]: time="2024-09-16T10:30:24.176994977Z" level=error msg="ExecSync for \"85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09\" failed" error="failed to exec in container: failed to start exec \"27882f61b24b53a443b6e46ee04153bb3515bea43d925a7012150636c8ba9b92\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:30:24 addons-191972 containerd[858]: time="2024-09-16T10:30:24.185493904Z" level=error msg="ExecSync for \"85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09\" failed" error="failed to exec in container: failed to start exec \"26ed0c03bd9a0ff0ee995c79b2ba79abf38609ad8135188439c4af4d0a6a93e0\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:30:24 addons-191972 containerd[858]: time="2024-09-16T10:30:24.194276478Z" level=error msg="ExecSync for \"85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09\" failed" error="failed to exec in container: failed to start exec \"938a8fd9e03bc05aa8f1e8372356551ee6b960cf3a438422b232df76685a688e\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 16 10:30:24 addons-191972 containerd[858]: time="2024-09-16T10:30:24.409449949Z" level=info msg="shim disconnected" id=85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09 namespace=k8s.io
	Sep 16 10:30:24 addons-191972 containerd[858]: time="2024-09-16T10:30:24.409512235Z" level=warning msg="cleaning up after shim disconnected" id=85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09 namespace=k8s.io
	Sep 16 10:30:24 addons-191972 containerd[858]: time="2024-09-16T10:30:24.409526255Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:30:24 addons-191972 containerd[858]: time="2024-09-16T10:30:24.444045761Z" level=error msg="ExecSync for \"85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Sep 16 10:30:24 addons-191972 containerd[858]: time="2024-09-16T10:30:24.444057440Z" level=error msg="ExecSync for \"85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Sep 16 10:30:24 addons-191972 containerd[858]: time="2024-09-16T10:30:24.444558731Z" level=error msg="ExecSync for \"85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Sep 16 10:30:24 addons-191972 containerd[858]: time="2024-09-16T10:30:24.444570246Z" level=error msg="ExecSync for \"85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Sep 16 10:30:24 addons-191972 containerd[858]: time="2024-09-16T10:30:24.444959201Z" level=error msg="ExecSync for \"85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Sep 16 10:30:24 addons-191972 containerd[858]: time="2024-09-16T10:30:24.444970542Z" level=error msg="ExecSync for \"85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Sep 16 10:30:24 addons-191972 containerd[858]: time="2024-09-16T10:30:24.930369287Z" level=info msg="RemoveContainer for \"5fa0d996921d71662e0953097db38f30fe57ee50d895b0e21e192a63cb74b9c9\""
	Sep 16 10:30:24 addons-191972 containerd[858]: time="2024-09-16T10:30:24.936616496Z" level=info msg="RemoveContainer for \"5fa0d996921d71662e0953097db38f30fe57ee50d895b0e21e192a63cb74b9c9\" returns successfully"
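The gadget container (attempt 6) exits almost immediately after StartContainer, so the subsequent ExecSync probes race against an already-stopped container; that matches its Exited state and restart count in the container-status table above. One way to pull its logs for the root cause, assuming crictl is available inside the minikube node:

	minikube ssh
	sudo crictl ps -a | grep gadget
	sudo crictl logs 85bcbbfdfc074   # container id from the table above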
	
	
	==> coredns [3c2ba113f3a928b6de94c4ca0bf607534ff798f3d85ffd2a7685ed6dacc00744] <==
	[INFO] 10.244.0.3:34722 - 16813 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126799s
	[INFO] 10.244.0.3:47807 - 19593 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078163s
	[INFO] 10.244.0.3:47807 - 48005 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012131s
	[INFO] 10.244.0.3:52137 - 389 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004304691s
	[INFO] 10.244.0.3:52137 - 40577 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004777432s
	[INFO] 10.244.0.3:37044 - 23366 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003875752s
	[INFO] 10.244.0.3:37044 - 14153 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004520489s
	[INFO] 10.244.0.3:37775 - 29429 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003806717s
	[INFO] 10.244.0.3:37775 - 41674 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003872738s
	[INFO] 10.244.0.3:58704 - 7476 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090446s
	[INFO] 10.244.0.3:58704 - 1849 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000134094s
	[INFO] 10.244.0.25:38825 - 37363 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216144s
	[INFO] 10.244.0.25:38931 - 39307 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000245831s
	[INFO] 10.244.0.25:50024 - 16483 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164924s
	[INFO] 10.244.0.25:42236 - 32299 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000196632s
	[INFO] 10.244.0.25:49331 - 38072 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114124s
	[INFO] 10.244.0.25:36861 - 61813 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164666s
	[INFO] 10.244.0.25:33081 - 5019 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.00927584s
	[INFO] 10.244.0.25:32825 - 10257 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.009718235s
	[INFO] 10.244.0.25:50215 - 44243 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007980557s
	[INFO] 10.244.0.25:46089 - 36172 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008374403s
	[INFO] 10.244.0.25:60708 - 60516 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00523636s
	[INFO] 10.244.0.25:53932 - 3930 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005436837s
	[INFO] 10.244.0.25:33968 - 30856 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002295196s
	[INFO] 10.244.0.25:51453 - 49493 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002387298s
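The NXDOMAIN bursts here are ordinary resolver behavior, not failures: with Kubernetes' default ndots:5, a name like storage.googleapis.com is tried against every entry in the pod's search path (the cluster domains plus the GCE host domains visible in the query suffixes above) before the bare name finally resolves NOERROR. A pod resolv.conf consistent with these queries would look roughly like the following; the nameserver IP and ndots value are assumptions based on kubeadm defaults:

	search gcp-auth.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	nameserver 10.96.0.10
	options ndots:5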
	
	
	==> describe nodes <==
	Name:               addons-191972
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-191972
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-191972
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-191972
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-191972"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-191972
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:32:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:27:56 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:27:56 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:27:56 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:27:56 +0000   Mon, 16 Sep 2024 10:23:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-191972
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 0263fbb37d3545b09ff38a7b68907e4c
	  System UUID:                45c87f39-d597-4b0c-a097-439ebdb945ff
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-8tnxp     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  gadget                      gadget-rwwbs                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  gcp-auth                    gcp-auth-89d5ffd79-6r2td                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-lpb7q    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         8m25s
	  kube-system                 coredns-7c65d6cfc9-9rccl                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m33s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 csi-hostpathplugin-qdnbn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 etcd-addons-191972                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m39s
	  kube-system                 kindnet-rxp8k                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m33s
	  kube-system                 kube-apiserver-addons-191972                250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 kube-controller-manager-addons-191972       200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-proxy-fnr7f                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m33s
	  kube-system                 kube-scheduler-addons-191972                100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 metrics-server-84c5f94fbc-s7654             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         8m27s
	  kube-system                 nvidia-device-plugin-daemonset-vpb85        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 registry-66c9cd494c-vsbgv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 registry-proxy-6vsnj                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 snapshot-controller-56fcc65765-4g9w6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 snapshot-controller-56fcc65765-htkmc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 tiller-deploy-b48cc5f79-ddkxz               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  local-path-storage          local-path-provisioner-86d989889c-w6mf9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  volcano-system              volcano-admission-77d7d48b68-rcfsk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  volcano-system              volcano-controllers-56675bb4d5-hdpdb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	  volcano-system              volcano-scheduler-576bc46687-jtz7f          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m23s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-gsg67              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     8m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 8m29s  kube-proxy       
	  Normal   Starting                 8m38s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m38s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  8m38s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m38s  kubelet          Node addons-191972 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m38s  kubelet          Node addons-191972 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m38s  kubelet          Node addons-191972 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m34s  node-controller  Node addons-191972 event: Registered Node addons-191972 in Controller
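This node view can be regenerated against the live cluster, assuming kubectl is pointed at the addons-191972 context (which the run above configures by default):

	kubectl describe node addons-191972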
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [92c65a04535ddef6879f2eb4260843c6961d1fb2395f595b3a5665263c562002] <==
	{"level":"info","ts":"2024-09-16T10:23:47.260476Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:47.261160Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:47.261447Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:47.262322Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:23:47.262576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:24:15.873285Z","caller":"traceutil/trace.go:171","msg":"trace[187537689] transaction","detail":"{read_only:false; response_revision:986; number_of_response:1; }","duration":"119.841789ms","start":"2024-09-16T10:24:15.753419Z","end":"2024-09-16T10:24:15.873261Z","steps":["trace[187537689] 'process raft request'  (duration: 119.705144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:16.060589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.178284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:24:16.060680Z","caller":"traceutil/trace.go:171","msg":"trace[2127996318] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:986; }","duration":"125.313412ms","start":"2024-09-16T10:24:15.935346Z","end":"2024-09-16T10:24:16.060659Z","steps":["trace[2127996318] 'range keys from in-memory index tree'  (duration: 125.097316ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:07.796336Z","caller":"traceutil/trace.go:171","msg":"trace[28147226] transaction","detail":"{read_only:false; response_revision:1242; number_of_response:1; }","duration":"128.826483ms","start":"2024-09-16T10:25:07.667485Z","end":"2024-09-16T10:25:07.796311Z","steps":["trace[28147226] 'process raft request'  (duration: 41.106171ms)","trace[28147226] 'compare'  (duration: 87.53434ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.424319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.488522ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031931970271159 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" mod_revision:812 > success:<request_put:<key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" value_size:4029 >> failure:<request_range:<key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T10:25:21.424401Z","caller":"traceutil/trace.go:171","msg":"trace[1168470588] linearizableReadLoop","detail":"{readStateIndex:1335; appliedIndex:1334; }","duration":"177.395065ms","start":"2024-09-16T10:25:21.246995Z","end":"2024-09-16T10:25:21.424390Z","steps":["trace[1168470588] 'read index received'  (duration: 48.427907ms)","trace[1168470588] 'applied index is now lower than readState.Index'  (duration: 128.965162ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.424444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.446761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.424466Z","caller":"traceutil/trace.go:171","msg":"trace[1171179904] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1302; }","duration":"177.469291ms","start":"2024-09-16T10:25:21.246991Z","end":"2024-09-16T10:25:21.424460Z","steps":["trace[1171179904] 'agreement among raft nodes before linearized reading'  (duration: 177.429463ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:21.424486Z","caller":"traceutil/trace.go:171","msg":"trace[1930200040] transaction","detail":"{read_only:false; response_revision:1302; number_of_response:1; }","duration":"247.357795ms","start":"2024-09-16T10:25:21.177107Z","end":"2024-09-16T10:25:21.424464Z","steps":["trace[1930200040] 'process raft request'  (duration: 118.297085ms)","trace[1930200040] 'compare'  (duration: 128.26971ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.652910Z","caller":"traceutil/trace.go:171","msg":"trace[1856019889] linearizableReadLoop","detail":"{readStateIndex:1338; appliedIndex:1335; }","duration":"218.326846ms","start":"2024-09-16T10:25:21.434567Z","end":"2024-09-16T10:25:21.652894Z","steps":["trace[1856019889] 'read index received'  (duration: 55.93458ms)","trace[1856019889] 'applied index is now lower than readState.Index'  (duration: 162.391571ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.652969Z","caller":"traceutil/trace.go:171","msg":"trace[1279722024] transaction","detail":"{read_only:false; response_revision:1305; number_of_response:1; }","duration":"224.683287ms","start":"2024-09-16T10:25:21.428268Z","end":"2024-09-16T10:25:21.652951Z","steps":["trace[1279722024] 'process raft request'  (duration: 224.540452ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:21.653003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.415614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.653027Z","caller":"traceutil/trace.go:171","msg":"trace[1008371896] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1305; }","duration":"218.457307ms","start":"2024-09-16T10:25:21.434563Z","end":"2024-09-16T10:25:21.653020Z","steps":["trace[1008371896] 'agreement among raft nodes before linearized reading'  (duration: 218.392253ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:21.652921Z","caller":"traceutil/trace.go:171","msg":"trace[1132385399] transaction","detail":"{read_only:false; response_revision:1304; number_of_response:1; }","duration":"225.049342ms","start":"2024-09-16T10:25:21.427850Z","end":"2024-09-16T10:25:21.652899Z","steps":["trace[1132385399] 'process raft request'  (duration: 131.625555ms)","trace[1132385399] 'compare'  (duration: 93.227933ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.868227Z","caller":"traceutil/trace.go:171","msg":"trace[1246984751] linearizableReadLoop","detail":"{readStateIndex:1339; appliedIndex:1338; }","duration":"139.924393ms","start":"2024-09-16T10:25:21.728284Z","end":"2024-09-16T10:25:21.868208Z","steps":["trace[1246984751] 'read index received'  (duration: 63.202511ms)","trace[1246984751] 'applied index is now lower than readState.Index'  (duration: 76.72121ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.868259Z","caller":"traceutil/trace.go:171","msg":"trace[501466804] transaction","detail":"{read_only:false; response_revision:1306; number_of_response:1; }","duration":"210.400699ms","start":"2024-09-16T10:25:21.657832Z","end":"2024-09-16T10:25:21.868233Z","steps":["trace[501466804] 'process raft request'  (duration: 133.673421ms)","trace[501466804] 'compare'  (duration: 76.618072ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.868373Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.878283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.868410Z","caller":"traceutil/trace.go:171","msg":"trace[1169815467] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1306; }","duration":"121.931335ms","start":"2024-09-16T10:25:21.746471Z","end":"2024-09-16T10:25:21.868402Z","steps":["trace[1169815467] 'agreement among raft nodes before linearized reading'  (duration: 121.861476ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:21.868538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.236255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-16T10:25:21.868579Z","caller":"traceutil/trace.go:171","msg":"trace[344111638] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1306; }","duration":"140.292497ms","start":"2024-09-16T10:25:21.728276Z","end":"2024-09-16T10:25:21.868569Z","steps":["trace[344111638] 'agreement among raft nodes before linearized reading'  (duration: 140.016451ms)"],"step_count":1}
	
	
	==> gcp-auth [cfade64badb92dacf9d0c56d24c0fb7e95088f5abf7a814ef4801971e4b26216] <==
	2024/09/16 10:27:29 GCP Auth Webhook started!
	
	
	==> kernel <==
	 10:32:30 up 14 min,  0 users,  load average: 0.16, 0.38, 0.33
	Linux addons-191972 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [4c4482bfa98cf1024c4b123130c5a320a891204919b9a1459b6f3269e1e7d29d] <==
	I0916 10:30:29.443919       1 main.go:299] handling current node
	I0916 10:30:39.447829       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:30:39.447865       1 main.go:299] handling current node
	I0916 10:30:49.449838       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:30:49.449872       1 main.go:299] handling current node
	I0916 10:30:59.441603       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:30:59.441634       1 main.go:299] handling current node
	I0916 10:31:09.447812       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:09.447844       1 main.go:299] handling current node
	I0916 10:31:19.450907       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:19.450940       1 main.go:299] handling current node
	I0916 10:31:29.448450       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:29.448483       1 main.go:299] handling current node
	I0916 10:31:39.447884       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:39.447915       1 main.go:299] handling current node
	I0916 10:31:49.443809       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:49.443842       1 main.go:299] handling current node
	I0916 10:31:59.441426       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:59.441461       1 main.go:299] handling current node
	I0916 10:32:09.447827       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:09.447865       1 main.go:299] handling current node
	I0916 10:32:19.448134       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:19.448165       1 main.go:299] handling current node
	I0916 10:32:29.443818       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:29.443852       1 main.go:299] handling current node
	
	
	==> kube-apiserver [c76b948fbd083e0e5229c3ac96548e67224afd5a037343a2b118da9b9ae5ad3a] <==
	W0916 10:26:12.255412       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:13.326100       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:14.378343       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:15.413935       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:16.459096       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:17.509475       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:18.532761       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:19.545400       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:20.553347       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:21.640741       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:22.735942       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:24.007851       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:25.084707       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:26.137166       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:27.215912       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:28.269709       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:29.285978       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:30.385745       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:31.389520       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:53.671732       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:26:53.671804       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	W0916 10:27:11.712823       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:27:11.712858       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	W0916 10:27:11.785537       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:27:11.785576       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
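	# The volcano webhook is registered fail-closed, so every queue mutation is rejected
	# while volcano-admission-service has no reachable backend; this is consistent with
	# the TestAddons/serial/Volcano timeout. A hedged check (the app= label is an
	# assumption based on common Volcano manifests, not confirmed by this report):
	#   kubectl -n volcano-system get endpoints volcano-admission-service
	#   kubectl -n volcano-system get pods -l app=volcano-admission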
	
	
	==> kube-controller-manager [6e4dbd39a8ef56c5a753071ab0489111fcbcaac9f7cbe3b4fdf88030aa41c77b] <==
	I0916 10:27:11.721447       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:27:11.726999       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:27:11.727309       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:27:11.736401       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:27:11.792850       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:27:11.798111       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:27:11.798661       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:27:11.810216       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:27:12.442562       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:27:12.450572       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:27:13.528146       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:27:13.560100       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:27:14.534075       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:27:14.540099       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:27:14.543857       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:27:14.564649       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:27:14.570957       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:27:14.576033       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:27:29.502878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="6.820035ms"
	I0916 10:27:29.502976       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="56.15µs"
	I0916 10:27:44.013104       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 10:27:44.016022       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 10:27:44.039693       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 10:27:44.041144       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 10:27:56.735238       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-191972"
	
	
	==> kube-proxy [d9d335328779062c055353442bb9ca0c1e2fef63bc1c598650e6ea25604013a5] <==
	I0916 10:23:59.129562       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:59.824945       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:23:59.825067       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:24:00.037013       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:24:00.040602       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:24:00.135054       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:24:00.135450       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:24:00.135471       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:24:00.237323       1 config.go:199] "Starting service config controller"
	I0916 10:24:00.237372       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:24:00.237410       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:24:00.237416       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:24:00.237471       1 config.go:328] "Starting node config controller"
	I0916 10:24:00.237491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:24:00.337642       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:24:00.337724       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:24:00.337829       1 shared_informer.go:320] Caches are synced for node config
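	# The "configuration may be incomplete" warning above is kube-proxy itself suggesting
	# that nodePortAddresses be set. Under kubeadm, which minikube uses here, that field
	# lives in the kube-proxy ConfigMap and can be inspected with:
	#   kubectl -n kube-system get configmap kube-proxy -o yaml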
	
	
	==> kube-scheduler [0539bdd901d4af068b2160b27df45018e72113a7a75c6a082ae7e2f64f3f908b] <==
	W0916 10:23:49.138663       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0916 10:23:49.138662       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:49.138689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:49.138696       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0916 10:23:49.138760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0916 10:23:49.138769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:23:49.138774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:49.139877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:49.139916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.064082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:50.064133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.118512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:50.118558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.132045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:50.132096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.175403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:50.175438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.199805       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:50.199848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.241540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:50.241599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:23:50.633994       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
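	# The list/watch "forbidden" errors above are a startup race: the scheduler came up
	# before its RBAC bindings propagated, and the final "Caches are synced" line shows
	# it recovered. The permissions can be verified after startup by impersonating the
	# scheduler (requires impersonation rights on the client):
	#   kubectl auth can-i list pods --all-namespaces --as=system:kube-scheduler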
	
	
	==> kubelet <==
	Sep 16 10:30:24 addons-191972 kubelet[1565]: I0916 10:30:24.929466    1565 scope.go:117] "RemoveContainer" containerID="85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09"
	Sep 16 10:30:24 addons-191972 kubelet[1565]: E0916 10:30:24.929683    1565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rwwbs_gadget(62b2176c-9dcb-4741-bd18-81ab2a2303f2)\"" pod="gadget/gadget-rwwbs" podUID="62b2176c-9dcb-4741-bd18-81ab2a2303f2"
	Sep 16 10:30:29 addons-191972 kubelet[1565]: I0916 10:30:29.444582    1565 scope.go:117] "RemoveContainer" containerID="85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09"
	Sep 16 10:30:29 addons-191972 kubelet[1565]: E0916 10:30:29.444787    1565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rwwbs_gadget(62b2176c-9dcb-4741-bd18-81ab2a2303f2)\"" pod="gadget/gadget-rwwbs" podUID="62b2176c-9dcb-4741-bd18-81ab2a2303f2"
	Sep 16 10:30:35 addons-191972 kubelet[1565]: I0916 10:30:35.472080    1565 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-vpb85" secret="" err="secret \"gcp-auth\" not found"
	Sep 16 10:30:43 addons-191972 kubelet[1565]: I0916 10:30:43.472156    1565 scope.go:117] "RemoveContainer" containerID="85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09"
	Sep 16 10:30:43 addons-191972 kubelet[1565]: E0916 10:30:43.472346    1565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rwwbs_gadget(62b2176c-9dcb-4741-bd18-81ab2a2303f2)\"" pod="gadget/gadget-rwwbs" podUID="62b2176c-9dcb-4741-bd18-81ab2a2303f2"
	Sep 16 10:30:56 addons-191972 kubelet[1565]: I0916 10:30:56.471258    1565 scope.go:117] "RemoveContainer" containerID="85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09"
	Sep 16 10:30:56 addons-191972 kubelet[1565]: E0916 10:30:56.471496    1565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rwwbs_gadget(62b2176c-9dcb-4741-bd18-81ab2a2303f2)\"" pod="gadget/gadget-rwwbs" podUID="62b2176c-9dcb-4741-bd18-81ab2a2303f2"
	Sep 16 10:31:01 addons-191972 kubelet[1565]: I0916 10:31:01.472634    1565 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-vsbgv" secret="" err="secret \"gcp-auth\" not found"
	Sep 16 10:31:07 addons-191972 kubelet[1565]: I0916 10:31:07.471765    1565 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-6vsnj" secret="" err="secret \"gcp-auth\" not found"
	Sep 16 10:31:11 addons-191972 kubelet[1565]: I0916 10:31:11.472587    1565 scope.go:117] "RemoveContainer" containerID="85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09"
	Sep 16 10:31:11 addons-191972 kubelet[1565]: E0916 10:31:11.472747    1565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rwwbs_gadget(62b2176c-9dcb-4741-bd18-81ab2a2303f2)\"" pod="gadget/gadget-rwwbs" podUID="62b2176c-9dcb-4741-bd18-81ab2a2303f2"
	Sep 16 10:31:26 addons-191972 kubelet[1565]: I0916 10:31:26.471457    1565 scope.go:117] "RemoveContainer" containerID="85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09"
	Sep 16 10:31:26 addons-191972 kubelet[1565]: E0916 10:31:26.471626    1565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rwwbs_gadget(62b2176c-9dcb-4741-bd18-81ab2a2303f2)\"" pod="gadget/gadget-rwwbs" podUID="62b2176c-9dcb-4741-bd18-81ab2a2303f2"
	Sep 16 10:31:39 addons-191972 kubelet[1565]: I0916 10:31:39.472092    1565 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-vpb85" secret="" err="secret \"gcp-auth\" not found"
	Sep 16 10:31:39 addons-191972 kubelet[1565]: I0916 10:31:39.472206    1565 scope.go:117] "RemoveContainer" containerID="85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09"
	Sep 16 10:31:39 addons-191972 kubelet[1565]: E0916 10:31:39.472411    1565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rwwbs_gadget(62b2176c-9dcb-4741-bd18-81ab2a2303f2)\"" pod="gadget/gadget-rwwbs" podUID="62b2176c-9dcb-4741-bd18-81ab2a2303f2"
	Sep 16 10:31:53 addons-191972 kubelet[1565]: I0916 10:31:53.471903    1565 scope.go:117] "RemoveContainer" containerID="85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09"
	Sep 16 10:31:53 addons-191972 kubelet[1565]: E0916 10:31:53.472106    1565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rwwbs_gadget(62b2176c-9dcb-4741-bd18-81ab2a2303f2)\"" pod="gadget/gadget-rwwbs" podUID="62b2176c-9dcb-4741-bd18-81ab2a2303f2"
	Sep 16 10:32:05 addons-191972 kubelet[1565]: I0916 10:32:05.471804    1565 scope.go:117] "RemoveContainer" containerID="85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09"
	Sep 16 10:32:05 addons-191972 kubelet[1565]: E0916 10:32:05.472061    1565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rwwbs_gadget(62b2176c-9dcb-4741-bd18-81ab2a2303f2)\"" pod="gadget/gadget-rwwbs" podUID="62b2176c-9dcb-4741-bd18-81ab2a2303f2"
	Sep 16 10:32:19 addons-191972 kubelet[1565]: I0916 10:32:19.472053    1565 scope.go:117] "RemoveContainer" containerID="85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09"
	Sep 16 10:32:19 addons-191972 kubelet[1565]: E0916 10:32:19.472223    1565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rwwbs_gadget(62b2176c-9dcb-4741-bd18-81ab2a2303f2)\"" pod="gadget/gadget-rwwbs" podUID="62b2176c-9dcb-4741-bd18-81ab2a2303f2"
	Sep 16 10:32:25 addons-191972 kubelet[1565]: I0916 10:32:25.471405    1565 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-vsbgv" secret="" err="secret \"gcp-auth\" not found"
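	# The gadget container is stuck in CrashLoopBackOff at the maximum 5m back-off. A
	# minimal triage sketch, assuming the pod still exists, is to read its events and
	# the logs of the previous failed attempt:
	#   kubectl -n gadget describe pod gadget-rwwbs
	#   kubectl -n gadget logs gadget-rwwbs --previous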
	
	
	==> storage-provisioner [62a4b8c25074dcef9656a9b6e749de86b5f7c97f45a25cd328153d14be1d5a78] <==
	I0916 10:24:03.139108       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:24:03.230289       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:24:03.230361       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:24:03.238016       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:24:03.238457       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff346362-6d54-491c-b142-6d85e8abf2d5", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-191972_e8089787-9f1d-4116-8123-a579d9482714 became leader
	I0916 10:24:03.238505       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-191972_e8089787-9f1d-4116-8123-a579d9482714!
	I0916 10:24:03.339118       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-191972_e8089787-9f1d-4116-8123-a579d9482714!
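	# The provisioner uses the legacy Endpoints-based leader election, so the leader
	# identity recorded above is stored on the lock object and can be read back from
	# its annotations:
	#   kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o jsonpath='{.metadata.annotations}'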
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-191972 -n addons-191972
helpers_test.go:261: (dbg) Run:  kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (360.715µs)
helpers_test.go:263: kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/serial/Volcano (300.90s)
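
The "fork/exec /usr/local/bin/kubectl: exec format error" failure above recurs throughout this report and explains most of the kubectl-driven test failures: the kubectl binary on the runner cannot be executed on this host, which typically means it was built for a different architecture or the download was truncated. A minimal check on the runner (a sketch; only the path is taken from the report):

	file /usr/local/bin/kubectl   # should report an ELF x86_64 executable for this host
	uname -m                      # host architecture to compare against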

TestAddons/serial/GCPAuth/Namespaces (0s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-191972 create ns new-namespace
addons_test.go:656: (dbg) Non-zero exit: kubectl --context addons-191972 create ns new-namespace: fork/exec /usr/local/bin/kubectl: exec format error (294.354µs)
addons_test.go:658: kubectl --context addons-191972 create ns new-namespace failed: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/serial/GCPAuth/Namespaces (0.00s)

TestAddons/parallel/Registry (14.3s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.855993ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-vsbgv" [bd70fcec-e032-4dbd-902c-a139ac179bbf] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00332213s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6vsnj" [05d6014b-9706-4d7a-a816-dbc7f557cd15] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003799204s
addons_test.go:342: (dbg) Run:  kubectl --context addons-191972 delete po -l run=registry-test --now
addons_test.go:342: (dbg) Non-zero exit: kubectl --context addons-191972 delete po -l run=registry-test --now: fork/exec /usr/local/bin/kubectl: exec format error (367.082µs)
addons_test.go:344: pre-cleanup kubectl --context addons-191972 delete po -l run=registry-test --now failed: fork/exec /usr/local/bin/kubectl: exec format error (not a problem)
addons_test.go:347: (dbg) Run:  kubectl --context addons-191972 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-191972 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": fork/exec /usr/local/bin/kubectl: exec format error (296.93µs)
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-191972 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got **
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-191972 ip
2024/09/16 10:32:43 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-191972 addons disable registry --alsologtostderr -v=1
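
The DEBUG GET line above shows the registry itself was still reachable through the container's published port; the failures before it come from the broken kubectl binary, not from the registry. A hedged manual check from the host, assuming the addon serves the standard Docker Registry v2 API at the logged address:

	curl -s http://192.168.49.2:5000/v2/_catalog   # lists repositories if the registry is healthy
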
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-191972
helpers_test.go:235: (dbg) docker inspect addons-191972:

-- stdout --
	[
	    {
	        "Id": "49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd",
	        "Created": "2024-09-16T10:23:37.048894749Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 13416,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:23:37.183215602Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/hosts",
	        "LogPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd-json.log",
	        "Name": "/addons-191972",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-191972:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-191972",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-191972",
	                "Source": "/var/lib/docker/volumes/addons-191972/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-191972",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-191972",
	                "name.minikube.sigs.k8s.io": "addons-191972",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b247e3d2e57f223fa64fb9fece255c3b6a0f61eb064ba71e6e8c51f7e6b8590a",
	            "SandboxKey": "/var/run/docker/netns/b247e3d2e57f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-191972": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "aac8db9a46c7b7c219b85113240d1d4a2ee20d1c156fb7315fdf6aa5e797f6a8",
	                    "EndpointID": "ab683490c93590fb0411cd607b8ad8f3100f7ae01f11dd3e855f6321d940faae",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-191972",
	                        "49285aed0ac6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
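
The NetworkSettings.Ports section above records how the kic container's service ports (22, 2376, 5000, 8443, 32443) were published to 127.0.0.1. The same mapping can be read directly, without parsing the full inspect output:

	docker port addons-191972                                          # all published port mappings
	docker inspect -f '{{json .NetworkSettings.Ports}}' addons-191972
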
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-191972 -n addons-191972
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-191972 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-191972 logs -n 25: (1.71169249s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-297488              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p download-only-297488              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-024449              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-024449              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-297488              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-024449              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | download-docker-065822 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | download-docker-065822               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-065822            | download-docker-065822 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | binary-mirror-727123   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | binary-mirror-727123                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34779               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-727123              | binary-mirror-727123   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | enable dashboard -p                  | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| start   | -p addons-191972 --wait=true         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:27 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | -p addons-191972                     |                        |         |         |                     |                     |
	| ip      | addons-191972 ip                     | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC |                     |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC |                     |
	|         | -p addons-191972                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
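For reference, the multi-row start entry for addons-191972 in the Audit table collapses to the following single invocation (reassembled from the table rows; the binary path comes from MINIKUBE_BIN=out/minikube-linux-amd64 in the environment dump below, and the flag order is as logged):

    out/minikube-linux-amd64 start -p addons-191972 --wait=true --memory=4000 --alsologtostderr \
      --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver \
      --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher \
      --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
      --driver=docker --container-runtime=containerd \
      --addons=ingress --addons=ingress-dns --addons=helm-tiller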
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:15.015457   12653 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:15.015610   12653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:15.015623   12653 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:15.015629   12653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:15.015835   12653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:23:15.016423   12653 out.go:352] Setting JSON to false
	I0916 10:23:15.017221   12653 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":339,"bootTime":1726481856,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:15.017316   12653 start.go:139] virtualization: kvm guest
	I0916 10:23:15.019468   12653 out.go:177] * [addons-191972] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:23:15.020856   12653 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:15.020860   12653 notify.go:220] Checking for updates...
	I0916 10:23:15.023158   12653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:15.024282   12653 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:23:15.025336   12653 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:23:15.026362   12653 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:15.027468   12653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:15.028714   12653 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:15.049632   12653 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:23:15.049710   12653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:15.095467   12653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:15.085826834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:15.095614   12653 docker.go:318] overlay module found
	I0916 10:23:15.097552   12653 out.go:177] * Using the docker driver based on user configuration
	I0916 10:23:15.098917   12653 start.go:297] selected driver: docker
	I0916 10:23:15.098932   12653 start.go:901] validating driver "docker" against <nil>
	I0916 10:23:15.098957   12653 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:15.099817   12653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:15.144749   12653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:15.136589077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:15.144922   12653 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:15.145171   12653 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:15.147081   12653 out.go:177] * Using Docker driver with root privileges
	I0916 10:23:15.148504   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:15.148563   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:15.148575   12653 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:15.148632   12653 start.go:340] cluster config:
	{Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:15.149981   12653 out.go:177] * Starting "addons-191972" primary control-plane node in "addons-191972" cluster
	I0916 10:23:15.151239   12653 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:23:15.152375   12653 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:23:15.153439   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:15.153479   12653 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:23:15.153492   12653 cache.go:56] Caching tarball of preloaded images
	I0916 10:23:15.153495   12653 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:23:15.153601   12653 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:23:15.153613   12653 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:23:15.153950   12653 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json ...
	I0916 10:23:15.153974   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json: {Name:mk77e04db13eac753d69895eba14a3f7223b28d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:15.169560   12653 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:23:15.169666   12653 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:23:15.169681   12653 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:23:15.169685   12653 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:23:15.169694   12653 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:23:15.169701   12653 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:23:27.861517   12653 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:23:27.861553   12653 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:23:27.861589   12653 start.go:360] acquireMachinesLock for addons-191972: {Name:mk1204ee6335c794af5ff39cd93a214e3c1d654b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:23:27.861691   12653 start.go:364] duration metric: took 80.959µs to acquireMachinesLock for "addons-191972"
	I0916 10:23:27.861720   12653 start.go:93] Provisioning new machine with config: &{Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:23:27.861797   12653 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:23:27.864363   12653 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0916 10:23:27.864609   12653 start.go:159] libmachine.API.Create for "addons-191972" (driver="docker")
	I0916 10:23:27.864644   12653 client.go:168] LocalClient.Create starting
	I0916 10:23:27.864787   12653 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:23:28.100386   12653 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:23:28.472961   12653 cli_runner.go:164] Run: docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:23:28.488573   12653 cli_runner.go:211] docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:23:28.488653   12653 network_create.go:284] running [docker network inspect addons-191972] to gather additional debugging logs...
	I0916 10:23:28.488675   12653 cli_runner.go:164] Run: docker network inspect addons-191972
	W0916 10:23:28.503724   12653 cli_runner.go:211] docker network inspect addons-191972 returned with exit code 1
	I0916 10:23:28.503773   12653 network_create.go:287] error running [docker network inspect addons-191972]: docker network inspect addons-191972: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-191972 not found
	I0916 10:23:28.503790   12653 network_create.go:289] output of [docker network inspect addons-191972]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-191972 not found
	
	** /stderr **
	I0916 10:23:28.503874   12653 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:28.520445   12653 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ac6790}
	I0916 10:23:28.520486   12653 network_create.go:124] attempt to create docker network addons-191972 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 10:23:28.520531   12653 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-191972 addons-191972
	I0916 10:23:28.578324   12653 network_create.go:108] docker network addons-191972 192.168.49.0/24 created
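The network_create step above is an ordinary docker network create with a subnet picked by minikube; the result can be cross-checked against the calculated values (a usage sketch, not part of the log):

    docker network inspect addons-191972 \
      --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
    # expected output: 192.168.49.0/24 192.168.49.1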
	I0916 10:23:28.578353   12653 kic.go:121] calculated static IP "192.168.49.2" for the "addons-191972" container
	I0916 10:23:28.578405   12653 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:23:28.593459   12653 cli_runner.go:164] Run: docker volume create addons-191972 --label name.minikube.sigs.k8s.io=addons-191972 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:23:28.611104   12653 oci.go:103] Successfully created a docker volume addons-191972
	I0916 10:23:28.611189   12653 cli_runner.go:164] Run: docker run --rm --name addons-191972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --entrypoint /usr/bin/test -v addons-191972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:23:32.566442   12653 cli_runner.go:217] Completed: docker run --rm --name addons-191972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --entrypoint /usr/bin/test -v addons-191972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (3.955205965s)
	I0916 10:23:32.566475   12653 oci.go:107] Successfully prepared a docker volume addons-191972
	I0916 10:23:32.566499   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:32.566524   12653 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:23:32.566588   12653 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-191972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:23:36.989473   12653 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-191972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.422844639s)
	I0916 10:23:36.989499   12653 kic.go:203] duration metric: took 4.422974303s to extract preloaded images to volume ...
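Stripped of the minikube-specific paths, the extraction step above is a generic "use an image as a throwaway tar runner" pattern (a sketch; the uppercase variables are placeholders, not names from the log):

    # unpack a .tar.lz4 preload into a named volume without needing lz4 on the host
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
      -v "$TARGET_VOLUME:/extractDir" \
      "$KICBASE_IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir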
	W0916 10:23:36.989616   12653 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:23:36.989704   12653 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:23:37.034645   12653 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-191972 --name addons-191972 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-191972 --network addons-191972 --ip 192.168.49.2 --volume addons-191972:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:23:37.351088   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Running}}
	I0916 10:23:37.369798   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.389505   12653 cli_runner.go:164] Run: docker exec addons-191972 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:23:37.432507   12653 oci.go:144] the created container "addons-191972" has a running status.
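The docker run above is dense; condensed to the flags that define the kic node container (trimmed from the logged command, with the image digest and the remaining port publishes elided):

    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
      --network addons-191972 --ip 192.168.49.2 --volume addons-191972:/var \
      --memory=4000mb --cpus=2 \
      --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
      "$KICBASE_IMAGE"   # placeholder for the gcr.io/k8s-minikube/kicbase-builds image above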
	I0916 10:23:37.432542   12653 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa...
	I0916 10:23:37.512853   12653 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:23:37.532177   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.549342   12653 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:23:37.549361   12653 kic_runner.go:114] Args: [docker exec --privileged addons-191972 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:23:37.594990   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.611429   12653 machine.go:93] provisionDockerMachine start ...
	I0916 10:23:37.611513   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:37.628951   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:37.629230   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:37.629249   12653 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:23:37.630101   12653 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54456->127.0.0.1:32768: read: connection reset by peer
	I0916 10:23:40.759062   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-191972
	
	I0916 10:23:40.759087   12653 ubuntu.go:169] provisioning hostname "addons-191972"
	I0916 10:23:40.759139   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:40.776123   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:40.776294   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:40.776306   12653 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-191972 && echo "addons-191972" | sudo tee /etc/hostname
	I0916 10:23:40.917999   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-191972
	
	I0916 10:23:40.918073   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:40.934369   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:40.934536   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:40.934552   12653 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-191972' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-191972/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-191972' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:23:41.063670   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:23:41.063696   12653 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:23:41.063755   12653 ubuntu.go:177] setting up certificates
	I0916 10:23:41.063769   12653 provision.go:84] configureAuth start
	I0916 10:23:41.063821   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.080185   12653 provision.go:143] copyHostCerts
	I0916 10:23:41.080289   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:23:41.080452   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:23:41.080539   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:23:41.080607   12653 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.addons-191972 san=[127.0.0.1 192.168.49.2 addons-191972 localhost minikube]
	I0916 10:23:41.189624   12653 provision.go:177] copyRemoteCerts
	I0916 10:23:41.189685   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:23:41.189718   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.206072   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.299940   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:23:41.321259   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:23:41.342100   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:23:41.362764   12653 provision.go:87] duration metric: took 298.977855ms to configureAuth
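Every remote step from here on rides the forwarded SSH port recorded in the sshutil lines; the equivalent manual session would be (the host port 32768 is assigned dynamically and differs per run):

    ssh -p 32768 \
      -i /home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa \
      docker@127.0.0.1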
	I0916 10:23:41.362793   12653 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:23:41.362955   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:41.362966   12653 machine.go:96] duration metric: took 3.751519266s to provisionDockerMachine
	I0916 10:23:41.362991   12653 client.go:171] duration metric: took 13.498318264s to LocalClient.Create
	I0916 10:23:41.363014   12653 start.go:167] duration metric: took 13.498406844s to libmachine.API.Create "addons-191972"
	I0916 10:23:41.363024   12653 start.go:293] postStartSetup for "addons-191972" (driver="docker")
	I0916 10:23:41.363035   12653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:41.363112   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:41.363159   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.379631   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.472315   12653 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:23:41.475416   12653 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:41.475455   12653 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:41.475469   12653 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:41.475477   12653 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:23:41.475490   12653 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:23:41.475562   12653 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:23:41.475593   12653 start.go:296] duration metric: took 112.560003ms for postStartSetup
	I0916 10:23:41.475953   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.491831   12653 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json ...
	I0916 10:23:41.492098   12653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:23:41.492159   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.508709   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.604422   12653 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:23:41.608355   12653 start.go:128] duration metric: took 13.746544864s to createHost
	I0916 10:23:41.608378   12653 start.go:83] releasing machines lock for "addons-191972", held for 13.74667303s
	I0916 10:23:41.608449   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.624552   12653 ssh_runner.go:195] Run: cat /version.json
	I0916 10:23:41.624594   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.624666   12653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:23:41.624742   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.640830   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.641558   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.811513   12653 ssh_runner.go:195] Run: systemctl --version
	I0916 10:23:41.816090   12653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:41.820031   12653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:23:41.841966   12653 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:23:41.842040   12653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:41.867614   12653 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 10:23:41.867637   12653 start.go:495] detecting cgroup driver to use...
	I0916 10:23:41.867665   12653 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:41.867707   12653 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:23:41.878761   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:23:41.889209   12653 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:23:41.889272   12653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:23:41.901658   12653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:23:41.914376   12653 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:23:41.989625   12653 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:23:42.064036   12653 docker.go:233] disabling docker service ...
	I0916 10:23:42.064087   12653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:23:42.082378   12653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:23:42.092694   12653 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:23:42.163431   12653 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:23:42.235566   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:23:42.245920   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:42.260071   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:23:42.268844   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:23:42.277914   12653 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:23:42.277973   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:23:42.287090   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:42.295426   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:23:42.303716   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:42.312468   12653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:42.320449   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:23:42.328970   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:23:42.337386   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:23:42.345791   12653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:42.352855   12653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:23:42.359971   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:42.438798   12653 ssh_runner.go:195] Run: sudo systemctl restart containerd
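The run of one-liners above reduces to a short edit script over /etc/containerd/config.toml (condensed from the Run: lines, with the smaller edits omitted; order preserved):

    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml   # cgroupfs, matching the detected host driver
    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart containerd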
	I0916 10:23:42.548862   12653 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:23:42.548940   12653 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:23:42.552403   12653 start.go:563] Will wait 60s for crictl version
	I0916 10:23:42.552460   12653 ssh_runner.go:195] Run: which crictl
	I0916 10:23:42.555471   12653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:23:42.586679   12653 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:23:42.586752   12653 ssh_runner.go:195] Run: containerd --version
	I0916 10:23:42.608454   12653 ssh_runner.go:195] Run: containerd --version
	I0916 10:23:42.632432   12653 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:23:42.633762   12653 cli_runner.go:164] Run: docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:42.650400   12653 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:42.653892   12653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
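The hosts update uses a filter-then-append rewrite so repeated runs stay idempotent; annotated, the same command reads:

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry
      echo $'192.168.49.1\thost.minikube.internal'      # re-append the current gateway IP
    } > /tmp/h.$$                                       # build the new file under the shell PID
    sudo cp /tmp/h.$$ /etc/hosts                        # swap it in with a single copy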
	I0916 10:23:42.664053   12653 kubeadm.go:883] updating cluster {Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:42.664154   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:42.664195   12653 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:42.695688   12653 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:23:42.695710   12653 containerd.go:534] Images already preloaded, skipping extraction
	I0916 10:23:42.695778   12653 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:42.727148   12653 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:23:42.727166   12653 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:23:42.727174   12653 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0916 10:23:42.727255   12653 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-191972 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:23:42.727302   12653 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:23:42.757474   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:42.757493   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:42.757502   12653 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:42.757520   12653 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-191972 NodeName:addons-191972 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:42.757633   12653 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-191972"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
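The assembled manifest is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below; on kubeadm v1.26+ it could be sanity-checked offline with the built-in validator (an assumption about the available subcommand, not a step the log runs):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new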
	
	I0916 10:23:42.757684   12653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:42.765604   12653 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:23:42.765672   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:42.773363   12653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:23:42.789280   12653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:42.805100   12653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0916 10:23:42.820420   12653 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:23:42.823264   12653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
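The bash one-liner above is a guarded rewrite of /etc/hosts: strip any stale control-plane.minikube.internal line, append the current mapping, and copy the result back via a temp file. A sketch of the same idea in Go (assuming rename on the same filesystem; the logged command uses cp from /tmp instead):

```go
// Sketch of the /etc/hosts rewrite above: drop any stale line for the host,
// append the fresh mapping, and write through a temp file so the update is
// never applied half-way. Paths and names mirror the logged command.
package main

import (
	"fmt"
	"os"
	"strings"
)

func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Same filter as `grep -v $'\tcontrol-plane.minikube.internal$'`.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path) // atomic when tmp is on the same filesystem
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```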
	I0916 10:23:42.832700   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:42.907069   12653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:42.919246   12653 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972 for IP: 192.168.49.2
	I0916 10:23:42.919266   12653 certs.go:194] generating shared ca certs ...
	I0916 10:23:42.919279   12653 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:42.919399   12653 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:23:43.054784   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt ...
	I0916 10:23:43.054815   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt: {Name:mkf05eaa3032985e939bd1a93aa36a6d50242974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.055008   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key ...
	I0916 10:23:43.055031   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key: {Name:mk4cf19316dad04ab708c5c17e172ec92fc35230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.055134   12653 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:23:43.268289   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt ...
	I0916 10:23:43.268318   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt: {Name:mk68da284b9ad8d396a1f11e7cfb94cc6f208c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.268510   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key ...
	I0916 10:23:43.268532   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key: {Name:mkdf8c5da2a6d70c9ece2277843ebe69f9105c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.268626   12653 certs.go:256] generating profile certs ...
	I0916 10:23:43.268694   12653 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key
	I0916 10:23:43.268720   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt with IP's: []
	I0916 10:23:43.341520   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt ...
	I0916 10:23:43.341551   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: {Name:mke3c2895145f9c692cb1e6451d9766499ccc877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.341738   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key ...
	I0916 10:23:43.341755   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key: {Name:mkd6237ae8ebf429452ae0c60cea457b1f9cff1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.341855   12653 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369
	I0916 10:23:43.341882   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 10:23:43.403750   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 ...
	I0916 10:23:43.403775   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369: {Name:mk72db26b8519849abdf811ed93be5caeac2267d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.403951   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369 ...
	I0916 10:23:43.403973   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369: {Name:mk4b11dab0a085e395344dc35616a0c16f298191 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.404065   12653 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt
	I0916 10:23:43.404155   12653 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key
	I0916 10:23:43.404230   12653 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key
	I0916 10:23:43.404250   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt with IP's: []
	I0916 10:23:43.488130   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt ...
	I0916 10:23:43.488160   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt: {Name:mk11d8f9c437e5586897185f4551df7594041471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.488342   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key ...
	I0916 10:23:43.488360   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key: {Name:mk18734ee357c50ce0ff509ffb1c7e42743fa1b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.488577   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:23:43.488617   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:23:43.488652   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:43.488682   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:23:43.489279   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:43.511557   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:23:43.532934   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:43.553377   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:43.575078   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:23:43.595868   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:43.616905   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:43.637839   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:43.658915   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:43.680485   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:23:43.696295   12653 ssh_runner.go:195] Run: openssl version
	I0916 10:23:43.701282   12653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:43.709681   12653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.712715   12653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.712762   12653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.718832   12653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
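The two commands above implement the c_rehash convention: besides the friendly minikubeCA.pem symlink, the CA is linked as <subject-hash>.0 (b5213941.0 here) so OpenSSL-based clients can discover it in /etc/ssl/certs. A Go sketch that shells out to openssl for the hash, assuming the binary is on PATH:

```go
// Sketch of the c_rehash-style step above: ask openssl for the certificate's
// subject hash and symlink the PEM as <hash>.0 in /etc/ssl/certs so OpenSSL
// consumers can locate the CA by hash lookup.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func hashLink(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pem, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```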
	I0916 10:23:43.727190   12653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:43.730247   12653 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:23:43.730290   12653 kubeadm.go:392] StartCluster: {Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:43.730356   12653 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:23:43.730405   12653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:23:43.761830   12653 cri.go:89] found id: ""
	I0916 10:23:43.761893   12653 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:43.770086   12653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:43.778465   12653 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:23:43.778522   12653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:43.786355   12653 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:43.786373   12653 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:43.786419   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:43.794471   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:43.794519   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:43.802487   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:43.810401   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:43.810451   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:43.817541   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:43.824799   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:43.824842   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:43.832032   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:43.839239   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:43.839298   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
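The grep/rm pairs above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; on this first start none exist, so all four removals are no-ops. The same check-and-remove loop as a Go sketch:

```go
// Sketch of the stale-config sweep above: keep a kubeconfig only if it still
// points at the expected control-plane endpoint, otherwise remove it so
// `kubeadm init` regenerates it. File names match the logged commands.
package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := filepath.Join("/etc/kubernetes", f)
		data, err := os.ReadFile(path)
		if err == nil && bytes.Contains(data, endpoint) {
			continue // config is current, keep it
		}
		// Missing or pointing elsewhere: remove, ignoring not-exist errors.
		if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```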
	I0916 10:23:43.847649   12653 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
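The init launch itself is a bash -c wrapper that prepends the versioned binaries directory to PATH before invoking kubeadm. A Go sketch of that launch; the --ignore-preflight-errors list is abbreviated here (the full set is in the command above):

```go
// Sketch of the `kubeadm init` launch above: run the pinned kubeadm with the
// versioned binaries directory prepended to PATH, as the logged bash -c
// wrapper does. The preflight-error list is shortened for illustration.
package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "env",
		"PATH=/var/lib/minikube/binaries/v1.31.1:"+os.Getenv("PATH"),
		"kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification")
	cmd.Stdout = os.Stdout // stream kubeadm's phase output, as in the log
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```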
	I0916 10:23:43.880192   12653 kubeadm.go:310] W0916 10:23:43.879583    1109 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:43.880773   12653 kubeadm.go:310] W0916 10:23:43.880291    1109 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:43.896580   12653 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:23:43.944226   12653 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:23:52.227261   12653 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:52.227338   12653 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:52.227418   12653 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:23:52.227466   12653 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:23:52.227501   12653 kubeadm.go:310] OS: Linux
	I0916 10:23:52.227541   12653 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:23:52.227584   12653 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:23:52.227625   12653 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:23:52.227670   12653 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:23:52.227711   12653 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:23:52.227786   12653 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:23:52.227872   12653 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:23:52.227947   12653 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:23:52.227994   12653 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:23:52.228098   12653 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:52.228218   12653 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:52.228360   12653 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:23:52.228491   12653 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:52.230143   12653 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:52.230239   12653 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:52.230328   12653 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:52.230422   12653 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:52.230504   12653 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:52.230596   12653 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:52.230685   12653 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:52.230768   12653 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:52.230910   12653 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-191972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:52.230984   12653 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:52.231130   12653 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-191972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:52.231228   12653 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:52.231331   12653 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:52.231395   12653 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:52.231471   12653 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:52.231543   12653 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:52.231622   12653 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:52.231683   12653 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:52.231759   12653 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:52.231871   12653 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:52.231979   12653 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:52.232069   12653 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:52.233407   12653 out.go:235]   - Booting up control plane ...
	I0916 10:23:52.233500   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:52.233589   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:52.233654   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:52.233747   12653 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:52.233846   12653 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:52.233895   12653 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:52.234011   12653 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:52.234102   12653 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:52.234155   12653 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.63037ms
	I0916 10:23:52.234224   12653 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:52.234282   12653 kubeadm.go:310] [api-check] The API server is healthy after 4.501222011s
	I0916 10:23:52.234402   12653 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:52.234544   12653 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:52.234625   12653 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:52.234780   12653 kubeadm.go:310] [mark-control-plane] Marking the node addons-191972 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:52.234830   12653 kubeadm.go:310] [bootstrap-token] Using token: fe3fo6.40ynbll2pbwpp3it
	I0916 10:23:52.236918   12653 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:52.237043   12653 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:52.237118   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:52.237261   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:52.237418   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:52.237547   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:52.237659   12653 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:52.237791   12653 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:52.237856   12653 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:52.237898   12653 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:52.237904   12653 kubeadm.go:310] 
	I0916 10:23:52.237963   12653 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:52.237971   12653 kubeadm.go:310] 
	I0916 10:23:52.238040   12653 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:52.238046   12653 kubeadm.go:310] 
	I0916 10:23:52.238070   12653 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:52.238123   12653 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:52.238167   12653 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:52.238173   12653 kubeadm.go:310] 
	I0916 10:23:52.238218   12653 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:52.238223   12653 kubeadm.go:310] 
	I0916 10:23:52.238268   12653 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:52.238274   12653 kubeadm.go:310] 
	I0916 10:23:52.238329   12653 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:52.238418   12653 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:52.238507   12653 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:52.238515   12653 kubeadm.go:310] 
	I0916 10:23:52.238598   12653 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:52.238681   12653 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:52.238690   12653 kubeadm.go:310] 
	I0916 10:23:52.238801   12653 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fe3fo6.40ynbll2pbwpp3it \
	I0916 10:23:52.238908   12653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 10:23:52.238933   12653 kubeadm.go:310] 	--control-plane 
	I0916 10:23:52.238939   12653 kubeadm.go:310] 
	I0916 10:23:52.239012   12653 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:52.239020   12653 kubeadm.go:310] 
	I0916 10:23:52.239095   12653 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fe3fo6.40ynbll2pbwpp3it \
	I0916 10:23:52.239199   12653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 10:23:52.239210   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:52.239215   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:52.240733   12653 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:23:52.241980   12653 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:23:52.245609   12653 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:23:52.245625   12653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:23:52.261912   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:23:52.447057   12653 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:52.447144   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.447165   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-191972 minikube.k8s.io/updated_at=2024_09_16T10_23_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-191972 minikube.k8s.io/primary=true
	I0916 10:23:52.543497   12653 ops.go:34] apiserver oom_adj: -16
	I0916 10:23:52.543643   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:53.044491   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:53.543770   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:54.044061   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:54.544691   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:55.044249   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:55.543918   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:56.043679   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:56.543717   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:57.044619   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:57.107839   12653 kubeadm.go:1113] duration metric: took 4.660750668s to wait for elevateKubeSystemPrivileges
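The burst of `kubectl get sa default` calls above is elevateKubeSystemPrivileges polling on a ~500ms cadence until the default ServiceAccount exists (about 4.7s here), so that the minikube-rbac ClusterRoleBinding created earlier can take effect. A Go sketch of that wait loop, reusing the kubectl path and kubeconfig from the log:

```go
// Sketch of the wait above: poll `kubectl get sa default` on a fixed
// interval until the default ServiceAccount exists or a deadline passes.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			return nil // ServiceAccount exists
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```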
	I0916 10:23:57.107871   12653 kubeadm.go:394] duration metric: took 13.37758355s to StartCluster
	I0916 10:23:57.107890   12653 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:57.107998   12653 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:23:57.108383   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:57.108581   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:57.108610   12653 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:23:57.108666   12653 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:23:57.108789   12653 addons.go:69] Setting yakd=true in profile "addons-191972"
	I0916 10:23:57.108813   12653 addons.go:234] Setting addon yakd=true in "addons-191972"
	I0916 10:23:57.108830   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:57.108844   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.108885   12653 addons.go:69] Setting inspektor-gadget=true in profile "addons-191972"
	I0916 10:23:57.108900   12653 addons.go:234] Setting addon inspektor-gadget=true in "addons-191972"
	I0916 10:23:57.108928   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109000   12653 addons.go:69] Setting gcp-auth=true in profile "addons-191972"
	I0916 10:23:57.109025   12653 mustload.go:65] Loading cluster: addons-191972
	I0916 10:23:57.109143   12653 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-191972"
	I0916 10:23:57.109187   12653 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-191972"
	I0916 10:23:57.109185   12653 addons.go:69] Setting default-storageclass=true in profile "addons-191972"
	I0916 10:23:57.109211   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109225   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:57.109232   12653 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-191972"
	I0916 10:23:57.109216   12653 addons.go:69] Setting cloud-spanner=true in profile "addons-191972"
	I0916 10:23:57.109259   12653 addons.go:69] Setting storage-provisioner=true in profile "addons-191972"
	I0916 10:23:57.109265   12653 addons.go:234] Setting addon cloud-spanner=true in "addons-191972"
	I0916 10:23:57.109274   12653 addons.go:234] Setting addon storage-provisioner=true in "addons-191972"
	I0916 10:23:57.109308   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109323   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109407   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109485   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109507   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109547   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109684   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109757   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109825   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110167   12653 addons.go:69] Setting ingress-dns=true in profile "addons-191972"
	I0916 10:23:57.110372   12653 addons.go:234] Setting addon ingress-dns=true in "addons-191972"
	I0916 10:23:57.110546   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111202   12653 addons.go:69] Setting helm-tiller=true in profile "addons-191972"
	I0916 10:23:57.111255   12653 addons.go:234] Setting addon helm-tiller=true in "addons-191972"
	I0916 10:23:57.111282   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111445   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.111484   12653 addons.go:69] Setting ingress=true in profile "addons-191972"
	I0916 10:23:57.111498   12653 addons.go:234] Setting addon ingress=true in "addons-191972"
	I0916 10:23:57.111527   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111731   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110913   12653 addons.go:69] Setting metrics-server=true in profile "addons-191972"
	I0916 10:23:57.111983   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.111987   12653 addons.go:234] Setting addon metrics-server=true in "addons-191972"
	I0916 10:23:57.112171   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110926   12653 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-191972"
	I0916 10:23:57.113223   12653 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-191972"
	I0916 10:23:57.113258   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.113700   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.115817   12653 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:57.116675   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110938   12653 addons.go:69] Setting registry=true in profile "addons-191972"
	I0916 10:23:57.116963   12653 addons.go:234] Setting addon registry=true in "addons-191972"
	I0916 10:23:57.117093   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110938   12653 addons.go:69] Setting volcano=true in profile "addons-191972"
	I0916 10:23:57.117245   12653 addons.go:234] Setting addon volcano=true in "addons-191972"
	I0916 10:23:57.117313   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110949   12653 addons.go:69] Setting volumesnapshots=true in profile "addons-191972"
	I0916 10:23:57.117350   12653 addons.go:234] Setting addon volumesnapshots=true in "addons-191972"
	I0916 10:23:57.117397   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.117799   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.117919   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.118954   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:57.110924   12653 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-191972"
	I0916 10:23:57.120855   12653 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-191972"
	I0916 10:23:57.121186   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.148826   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.156121   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:57.158094   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:23:57.160078   12653 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:57.160230   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:57.163394   12653 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:57.163405   12653 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:57.163428   12653 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:57.163491   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.163933   12653 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:57.163952   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:23:57.163999   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.166339   12653 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:57.166352   12653 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:57.166505   12653 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:57.166525   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:57.166591   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176509   12653 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:57.176539   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:57.176597   12653 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:57.176613   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:57.176614   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176667   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176871   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.184510   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:57.184923   12653 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:23:57.187620   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:57.187908   12653 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:57.187925   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:23:57.188005   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.190192   12653 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:57.190888   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:57.191984   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:57.192004   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:57.192062   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.192462   12653 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-191972"
	I0916 10:23:57.192519   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.193186   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.195485   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:57.196395   12653 addons.go:234] Setting addon default-storageclass=true in "addons-191972"
	I0916 10:23:57.196441   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.197033   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.200024   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:57.200756   12653 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:57.202388   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:57.202409   12653 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:57.202572   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.204739   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:57.206967   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:57.217725   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:57.217900   12653 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:57.219581   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:57.219714   12653 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:57.219798   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.219620   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:57.220511   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:57.221727   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.235796   12653 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:57.237579   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:57.239326   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:57.239350   12653 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:57.239411   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.239514   12653 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:57.241480   12653 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:57.241502   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:57.241555   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.243883   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.255850   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.256610   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.261965   12653 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 10:23:57.263559   12653 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 10:23:57.265255   12653 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 10:23:57.266412   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.267838   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.268005   12653 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:57.268022   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 10:23:57.268074   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.269050   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.276483   12653 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:57.276507   12653 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:57.276573   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.283025   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.284257   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:23:57.288880   12653 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:57.290776   12653 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:57.292419   12653 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:57.292444   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:57.292510   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.295145   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.295780   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.297628   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.298120   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.300416   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.306147   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.311231   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.314549   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	W0916 10:23:57.324739   12653 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 10:23:57.324769   12653 retry.go:31] will retry after 374.435778ms: ssh: handshake failed: EOF
	W0916 10:23:57.325602   12653 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 10:23:57.325619   12653 retry.go:31] will retry after 150.651165ms: ssh: handshake failed: EOF
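The two handshake failures above are treated as transient: the dial is retried after a randomized delay rather than failing the whole addon setup. A Go sketch of retry-with-jitter in that spirit (the backoff shape is illustrative, not minikube's exact retry policy):

```go
// Sketch of the dial retry above: on a transient SSH handshake failure,
// retry the operation after a randomized delay, up to a fixed attempt count.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func withRetry(attempts int, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Jittered delay, similar in spirit to "will retry after 374.435778ms".
		d := time.Duration(100+rand.Intn(400)) * time.Millisecond
		fmt.Printf("retry %d after %v: %v\n", i+1, d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := withRetry(5, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF") // simulated transient failure
		}
		return nil
	})
	fmt.Println("result:", err)
}
```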
	I0916 10:23:57.330682   12653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:57.629690   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:57.729822   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:57.730227   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:57.742355   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:57.824974   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:57.842831   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:57.842917   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:57.843332   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:57.921972   12653 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:57.922058   12653 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:57.922011   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:57.922034   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:57.922195   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:57.929874   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:57.929901   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:57.941141   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:57.941166   12653 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
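
Each "scp memory --> /etc/kubernetes/addons/... (N bytes)" pair above streams an embedded addon asset to the node over one of the SSH sessions opened earlier; the subsequent kubectl apply then runs against the copied file. A rough equivalent of the copy step with golang.org/x/crypto/ssh, assuming `client` is an already-dialed *ssh.Client and the remote user can sudo (a sketch, not minikube's sshutil code):

    import (
    	"bytes"

    	"golang.org/x/crypto/ssh"
    )

    // Sketch: copy an in-memory manifest to the node over an existing
    // SSH client, roughly what the "scp memory --> path" lines do.
    func copyToNode(client *ssh.Client, data []byte, dst string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	// Write via sudo tee so the file lands in a root-owned directory.
    	return sess.Run("sudo tee " + dst + " >/dev/null")
    }
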
	I0916 10:23:58.138273   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:58.138369   12653 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:58.222261   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:58.222352   12653 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:58.229572   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:58.229660   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:58.232627   12653 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:58.232698   12653 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:58.322393   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:58.322420   12653 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:58.339998   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:58.435282   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:58.435313   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:58.435591   12653 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.15128486s)
	I0916 10:23:58.435618   12653 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
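
The pipeline that just completed fetches the coredns ConfigMap, uses sed to splice a hosts{} block in front of the forward directive (and a log directive before errors), and replaces the ConfigMap, so cluster pods can resolve host.minikube.internal to the host gateway at 192.168.49.1. The string surgery amounts to something like this sketch (pure string manipulation for illustration, not the actual sed invocation):

    import (
    	"fmt"
    	"strings"
    )

    // Sketch: insert a hosts{} block ahead of the Corefile's forward
    // directive so host.minikube.internal resolves to the host gateway.
    func injectHostRecord(corefile, hostIP string) string {
    	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
    	var out []string
    	for _, line := range strings.Split(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			out = append(out, block)
    		}
    		out = append(out, line)
    	}
    	return strings.Join(out, "\n")
    }
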
	I0916 10:23:58.436958   12653 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.1062474s)
	I0916 10:23:58.437947   12653 node_ready.go:35] waiting up to 6m0s for node "addons-191972" to be "Ready" ...
	I0916 10:23:58.441471   12653 node_ready.go:49] node "addons-191972" has status "Ready":"True"
	I0916 10:23:58.441502   12653 node_ready.go:38] duration metric: took 3.529013ms for node "addons-191972" to be "Ready" ...
	I0916 10:23:58.441514   12653 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
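
The "extra waiting" step polls each system-critical label selector until every matching pod reports the Ready condition, which is also the shape of the kapi.go waits that follow. A client-go fragment in that spirit, assuming a configured `cs kubernetes.Interface` and `ctx` from an existing kubeconfig setup (and a recent apimachinery for PollUntilContextTimeout); this is a sketch, not pod_ready.go itself:

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // Sketch: poll until every pod matching a label selector is Ready.
    func waitPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil || len(pods.Items) == 0 {
    				return false, nil // transient errors and empty lists: keep polling
    			}
    			for _, p := range pods.Items {
    				ready := false
    				for _, c := range p.Status.Conditions {
    					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    						ready = true
    					}
    				}
    				if !ready {
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    }
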
	I0916 10:23:58.442873   12653 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:58.442897   12653 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:58.534045   12653 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:58.540468   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:58.540496   12653 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:58.642810   12653 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:58.642885   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:58.728521   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:58.728554   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:58.840472   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:58.921026   12653 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:58.921059   12653 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:58.936525   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:58.936552   12653 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:58.939212   12653 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-191972" context rescaled to 1 replicas
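
This rescale is why the first coredns pod being watched disappears a moment later (the "pods ... not found" entries at 10:24:00 below) and the wait moves on to the surviving replica. Rescaling a deployment is typically done through the Scale subresource; a sketch with the same `cs`/`ctx` assumptions as above, not kapi.go's actual code:

    // Sketch: rescale the coredns deployment to 1 replica via the
    // Scale subresource, as kapi.go reports doing here.
    func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
    	sc, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	sc.Spec.Replicas = 1
    	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", sc, metav1.UpdateOptions{})
    	return err
    }
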
	I0916 10:23:59.131614   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:59.224079   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:59.224104   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:59.230203   12653 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:59.230238   12653 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:59.423686   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:59.430144   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:59.430176   12653 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:59.433784   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:59.433810   12653 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:59.542608   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:59.542635   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:59.630644   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:59.630734   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:59.840282   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:59.927613   12653 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:59.927705   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:24:00.030859   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:24:00.030936   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:24:00.034479   12653 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:24:00.034549   12653 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:24:00.038488   12653 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-2l862" not found
	I0916 10:24:00.038522   12653 pod_ready.go:82] duration metric: took 1.504385632s for pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace to be "Ready" ...
	E0916 10:24:00.038535   12653 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-2l862" not found
	I0916 10:24:00.038552   12653 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:00.333635   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:24:00.339910   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:24:00.339994   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:24:00.627234   12653 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:24:00.627262   12653 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:24:00.929780   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:24:00.929809   12653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:24:01.128973   12653 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:24:01.129062   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:24:01.334031   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:24:01.334116   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:24:01.525220   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:24:02.022039   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:24:02.022114   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:24:02.136463   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:02.532736   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:24:02.532829   12653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:24:02.738986   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:24:04.426813   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:24:04.426903   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:24:04.456284   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
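
The gcp-auth setup needs a fresh SSH hop, so the host port that Docker mapped to the container's port 22 is re-resolved; the inspect template in the log digs HostPort out of NetworkSettings.Ports. A one-off equivalent, assuming the docker CLI is on PATH:

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    // Sketch: recover the host port mapped to the container's sshd.
    out, err := exec.Command("docker", "container", "inspect", "-f",
    	`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    	"addons-191972").Output()
    if err != nil {
    	log.Fatal(err)
    }
    fmt.Println("ssh port:", strings.TrimSpace(string(out))) // "32768" in this run
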
	I0916 10:24:04.624938   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:04.638370   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.008571899s)
	I0916 10:24:04.638414   12653 addons.go:475] Verifying addon ingress=true in "addons-191972"
	I0916 10:24:04.638488   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.908226437s)
	I0916 10:24:04.638570   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.908717103s)
	I0916 10:24:04.638623   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.896188028s)
	I0916 10:24:04.638699   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.81369606s)
	I0916 10:24:04.638718   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.795359026s)
	I0916 10:24:04.638742   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.716592394s)
	I0916 10:24:04.641681   12653 out.go:177] * Verifying ingress addon...
	I0916 10:24:04.644857   12653 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0916 10:24:04.722084   12653 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
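
The default-storageclass failure above is the apiserver's optimistic-concurrency check: the StorageClass's resourceVersion changed between the read and the write. The usual client-go remedy is to re-read and retry the mutation under retry.RetryOnConflict; a sketch with the same `cs`/`ctx` assumptions as before (the log shows minikube simply surfacing the error rather than doing this here):

    import (
    	"k8s.io/client-go/util/retry"
    )

    // Sketch: clear the default-class annotation with conflict retries.
    err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
    	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "local-path", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
    	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
    	return err
    })
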
	I0916 10:24:04.723574   12653 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:24:04.723598   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.841083   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:24:04.932849   12653 addons.go:234] Setting addon gcp-auth=true in "addons-191972"
	I0916 10:24:04.932903   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:24:04.933372   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:24:04.957393   12653 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:24:04.957464   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:24:04.975728   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:24:05.150342   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.650366   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.149809   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.649391   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.834167   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.494119031s)
	I0916 10:24:06.834259   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.993750099s)
	I0916 10:24:06.834355   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.702687859s)
	I0916 10:24:06.834379   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.410662864s)
	I0916 10:24:06.834381   12653 addons.go:475] Verifying addon metrics-server=true in "addons-191972"
	I0916 10:24:06.834394   12653 addons.go:475] Verifying addon registry=true in "addons-191972"
	I0916 10:24:06.834447   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.994082306s)
	I0916 10:24:06.834595   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.500877662s)
	W0916 10:24:06.834635   12653 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:24:06.834660   12653 retry.go:31] will retry after 180.492463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
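
The failure above is an ordering problem: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, before the new API is discoverable, hence "ensure CRDs are installed first". The log shows the pragmatic fix (a delayed retry, re-run at 10:24:07 with --force). A more surgical alternative is to wait for the CRD's Established condition before applying custom resources; a sketch with the apiextensions client, reusing the wait helper conventions from earlier:

    import (
    	"context"
    	"time"

    	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // Sketch: block until a CRD reports Established; after that, CRs of
    // that kind can be applied without the mapping error seen above.
    func waitCRDEstablished(ctx context.Context, c apiextclient.Interface, name string) error {
    	return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil
    			}
    			for _, cond := range crd.Status.Conditions {
    				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
    					return true, nil
    				}
    			}
    			return false, nil
    		})
    }
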
	I0916 10:24:06.834694   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.309367322s)
	I0916 10:24:06.836029   12653 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-191972 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:24:06.836032   12653 out.go:177] * Verifying registry addon...
	I0916 10:24:06.838577   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:24:06.842659   12653 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:24:06.842681   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.016329   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:24:07.122253   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:07.229433   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.346049   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.428384   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.689342475s)
	I0916 10:24:07.428423   12653 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-191972"
	I0916 10:24:07.428557   12653 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.471115449s)
	I0916 10:24:07.430137   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:24:07.430140   12653 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:24:07.432142   12653 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:24:07.433350   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:24:07.433452   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:24:07.433472   12653 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:24:07.446890   12653 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:24:07.446929   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.523198   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:24:07.523247   12653 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:24:07.543809   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:07.543877   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:24:07.627288   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:07.649744   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.842799   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.943700   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.149515   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.343117   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.438263   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.651360   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.739263   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.722876496s)
	I0916 10:24:08.739377   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.111993041s)
	I0916 10:24:08.740565   12653 addons.go:475] Verifying addon gcp-auth=true in "addons-191972"
	I0916 10:24:08.742658   12653 out.go:177] * Verifying gcp-auth addon...
	I0916 10:24:08.744959   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:24:08.752275   12653 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:08.842486   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.937942   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.148485   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.342745   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.444884   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.544117   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:09.649057   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.850158   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.951607   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.149384   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.342403   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.437953   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.648926   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.842555   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.938628   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.149265   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.341824   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.438269   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.544664   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:11.649663   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.842706   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.938382   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.149747   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.341485   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.438115   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.649444   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.842417   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.937808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.149247   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.342184   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.443397   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.544742   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:13.649342   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.842433   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.938156   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.148884   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.342230   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.437378   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.648929   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.841404   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.938373   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.148947   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.342062   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:15.437442   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.544833   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:15.649729   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.875330   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.063181   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.148410   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.342704   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.437759   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.649599   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.842196   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.937322   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.148973   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.342240   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.438331   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.649287   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.842346   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.937786   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.044459   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:18.148462   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.342098   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.438245   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.650618   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.842115   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.937393   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.148210   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.342331   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.437753   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.649206   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.841659   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.937929   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.149095   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.341559   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.437389   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.543697   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:20.649389   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.841724   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.939911   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.148803   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.341867   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.437743   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.649220   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.841636   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.937733   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.148853   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.341623   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.438291   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.544155   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:22.648789   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.842117   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.937569   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.148605   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.342228   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.437946   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.648725   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.848611   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.937702   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.148830   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.341472   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.437746   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.648857   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.841524   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.937579   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.043875   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:25.148986   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.341729   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.438614   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.648859   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.842571   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.937660   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.148067   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.342525   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.442495   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.649368   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.841986   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.937210   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.044290   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:27.148266   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.341925   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.437369   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.648710   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.842271   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.937289   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.149389   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.341712   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.437988   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.649507   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.841935   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.937651   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.148305   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.341758   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.437230   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.544648   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:29.648789   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.842453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.937780   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.149144   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.341971   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.436935   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.648826   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.842241   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.937301   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.148532   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.342364   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.438028   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.649021   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.842529   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.938084   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.044452   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:32.148477   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.342165   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.437629   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.649007   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.841446   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.937583   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.148965   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.341801   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.437144   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.649484   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.842344   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.937348   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.148522   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.342404   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.438126   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.543640   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:34.649404   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.842417   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.937940   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.149191   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.341955   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.437296   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.649499   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.841951   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.937835   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.148878   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.342396   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.437451   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.648935   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.841429   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.937515   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.043652   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:37.148879   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.341650   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.438917   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.648863   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.843665   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.937755   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.148476   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.342129   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.437617   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.648850   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.842096   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.937210   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.044295   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:39.148546   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.342070   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.437434   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.649394   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.850992   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.937068   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.148412   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.342026   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.438818   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.648424   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.842673   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.937959   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.149077   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.341573   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.437823   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.544866   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:41.649385   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.842400   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.942736   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.148726   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.342124   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.438550   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.649404   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.841927   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.937808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.149523   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.341957   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.437318   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.545247   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:43.648618   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.842970   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.938236   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.149170   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.342180   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.437399   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.649533   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.842942   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.937846   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.149581   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.342185   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.437873   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.649109   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.842031   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.937050   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.043865   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:46.149131   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.342272   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.437555   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.649645   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.850195   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.951731   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.044952   12653 pod_ready.go:93] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.044977   12653 pod_ready.go:82] duration metric: took 47.006412913s for pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.044991   12653 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.048830   12653 pod_ready.go:93] pod "etcd-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.048847   12653 pod_ready.go:82] duration metric: took 3.848159ms for pod "etcd-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.048861   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.052536   12653 pod_ready.go:93] pod "kube-apiserver-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.052558   12653 pod_ready.go:82] duration metric: took 3.691187ms for pod "kube-apiserver-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.052566   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.056167   12653 pod_ready.go:93] pod "kube-controller-manager-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.056192   12653 pod_ready.go:82] duration metric: took 3.620465ms for pod "kube-controller-manager-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.056201   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fnr7f" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.060021   12653 pod_ready.go:93] pod "kube-proxy-fnr7f" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.060038   12653 pod_ready.go:82] duration metric: took 3.830746ms for pod "kube-proxy-fnr7f" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.060046   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.149672   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.342533   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.437808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.441161   12653 pod_ready.go:93] pod "kube-scheduler-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.441181   12653 pod_ready.go:82] duration metric: took 381.129532ms for pod "kube-scheduler-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.441188   12653 pod_ready.go:39] duration metric: took 48.999654984s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:24:47.441205   12653 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:24:47.441254   12653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:24:47.453909   12653 api_server.go:72] duration metric: took 50.345260117s to wait for apiserver process to appear ...
	I0916 10:24:47.453935   12653 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:24:47.453960   12653 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:24:47.458673   12653 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:24:47.459648   12653 api_server.go:141] control plane version: v1.31.1
	I0916 10:24:47.459673   12653 api_server.go:131] duration metric: took 5.729621ms to wait for apiserver health ...
	I0916 10:24:47.459683   12653 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:24:47.648237   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.648583   12653 system_pods.go:59] 19 kube-system pods found
	I0916 10:24:47.648620   12653 system_pods.go:61] "coredns-7c65d6cfc9-9rccl" [f2ffddc5-3995-4d5a-8059-297b3859f9c5] Running
	I0916 10:24:47.648634   12653 system_pods.go:61] "csi-hostpath-attacher-0" [07b91be6-2f2b-441c-80ee-f832e9ee2e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:24:47.648642   12653 system_pods.go:61] "csi-hostpath-resizer-0" [311c306b-accd-4242-8415-81199a4ce054] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:24:47.648653   12653 system_pods.go:61] "csi-hostpathplugin-qdnbn" [8f5408a2-a7eb-4f76-8ce0-b885fb4b47e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:24:47.648667   12653 system_pods.go:61] "etcd-addons-191972" [81af20b7-9b19-4723-9b92-0ded3d775cd3] Running
	I0916 10:24:47.648673   12653 system_pods.go:61] "kindnet-rxp8k" [02b143c0-bbb4-4f94-8448-9c3c4f248a87] Running
	I0916 10:24:47.648678   12653 system_pods.go:61] "kube-apiserver-addons-191972" [1aabf917-f381-4e69-8524-954958c99b7e] Running
	I0916 10:24:47.648684   12653 system_pods.go:61] "kube-controller-manager-addons-191972" [ee796e67-bd06-4d93-9d20-aabcbb395ba2] Running
	I0916 10:24:47.648690   12653 system_pods.go:61] "kube-ingress-dns-minikube" [0f28fa0b-84f1-4215-aa41-4596ab4cef8b] Running
	I0916 10:24:47.648696   12653 system_pods.go:61] "kube-proxy-fnr7f" [a9e53f94-30ad-4178-b9e2-3ba4354a5adf] Running
	I0916 10:24:47.648700   12653 system_pods.go:61] "kube-scheduler-addons-191972" [32bbb72d-e291-4c84-8709-6066cf10b0cf] Running
	I0916 10:24:47.648709   12653 system_pods.go:61] "metrics-server-84c5f94fbc-s7654" [14e280ea-8ba8-4805-844c-aeff8fb18ce0] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:24:47.648719   12653 system_pods.go:61] "nvidia-device-plugin-daemonset-vpb85" [14ca6c72-b73b-4254-910a-0b876ca73f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:24:47.648732   12653 system_pods.go:61] "registry-66c9cd494c-vsbgv" [bd70fcec-e032-4dbd-902c-a139ac179bbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:24:47.648740   12653 system_pods.go:61] "registry-proxy-6vsnj" [05d6014b-9706-4d7a-a816-dbc7f557cd15] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:24:47.648749   12653 system_pods.go:61] "snapshot-controller-56fcc65765-4g9w6" [ad55cf42-58df-4d10-aae8-7626a86e7e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:47.648760   12653 system_pods.go:61] "snapshot-controller-56fcc65765-htkmc" [70a5c810-f514-4b23-b3a3-7a4cc46c35e2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:47.648766   12653 system_pods.go:61] "storage-provisioner" [ca9a25b7-8324-4ea1-a525-24f8c308baea] Running
	I0916 10:24:47.648777   12653 system_pods.go:61] "tiller-deploy-b48cc5f79-ddkxz" [dfe534c4-9e29-4907-b8cc-1dd12fc52f45] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:24:47.648789   12653 system_pods.go:74] duration metric: took 189.097544ms to wait for pod list to return data ...
	I0916 10:24:47.648801   12653 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:24:47.841018   12653 default_sa.go:45] found service account: "default"
	I0916 10:24:47.841043   12653 default_sa.go:55] duration metric: took 192.233696ms for default service account to be created ...
	I0916 10:24:47.841053   12653 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:24:47.841394   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.937402   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.049475   12653 system_pods.go:86] 19 kube-system pods found
	I0916 10:24:48.049509   12653 system_pods.go:89] "coredns-7c65d6cfc9-9rccl" [f2ffddc5-3995-4d5a-8059-297b3859f9c5] Running
	I0916 10:24:48.049523   12653 system_pods.go:89] "csi-hostpath-attacher-0" [07b91be6-2f2b-441c-80ee-f832e9ee2e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:24:48.049533   12653 system_pods.go:89] "csi-hostpath-resizer-0" [311c306b-accd-4242-8415-81199a4ce054] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:24:48.049541   12653 system_pods.go:89] "csi-hostpathplugin-qdnbn" [8f5408a2-a7eb-4f76-8ce0-b885fb4b47e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:24:48.049546   12653 system_pods.go:89] "etcd-addons-191972" [81af20b7-9b19-4723-9b92-0ded3d775cd3] Running
	I0916 10:24:48.049550   12653 system_pods.go:89] "kindnet-rxp8k" [02b143c0-bbb4-4f94-8448-9c3c4f248a87] Running
	I0916 10:24:48.049554   12653 system_pods.go:89] "kube-apiserver-addons-191972" [1aabf917-f381-4e69-8524-954958c99b7e] Running
	I0916 10:24:48.049560   12653 system_pods.go:89] "kube-controller-manager-addons-191972" [ee796e67-bd06-4d93-9d20-aabcbb395ba2] Running
	I0916 10:24:48.049569   12653 system_pods.go:89] "kube-ingress-dns-minikube" [0f28fa0b-84f1-4215-aa41-4596ab4cef8b] Running
	I0916 10:24:48.049572   12653 system_pods.go:89] "kube-proxy-fnr7f" [a9e53f94-30ad-4178-b9e2-3ba4354a5adf] Running
	I0916 10:24:48.049576   12653 system_pods.go:89] "kube-scheduler-addons-191972" [32bbb72d-e291-4c84-8709-6066cf10b0cf] Running
	I0916 10:24:48.049579   12653 system_pods.go:89] "metrics-server-84c5f94fbc-s7654" [14e280ea-8ba8-4805-844c-aeff8fb18ce0] Running
	I0916 10:24:48.049587   12653 system_pods.go:89] "nvidia-device-plugin-daemonset-vpb85" [14ca6c72-b73b-4254-910a-0b876ca73f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:24:48.049595   12653 system_pods.go:89] "registry-66c9cd494c-vsbgv" [bd70fcec-e032-4dbd-902c-a139ac179bbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:24:48.049600   12653 system_pods.go:89] "registry-proxy-6vsnj" [05d6014b-9706-4d7a-a816-dbc7f557cd15] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:24:48.049605   12653 system_pods.go:89] "snapshot-controller-56fcc65765-4g9w6" [ad55cf42-58df-4d10-aae8-7626a86e7e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:48.049613   12653 system_pods.go:89] "snapshot-controller-56fcc65765-htkmc" [70a5c810-f514-4b23-b3a3-7a4cc46c35e2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:48.049618   12653 system_pods.go:89] "storage-provisioner" [ca9a25b7-8324-4ea1-a525-24f8c308baea] Running
	I0916 10:24:48.049625   12653 system_pods.go:89] "tiller-deploy-b48cc5f79-ddkxz" [dfe534c4-9e29-4907-b8cc-1dd12fc52f45] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:24:48.049634   12653 system_pods.go:126] duration metric: took 208.573497ms to wait for k8s-apps to be running ...
	I0916 10:24:48.049644   12653 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:24:48.049682   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:24:48.060846   12653 system_svc.go:56] duration metric: took 11.19263ms WaitForService to wait for kubelet
	I0916 10:24:48.060871   12653 kubeadm.go:582] duration metric: took 50.952228588s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:24:48.060890   12653 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:24:48.148219   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.242671   12653 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:24:48.242705   12653 node_conditions.go:123] node cpu capacity is 8
	I0916 10:24:48.242718   12653 node_conditions.go:105] duration metric: took 181.823571ms to run NodePressure ...
	I0916 10:24:48.242730   12653 start.go:241] waiting for startup goroutines ...
	I0916 10:24:48.342074   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.437253   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.650425   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.850814   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.937328   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.149694   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.342609   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.438289   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.649584   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.842847   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.936933   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.149348   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.342164   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.438163   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.649197   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.853453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.938034   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.148940   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.341925   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.437207   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.649501   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.841516   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.937843   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.148973   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.341463   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.437548   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.649904   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.842395   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.938876   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.150346   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.342226   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.437852   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.650214   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.841999   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.938041   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.149543   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.342470   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.438196   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.649301   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.842219   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.937405   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.148757   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.342352   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.437453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.649467   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.842884   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.938335   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.149527   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.342461   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.438207   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.649107   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.841744   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.938316   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.150214   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.342941   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.438321   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.650060   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.841776   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.937801   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.148724   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.342609   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.437714   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.648506   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.842214   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.937202   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.149022   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.341924   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.437205   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.649919   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.842721   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.943895   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.148461   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.342965   12653 kapi.go:107] duration metric: took 53.504381408s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:25:00.438324   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.649093   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.937839   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.148871   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.436988   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.649359   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.937842   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.149127   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.439235   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.648644   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.937625   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.148437   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.438471   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.649883   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.936881   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.149787   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.438325   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.649405   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.937307   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.148501   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.437162   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.649408   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.937329   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.148922   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.437615   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.648794   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.937817   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.149424   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.437622   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.648805   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.975373   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.148579   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.438130   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.649051   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.938155   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.241812   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.438112   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.649051   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.937597   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.148065   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.438452   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.649615   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.937657   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.150286   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.438138   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.648515   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.938254   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.148855   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.437045   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.648984   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.937480   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.149222   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.437879   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.648073   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.937714   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.148744   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.437856   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.648905   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.937125   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.149947   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.438534   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.649415   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.938563   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.148929   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.437971   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.649574   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.938374   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.149584   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.437332   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.649230   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.939095   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.148655   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.437781   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.648991   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.937887   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.149216   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.437411   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.649222   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.937654   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.149853   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.438168   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.648811   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.948409   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.172608   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.655855   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.656415   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.973917   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.149178   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.438576   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.649097   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.939034   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.149425   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.438124   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.650285   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.938421   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.148909   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.441944   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.649383   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.938850   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.149722   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.437832   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.649648   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.938500   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.149259   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.437884   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.649790   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.937641   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.149739   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:27.438223   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.648728   12653 kapi.go:107] duration metric: took 1m23.003864669s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:25:27.938153   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.438461   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.939228   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.438060   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.937952   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.438284   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.938383   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.437781   12653 kapi.go:107] duration metric: took 1m24.004430138s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
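
(Every kapi.go:96 line in this log is one iteration of the same wait loop: list pods by label selector, log the phase, and retry while any match is still Pending. A rough client-go sketch of such a loop follows; the namespace, selector, and timeout are placeholders, and minikube's real kapi.go differs in detail.)

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabeledPods polls every 500ms until all pods matching selector have
// left the Pending phase, logging each retry like the kapi.go lines above.
func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient error or nothing scheduled yet: retry
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodPending {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitForLabeledPods(context.Background(), cs,
		"gcp-auth", "kubernetes.io/minikube-addons=gcp-auth", 4*time.Minute)
	fmt.Println("wait result:", err)
}
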
	I0916 10:26:53.748019   12653 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:26:53.748042   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:54.248033   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:54.748085   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:55.248231   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:55.748800   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:56.251601   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:56.748202   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:57.248415   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:57.748866   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:58.248439   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:58.748615   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:59.248797   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:59.748674   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:00.248751   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:00.748977   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:01.247802   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:01.749050   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:02.247827   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:02.751439   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:03.248607   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:03.748774   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:04.248993   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:04.748179   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:05.248453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:05.748269   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:06.248843   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:06.749191   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:07.248224   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:07.748003   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:08.248208   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:08.748339   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:09.248558   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:09.748890   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:10.247853   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:10.748462   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:11.248698   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:11.748605   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:12.249209   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:12.747956   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:13.247977   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 31 near-identical "waiting for pod kubernetes.io/minikube-addons=gcp-auth, current state: Pending" poll lines (one every 500ms, 10:27:13 through 10:27:28) elided ...]
	I0916 10:27:29.249618   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:29.748597   12653 kapi.go:107] duration metric: took 3m21.003635946s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:27:29.750701   12653 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-191972 cluster.
	I0916 10:27:29.752412   12653 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:27:29.754028   12653 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
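
	A minimal sketch of the `gcp-auth-skip-secret` opt-out mentioned above — pod name, image, and the "true" value are illustrative; per the message, only the label key matters:

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds              # illustrative name
	      labels:
	        gcp-auth-skip-secret: "true"  # key checked by the addon; the value shown is an assumption
	    spec:
	      containers:
	      - name: app
	        image: busybox
	        command: ["sleep", "3600"]
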
	I0916 10:27:29.756074   12653 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner-rancher, volcano, helm-tiller, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0916 10:27:29.757930   12653 addons.go:510] duration metric: took 3m32.649258168s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner-rancher volcano helm-tiller metrics-server inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0916 10:27:29.758012   12653 start.go:246] waiting for cluster config update ...
	I0916 10:27:29.758039   12653 start.go:255] writing updated cluster config ...
	I0916 10:27:29.758383   12653 ssh_runner.go:195] Run: rm -f paused
	I0916 10:27:29.765351   12653 out.go:177] * Done! kubectl is now configured to use "addons-191972" cluster and "default" namespace by default
	E0916 10:27:29.767004   12653 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
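
	The "exec format error" above typically means the kubectl binary's architecture does not match the host (for example an arm64 build on an amd64 node); a quick hedged check, assuming `file` is installed:

	    $ file /usr/local/bin/kubectl   # expect "ELF 64-bit ... x86-64" on this amd64 host
	    $ uname -m                      # x86_64 here, per the kernel section below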
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	85bcbbfdfc074       195d612ae7722       2 minutes ago       Exited              gadget                                   6                   d4feba9de8c25       gadget-rwwbs
	cfade64badb92       db2fc13d44d50       5 minutes ago       Running             gcp-auth                                 0                   99d0fe27850b3       gcp-auth-89d5ffd79-6r2td
	df81f1fc28725       a876393c9504b       6 minutes ago       Running             admission                                0                   0aa4b1d0acb5a       volcano-admission-77d7d48b68-rcfsk
	9dd4a83ba6d70       6041e92ec449f       6 minutes ago       Running             volcano-scheduler                        1                   9564c1c96bcee       volcano-scheduler-576bc46687-jtz7f
	72101e37ab665       738351fd438f0       7 minutes ago       Running             csi-snapshotter                          0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	da8f6a34306e1       931dbfd16f87c       7 minutes ago       Running             csi-provisioner                          0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	1649420a66573       e899260153aed       7 minutes ago       Running             liveness-probe                           0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	e0e474b6d95e5       e255e073c508c       7 minutes ago       Running             hostpath                                 0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	d5fc898fd874b       a80c8fd6e5229       7 minutes ago       Running             controller                               0                   30db636a12234       ingress-nginx-controller-bc57996ff-lpb7q
	06d43e898075b       88ef14a257f42       7 minutes ago       Running             node-driver-registrar                    0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	39c5183f27011       ce263a8653f9c       7 minutes ago       Exited              patch                                    0                   589d98ccee909       ingress-nginx-admission-patch-8f8nz
	a8bb0086c52b5       6041e92ec449f       7 minutes ago       Exited              volcano-scheduler                        0                   9564c1c96bcee       volcano-scheduler-576bc46687-jtz7f
	c87d3f3268f2d       159abe21a6880       7 minutes ago       Exited              nvidia-device-plugin-ctr                 0                   4d5298be39c95       nvidia-device-plugin-daemonset-vpb85
	ddf31d8b68bc1       a876393c9504b       7 minutes ago       Exited              main                                     0                   b49978f431ab4       volcano-admission-init-57gk4
	06cf11b7a83f9       ce263a8653f9c       7 minutes ago       Exited              create                                   0                   6301c91177942       ingress-nginx-admission-create-5rjsx
	1cd468b4437bd       a1ed5895ba635       7 minutes ago       Running             csi-external-health-monitor-controller   0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	79266075c79ff       59cbb42146a37       7 minutes ago       Running             csi-attacher                             0                   a4c401b363464       csi-hostpath-attacher-0
	c65d9de60c2d0       aa61ee9c70bc4       7 minutes ago       Running             volume-snapshot-controller               0                   dba5883c9dc9b       snapshot-controller-56fcc65765-4g9w6
	0c025c1b7dd4c       19a639eda60f0       7 minutes ago       Running             csi-resizer                              0                   176615116e8de       csi-hostpath-resizer-0
	c7d7b6bb58927       96e410111f023       7 minutes ago       Running             volcano-controllers                      0                   84cb34271a61b       volcano-controllers-56675bb4d5-hdpdb
	6819af68287c4       aa61ee9c70bc4       7 minutes ago       Running             volume-snapshot-controller               0                   bb404cbffba4e       snapshot-controller-56fcc65765-htkmc
	89cfd63e70df2       3f39089e90831       7 minutes ago       Running             tiller                                   0                   79bab02e559b8       tiller-deploy-b48cc5f79-ddkxz
	576d6c9483015       48d9cfaaf3904       7 minutes ago       Running             metrics-server                           0                   debbe4f662687       metrics-server-84c5f94fbc-s7654
	3c2ba113f3a92       c69fa2e9cbf5f       7 minutes ago       Running             coredns                                  0                   e557eec597dbb       coredns-7c65d6cfc9-9rccl
	74825d98cba88       e16d1e3a10667       8 minutes ago       Running             local-path-provisioner                   0                   1e611781a41cb       local-path-provisioner-86d989889c-w6mf9
	dfe8c0b03e5c3       30dd67412fdea       8 minutes ago       Running             minikube-ingress-dns                     0                   6682d7fdc0949       kube-ingress-dns-minikube
	62a4b8c25074d       6e38f40d628db       8 minutes ago       Running             storage-provisioner                      0                   54247c11bac23       storage-provisioner
	4c4482bfa98cf       12968670680f4       8 minutes ago       Running             kindnet-cni                              0                   48c4106711b6e       kindnet-rxp8k
	d9d3353287790       60c005f310ff3       8 minutes ago       Running             kube-proxy                               0                   b70e27ed4bc15       kube-proxy-fnr7f
	6e4dbd39a8ef5       175ffd71cce3d       8 minutes ago       Running             kube-controller-manager                  0                   f593f7267aeda       kube-controller-manager-addons-191972
	c76b948fbd083       6bab7719df100       8 minutes ago       Running             kube-apiserver                           0                   a7eb33c199dbc       kube-apiserver-addons-191972
	0539bdd901d4a       9aa1fad941575       8 minutes ago       Running             kube-scheduler                           0                   3aba8d618e3fa       kube-scheduler-addons-191972
	92c65a04535dd       2e96e5913fc06       8 minutes ago       Running             etcd                                     0                   84fc0865b25fe       etcd-addons-191972
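
	The table above is CRI client output; assuming crictl is present in the minikube node image (it is in recent images), an equivalent listing can be reproduced with:

	    $ minikube -p addons-191972 ssh "sudo crictl ps -a"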
	
	
	==> containerd <==
	Sep 16 10:32:44 addons-191972 containerd[858]: time="2024-09-16T10:32:44.626466454Z" level=warning msg="cleanup warnings time=\"2024-09-16T10:32:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Sep 16 10:32:44 addons-191972 containerd[858]: time="2024-09-16T10:32:44.772565379Z" level=info msg="TearDown network for sandbox \"a76629f8ed521569273bb6f7244341b0da48a8f1944c9ee71209491b5a016045\" successfully"
	Sep 16 10:32:44 addons-191972 containerd[858]: time="2024-09-16T10:32:44.772619417Z" level=info msg="StopPodSandbox for \"a76629f8ed521569273bb6f7244341b0da48a8f1944c9ee71209491b5a016045\" returns successfully"
	Sep 16 10:32:44 addons-191972 containerd[858]: time="2024-09-16T10:32:44.901713412Z" level=info msg="StopContainer for \"4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302\" with timeout 30 (s)"
	Sep 16 10:32:44 addons-191972 containerd[858]: time="2024-09-16T10:32:44.902218428Z" level=info msg="Stop container \"4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302\" with signal terminated"
	Sep 16 10:32:44 addons-191972 containerd[858]: time="2024-09-16T10:32:44.955507656Z" level=info msg="shim disconnected" id=4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302 namespace=k8s.io
	Sep 16 10:32:44 addons-191972 containerd[858]: time="2024-09-16T10:32:44.955581294Z" level=warning msg="cleaning up after shim disconnected" id=4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302 namespace=k8s.io
	Sep 16 10:32:44 addons-191972 containerd[858]: time="2024-09-16T10:32:44.955593977Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:32:44 addons-191972 containerd[858]: time="2024-09-16T10:32:44.974418120Z" level=info msg="StopContainer for \"4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302\" returns successfully"
	Sep 16 10:32:44 addons-191972 containerd[858]: time="2024-09-16T10:32:44.975011168Z" level=info msg="StopPodSandbox for \"d087511b13dbf3a07fb94895d6e0bdb8a92895ab352fca17de5cda7d27a93625\""
	Sep 16 10:32:44 addons-191972 containerd[858]: time="2024-09-16T10:32:44.975082859Z" level=info msg="Container to stop \"4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 16 10:32:45 addons-191972 containerd[858]: time="2024-09-16T10:32:45.037094965Z" level=info msg="shim disconnected" id=d087511b13dbf3a07fb94895d6e0bdb8a92895ab352fca17de5cda7d27a93625 namespace=k8s.io
	Sep 16 10:32:45 addons-191972 containerd[858]: time="2024-09-16T10:32:45.037156554Z" level=warning msg="cleaning up after shim disconnected" id=d087511b13dbf3a07fb94895d6e0bdb8a92895ab352fca17de5cda7d27a93625 namespace=k8s.io
	Sep 16 10:32:45 addons-191972 containerd[858]: time="2024-09-16T10:32:45.037167204Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:32:45 addons-191972 containerd[858]: time="2024-09-16T10:32:45.120403833Z" level=info msg="TearDown network for sandbox \"d087511b13dbf3a07fb94895d6e0bdb8a92895ab352fca17de5cda7d27a93625\" successfully"
	Sep 16 10:32:45 addons-191972 containerd[858]: time="2024-09-16T10:32:45.120447025Z" level=info msg="StopPodSandbox for \"d087511b13dbf3a07fb94895d6e0bdb8a92895ab352fca17de5cda7d27a93625\" returns successfully"
	Sep 16 10:32:45 addons-191972 containerd[858]: time="2024-09-16T10:32:45.330890836Z" level=info msg="RemoveContainer for \"7aa17b075bc66addfb41c37bd138411b2f3df8cfefcd1b9d98fb4258d712b0d9\""
	Sep 16 10:32:45 addons-191972 containerd[858]: time="2024-09-16T10:32:45.337080411Z" level=info msg="RemoveContainer for \"7aa17b075bc66addfb41c37bd138411b2f3df8cfefcd1b9d98fb4258d712b0d9\" returns successfully"
	Sep 16 10:32:45 addons-191972 containerd[858]: time="2024-09-16T10:32:45.338102231Z" level=error msg="ContainerStatus for \"7aa17b075bc66addfb41c37bd138411b2f3df8cfefcd1b9d98fb4258d712b0d9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7aa17b075bc66addfb41c37bd138411b2f3df8cfefcd1b9d98fb4258d712b0d9\": not found"
	Sep 16 10:32:45 addons-191972 containerd[858]: time="2024-09-16T10:32:45.339506615Z" level=info msg="RemoveContainer for \"b2d8c858e6464044ecb7241662664ad6f99a290eb886500fea4d1549372fa8f8\""
	Sep 16 10:32:45 addons-191972 containerd[858]: time="2024-09-16T10:32:45.346381841Z" level=info msg="RemoveContainer for \"b2d8c858e6464044ecb7241662664ad6f99a290eb886500fea4d1549372fa8f8\" returns successfully"
	Sep 16 10:32:45 addons-191972 containerd[858]: time="2024-09-16T10:32:45.351557719Z" level=error msg="ContainerStatus for \"b2d8c858e6464044ecb7241662664ad6f99a290eb886500fea4d1549372fa8f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b2d8c858e6464044ecb7241662664ad6f99a290eb886500fea4d1549372fa8f8\": not found"
	Sep 16 10:32:45 addons-191972 containerd[858]: time="2024-09-16T10:32:45.353389948Z" level=info msg="RemoveContainer for \"4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302\""
	Sep 16 10:32:45 addons-191972 containerd[858]: time="2024-09-16T10:32:45.359622377Z" level=info msg="RemoveContainer for \"4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302\" returns successfully"
	Sep 16 10:32:45 addons-191972 containerd[858]: time="2024-09-16T10:32:45.420440064Z" level=error msg="ContainerStatus for \"4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302\": not found"
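
	The ContainerStatus "not found" errors above are a common teardown pattern rather than a fault: the kubelet removes a container and then polls its status once more, which containerd can no longer serve. A hedged way to confirm a container is really gone (the ID prefix is taken from the log above):

	    $ minikube -p addons-191972 ssh "sudo crictl inspect 4b7eae446458" \
	        || echo "container removed, as the RemoveContainer line reports"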
	
	
	==> coredns [3c2ba113f3a928b6de94c4ca0bf607534ff798f3d85ffd2a7685ed6dacc00744] <==
	[INFO] 10.244.0.3:34722 - 16813 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126799s
	[INFO] 10.244.0.3:47807 - 19593 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078163s
	[INFO] 10.244.0.3:47807 - 48005 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012131s
	[INFO] 10.244.0.3:52137 - 389 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004304691s
	[INFO] 10.244.0.3:52137 - 40577 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004777432s
	[INFO] 10.244.0.3:37044 - 23366 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003875752s
	[INFO] 10.244.0.3:37044 - 14153 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004520489s
	[INFO] 10.244.0.3:37775 - 29429 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003806717s
	[INFO] 10.244.0.3:37775 - 41674 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003872738s
	[INFO] 10.244.0.3:58704 - 7476 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090446s
	[INFO] 10.244.0.3:58704 - 1849 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000134094s
	[INFO] 10.244.0.25:38825 - 37363 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216144s
	[INFO] 10.244.0.25:38931 - 39307 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000245831s
	[INFO] 10.244.0.25:50024 - 16483 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164924s
	[INFO] 10.244.0.25:42236 - 32299 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000196632s
	[INFO] 10.244.0.25:49331 - 38072 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114124s
	[INFO] 10.244.0.25:36861 - 61813 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164666s
	[INFO] 10.244.0.25:33081 - 5019 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.00927584s
	[INFO] 10.244.0.25:32825 - 10257 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.009718235s
	[INFO] 10.244.0.25:50215 - 44243 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007980557s
	[INFO] 10.244.0.25:46089 - 36172 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008374403s
	[INFO] 10.244.0.25:60708 - 60516 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00523636s
	[INFO] 10.244.0.25:53932 - 3930 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005436837s
	[INFO] 10.244.0.25:33968 - 30856 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002295196s
	[INFO] 10.244.0.25:51453 - 49493 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002387298s
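
	The NXDOMAIN bursts above are ordinary ndots:5 search-path expansion: an external name like storage.googleapis.com is tried against every cluster and GCE search domain before the bare query finally returns NOERROR. Pods that mostly resolve external names can trim these extra round-trips; a minimal sketch (pod name and image are illustrative):

	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: low-ndots                 # illustrative name
	    spec:
	      dnsConfig:
	        options:
	        - name: ndots
	          value: "2"                  # dotted names then skip most search-domain expansion
	      containers:
	      - name: app
	        image: busybox
	        command: ["sleep", "3600"]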
	
	
	==> describe nodes <==
	Name:               addons-191972
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-191972
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-191972
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-191972
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-191972"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-191972
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:32:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:27:56 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:27:56 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:27:56 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:27:56 +0000   Mon, 16 Sep 2024 10:23:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-191972
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 0263fbb37d3545b09ff38a7b68907e4c
	  System UUID:                45c87f39-d597-4b0c-a097-439ebdb945ff
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (24 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  gadget                      gadget-rwwbs                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m42s
	  gcp-auth                    gcp-auth-89d5ffd79-6r2td                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  headlamp                    headlamp-57fb76fcdb-6zpfv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-lpb7q    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         8m42s
	  kube-system                 coredns-7c65d6cfc9-9rccl                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m50s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 csi-hostpathplugin-qdnbn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 etcd-addons-191972                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m56s
	  kube-system                 kindnet-rxp8k                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m50s
	  kube-system                 kube-apiserver-addons-191972                250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 kube-controller-manager-addons-191972       200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m56s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-proxy-fnr7f                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m50s
	  kube-system                 kube-scheduler-addons-191972                100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m55s
	  kube-system                 metrics-server-84c5f94fbc-s7654             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         8m44s
	  kube-system                 snapshot-controller-56fcc65765-4g9w6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m41s
	  kube-system                 snapshot-controller-56fcc65765-htkmc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m41s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 tiller-deploy-b48cc5f79-ddkxz               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m44s
	  local-path-storage          local-path-provisioner-86d989889c-w6mf9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m46s
	  volcano-system              volcano-admission-77d7d48b68-rcfsk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m41s
	  volcano-system              volcano-controllers-56675bb4d5-hdpdb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m41s
	  volcano-system              volcano-scheduler-576bc46687-jtz7f          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 8m45s  kube-proxy       
	  Normal   Starting                 8m55s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m55s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  8m55s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m55s  kubelet          Node addons-191972 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m55s  kubelet          Node addons-191972 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m55s  kubelet          Node addons-191972 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m51s  node-controller  Node addons-191972 event: Registered Node addons-191972 in Controller
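
	The node summary above is kubectl describe output and can be regenerated against this cluster with:

	    $ kubectl describe node addons-191972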
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [92c65a04535ddef6879f2eb4260843c6961d1fb2395f595b3a5665263c562002] <==
	{"level":"info","ts":"2024-09-16T10:23:47.260476Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:47.261160Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:47.261447Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:47.262322Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:23:47.262576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:24:15.873285Z","caller":"traceutil/trace.go:171","msg":"trace[187537689] transaction","detail":"{read_only:false; response_revision:986; number_of_response:1; }","duration":"119.841789ms","start":"2024-09-16T10:24:15.753419Z","end":"2024-09-16T10:24:15.873261Z","steps":["trace[187537689] 'process raft request'  (duration: 119.705144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:16.060589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.178284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:24:16.060680Z","caller":"traceutil/trace.go:171","msg":"trace[2127996318] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:986; }","duration":"125.313412ms","start":"2024-09-16T10:24:15.935346Z","end":"2024-09-16T10:24:16.060659Z","steps":["trace[2127996318] 'range keys from in-memory index tree'  (duration: 125.097316ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:07.796336Z","caller":"traceutil/trace.go:171","msg":"trace[28147226] transaction","detail":"{read_only:false; response_revision:1242; number_of_response:1; }","duration":"128.826483ms","start":"2024-09-16T10:25:07.667485Z","end":"2024-09-16T10:25:07.796311Z","steps":["trace[28147226] 'process raft request'  (duration: 41.106171ms)","trace[28147226] 'compare'  (duration: 87.53434ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.424319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.488522ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031931970271159 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" mod_revision:812 > success:<request_put:<key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" value_size:4029 >> failure:<request_range:<key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T10:25:21.424401Z","caller":"traceutil/trace.go:171","msg":"trace[1168470588] linearizableReadLoop","detail":"{readStateIndex:1335; appliedIndex:1334; }","duration":"177.395065ms","start":"2024-09-16T10:25:21.246995Z","end":"2024-09-16T10:25:21.424390Z","steps":["trace[1168470588] 'read index received'  (duration: 48.427907ms)","trace[1168470588] 'applied index is now lower than readState.Index'  (duration: 128.965162ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.424444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.446761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.424466Z","caller":"traceutil/trace.go:171","msg":"trace[1171179904] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1302; }","duration":"177.469291ms","start":"2024-09-16T10:25:21.246991Z","end":"2024-09-16T10:25:21.424460Z","steps":["trace[1171179904] 'agreement among raft nodes before linearized reading'  (duration: 177.429463ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:21.424486Z","caller":"traceutil/trace.go:171","msg":"trace[1930200040] transaction","detail":"{read_only:false; response_revision:1302; number_of_response:1; }","duration":"247.357795ms","start":"2024-09-16T10:25:21.177107Z","end":"2024-09-16T10:25:21.424464Z","steps":["trace[1930200040] 'process raft request'  (duration: 118.297085ms)","trace[1930200040] 'compare'  (duration: 128.26971ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.652910Z","caller":"traceutil/trace.go:171","msg":"trace[1856019889] linearizableReadLoop","detail":"{readStateIndex:1338; appliedIndex:1335; }","duration":"218.326846ms","start":"2024-09-16T10:25:21.434567Z","end":"2024-09-16T10:25:21.652894Z","steps":["trace[1856019889] 'read index received'  (duration: 55.93458ms)","trace[1856019889] 'applied index is now lower than readState.Index'  (duration: 162.391571ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.652969Z","caller":"traceutil/trace.go:171","msg":"trace[1279722024] transaction","detail":"{read_only:false; response_revision:1305; number_of_response:1; }","duration":"224.683287ms","start":"2024-09-16T10:25:21.428268Z","end":"2024-09-16T10:25:21.652951Z","steps":["trace[1279722024] 'process raft request'  (duration: 224.540452ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:21.653003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.415614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.653027Z","caller":"traceutil/trace.go:171","msg":"trace[1008371896] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1305; }","duration":"218.457307ms","start":"2024-09-16T10:25:21.434563Z","end":"2024-09-16T10:25:21.653020Z","steps":["trace[1008371896] 'agreement among raft nodes before linearized reading'  (duration: 218.392253ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:21.652921Z","caller":"traceutil/trace.go:171","msg":"trace[1132385399] transaction","detail":"{read_only:false; response_revision:1304; number_of_response:1; }","duration":"225.049342ms","start":"2024-09-16T10:25:21.427850Z","end":"2024-09-16T10:25:21.652899Z","steps":["trace[1132385399] 'process raft request'  (duration: 131.625555ms)","trace[1132385399] 'compare'  (duration: 93.227933ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.868227Z","caller":"traceutil/trace.go:171","msg":"trace[1246984751] linearizableReadLoop","detail":"{readStateIndex:1339; appliedIndex:1338; }","duration":"139.924393ms","start":"2024-09-16T10:25:21.728284Z","end":"2024-09-16T10:25:21.868208Z","steps":["trace[1246984751] 'read index received'  (duration: 63.202511ms)","trace[1246984751] 'applied index is now lower than readState.Index'  (duration: 76.72121ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.868259Z","caller":"traceutil/trace.go:171","msg":"trace[501466804] transaction","detail":"{read_only:false; response_revision:1306; number_of_response:1; }","duration":"210.400699ms","start":"2024-09-16T10:25:21.657832Z","end":"2024-09-16T10:25:21.868233Z","steps":["trace[501466804] 'process raft request'  (duration: 133.673421ms)","trace[501466804] 'compare'  (duration: 76.618072ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.868373Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.878283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.868410Z","caller":"traceutil/trace.go:171","msg":"trace[1169815467] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1306; }","duration":"121.931335ms","start":"2024-09-16T10:25:21.746471Z","end":"2024-09-16T10:25:21.868402Z","steps":["trace[1169815467] 'agreement among raft nodes before linearized reading'  (duration: 121.861476ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:21.868538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.236255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-16T10:25:21.868579Z","caller":"traceutil/trace.go:171","msg":"trace[344111638] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1306; }","duration":"140.292497ms","start":"2024-09-16T10:25:21.728276Z","end":"2024-09-16T10:25:21.868569Z","steps":["trace[344111638] 'agreement among raft nodes before linearized reading'  (duration: 140.016451ms)"],"step_count":1}
	
	
	==> gcp-auth [cfade64badb92dacf9d0c56d24c0fb7e95088f5abf7a814ef4801971e4b26216] <==
	2024/09/16 10:27:29 GCP Auth Webhook started!
	2024/09/16 10:32:45 Ready to marshal response ...
	2024/09/16 10:32:45 Ready to write response ...
	2024/09/16 10:32:45 Ready to marshal response ...
	2024/09/16 10:32:45 Ready to write response ...
	2024/09/16 10:32:45 Ready to marshal response ...
	2024/09/16 10:32:45 Ready to write response ...
	
	
	==> kernel <==
	 10:32:46 up 15 min,  0 users,  load average: 1.13, 0.59, 0.40
	Linux addons-191972 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [4c4482bfa98cf1024c4b123130c5a320a891204919b9a1459b6f3269e1e7d29d] <==
	I0916 10:30:39.447865       1 main.go:299] handling current node
	[... 11 identical "Handling node with IPs: map[192.168.49.2:{}]" / "handling current node" pairs (every 10s, 10:30:49 through 10:32:29) elided ...]
	I0916 10:32:39.441647       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:39.441692       1 main.go:299] handling current node
	
	
	==> kube-apiserver [c76b948fbd083e0e5229c3ac96548e67224afd5a037343a2b118da9b9ae5ad3a] <==
	W0916 10:26:13.326100       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:14.378343       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	[... 15 near-identical "Failed calling webhook, failing closed mutatequeue.volcano.sh ... connection refused" retries (roughly one per second, 10:26:15 through 10:26:30) elided ...]
	W0916 10:26:31.389520       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:53.671732       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:26:53.671804       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	W0916 10:27:11.712823       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:27:11.712858       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	W0916 10:27:11.785537       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:27:11.785576       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	I0916 10:32:45.560480       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.245.36"}
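
	Note the asymmetry above: the volcano queue webhook fails closed (mutations are rejected while volcano-admission is unreachable), while gcp-auth-mutate fails open. That behavior comes from each webhook's failurePolicy; one hedged way to list them:

	    $ kubectl get mutatingwebhookconfigurations \
	        -o custom-columns=NAME:.metadata.name,POLICY:.webhooks[*].failurePolicy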
	
	
	==> kube-controller-manager [6e4dbd39a8ef56c5a753071ab0489111fcbcaac9f7cbe3b4fdf88030aa41c77b] <==
	I0916 10:27:11.810216       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:27:12.442562       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:27:12.450572       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:27:13.528146       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:27:13.560100       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:27:14.534075       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:27:14.540099       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:27:14.543857       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0916 10:27:14.564649       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:27:14.570957       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:27:14.576033       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0916 10:27:29.502878       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="6.820035ms"
	I0916 10:27:29.502976       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="56.15µs"
	I0916 10:27:44.013104       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 10:27:44.016022       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 10:27:44.039693       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 10:27:44.041144       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 10:27:56.735238       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-191972"
	I0916 10:32:39.064214       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="8.395µs"
	I0916 10:32:44.235424       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.98µs"
	I0916 10:32:44.890679       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="8.047µs"
	I0916 10:32:45.720872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="146.32449ms"
	I0916 10:32:45.726198       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="5.274582ms"
	I0916 10:32:45.726288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="52.808µs"
	I0916 10:32:45.732102       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="86.759µs"
	
	
	==> kube-proxy [d9d335328779062c055353442bb9ca0c1e2fef63bc1c598650e6ea25604013a5] <==
	I0916 10:23:59.129562       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:59.824945       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:23:59.825067       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:24:00.037013       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:24:00.040602       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:24:00.135054       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:24:00.135450       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:24:00.135471       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:24:00.237323       1 config.go:199] "Starting service config controller"
	I0916 10:24:00.237372       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:24:00.237410       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:24:00.237416       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:24:00.237471       1 config.go:328] "Starting node config controller"
	I0916 10:24:00.237491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:24:00.337642       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:24:00.337724       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:24:00.337829       1 shared_informer.go:320] Caches are synced for node config
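
	Kube-proxy's own start-up warning above suggests the fix: restrict nodePortAddresses. In this kubeadm-style cluster that setting lives in the kube-proxy ConfigMap; a minimal sketch of the relevant KubeProxyConfiguration fragment (apply via `kubectl -n kube-system edit configmap kube-proxy`, then restart the kube-proxy pod):

	    apiVersion: kubeproxy.config.k8s.io/v1alpha1
	    kind: KubeProxyConfiguration
	    nodePortAddresses: ["primary"]    # accept NodePort traffic only on each node's primary IPs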
	
	
	==> kube-scheduler [0539bdd901d4af068b2160b27df45018e72113a7a75c6a082ae7e2f64f3f908b] <==
	W0916 10:23:49.138663       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0916 10:23:49.138662       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:49.138689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:49.138696       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0916 10:23:49.138760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0916 10:23:49.138769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:23:49.138774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:49.139877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:49.139916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.064082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:50.064133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.118512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:50.118558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.132045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:50.132096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.175403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:50.175438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.199805       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:50.199848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.241540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:50.241599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:23:50.633994       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:32:45 addons-191972 kubelet[1565]: E0916 10:32:45.351786    1565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b2d8c858e6464044ecb7241662664ad6f99a290eb886500fea4d1549372fa8f8\": not found" containerID="b2d8c858e6464044ecb7241662664ad6f99a290eb886500fea4d1549372fa8f8"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.351834    1565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b2d8c858e6464044ecb7241662664ad6f99a290eb886500fea4d1549372fa8f8"} err="failed to get container status \"b2d8c858e6464044ecb7241662664ad6f99a290eb886500fea4d1549372fa8f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"b2d8c858e6464044ecb7241662664ad6f99a290eb886500fea4d1549372fa8f8\": not found"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.351865    1565 scope.go:117] "RemoveContainer" containerID="4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.420093    1565 scope.go:117] "RemoveContainer" containerID="4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: E0916 10:32:45.420707    1565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302\": not found" containerID="4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.420757    1565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302"} err="failed to get container status \"4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302\": rpc error: code = NotFound desc = an error occurred when try to find container \"4b7eae44645859f7ca2baf4dc371b6c2238b1a7377d492aaabaff2524b385302\": not found"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.523148    1565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05d6014b-9706-4d7a-a816-dbc7f557cd15" path="/var/lib/kubelet/pods/05d6014b-9706-4d7a-a816-dbc7f557cd15/volumes"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.523505    1565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3be99604-1ee4-4c70-96c9-466cd2d9349f" path="/var/lib/kubelet/pods/3be99604-1ee4-4c70-96c9-466cd2d9349f/volumes"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.523851    1565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd70fcec-e032-4dbd-902c-a139ac179bbf" path="/var/lib/kubelet/pods/bd70fcec-e032-4dbd-902c-a139ac179bbf/volumes"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: E0916 10:32:45.721855    1565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dcd47793-2cb2-4850-996d-f9cb3fb47a2d" containerName="create"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: E0916 10:32:45.721904    1565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="3be99604-1ee4-4c70-96c9-466cd2d9349f" containerName="cloud-spanner-emulator"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: E0916 10:32:45.721915    1565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="bd70fcec-e032-4dbd-902c-a139ac179bbf" containerName="registry"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: E0916 10:32:45.721924    1565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="14ca6c72-b73b-4254-910a-0b876ca73f90" containerName="nvidia-device-plugin-ctr"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: E0916 10:32:45.721934    1565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa381c71-e508-46bc-afd6-1c593c0dc6f8" containerName="yakd"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: E0916 10:32:45.721943    1565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ed9833df-9662-4c0b-90c1-3c23f7243496" containerName="patch"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: E0916 10:32:45.721953    1565 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="05d6014b-9706-4d7a-a816-dbc7f557cd15" containerName="registry-proxy"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.722011    1565 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9833df-9662-4c0b-90c1-3c23f7243496" containerName="patch"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.722022    1565 memory_manager.go:354] "RemoveStaleState removing state" podUID="3be99604-1ee4-4c70-96c9-466cd2d9349f" containerName="cloud-spanner-emulator"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.722030    1565 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa381c71-e508-46bc-afd6-1c593c0dc6f8" containerName="yakd"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.722039    1565 memory_manager.go:354] "RemoveStaleState removing state" podUID="14ca6c72-b73b-4254-910a-0b876ca73f90" containerName="nvidia-device-plugin-ctr"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.722048    1565 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd70fcec-e032-4dbd-902c-a139ac179bbf" containerName="registry"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.722057    1565 memory_manager.go:354] "RemoveStaleState removing state" podUID="05d6014b-9706-4d7a-a816-dbc7f557cd15" containerName="registry-proxy"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.722065    1565 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcd47793-2cb2-4850-996d-f9cb3fb47a2d" containerName="create"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.835353    1565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7f9s\" (UniqueName: \"kubernetes.io/projected/59a7200b-607c-4d45-8e9f-2de2431e2196-kube-api-access-c7f9s\") pod \"headlamp-57fb76fcdb-6zpfv\" (UID: \"59a7200b-607c-4d45-8e9f-2de2431e2196\") " pod="headlamp/headlamp-57fb76fcdb-6zpfv"
	Sep 16 10:32:45 addons-191972 kubelet[1565]: I0916 10:32:45.835408    1565 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/59a7200b-607c-4d45-8e9f-2de2431e2196-gcp-creds\") pod \"headlamp-57fb76fcdb-6zpfv\" (UID: \"59a7200b-607c-4d45-8e9f-2de2431e2196\") " pod="headlamp/headlamp-57fb76fcdb-6zpfv"
	
	
	==> storage-provisioner [62a4b8c25074dcef9656a9b6e749de86b5f7c97f45a25cd328153d14be1d5a78] <==
	I0916 10:24:03.139108       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:24:03.230289       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:24:03.230361       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:24:03.238016       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:24:03.238457       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff346362-6d54-491c-b142-6d85e8abf2d5", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-191972_e8089787-9f1d-4116-8123-a579d9482714 became leader
	I0916 10:24:03.238505       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-191972_e8089787-9f1d-4116-8123-a579d9482714!
	I0916 10:24:03.339118       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-191972_e8089787-9f1d-4116-8123-a579d9482714!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-191972 -n addons-191972
helpers_test.go:261: (dbg) Run:  kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (385.318µs)
helpers_test.go:263: kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/Registry (14.30s)
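The "fork/exec /usr/local/bin/kubectl: exec format error" failures above (and in the sections that follow) come from the kernel refusing to execute the kubectl binary itself, before any Kubernetes API call is made; the usual causes are a binary built for the wrong architecture or a truncated/zero-byte download. A minimal diagnostic sketch for the test agent, assuming nothing beyond the path shown in the log and standard coreutils:

    file /usr/local/bin/kubectl    # expect "ELF 64-bit LSB executable, x86-64" on this amd64 agent
    uname -m                       # host architecture; the docker info later in this report says x86_64
    ls -l /usr/local/bin/kubectl   # a 0-byte or suspiciously small file also yields "exec format error"
    head -c 4 /usr/local/bin/kubectl | od -An -tx1   # a valid Linux binary starts with the ELF magic 7f 45 4c 46

If file reports a different architecture (e.g. aarch64) or plain text, the binary on the agent is at fault, not the cluster.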

                                                
                                    
TestAddons/parallel/Ingress (1.88s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-191972 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:209: (dbg) Non-zero exit: kubectl --context addons-191972 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: fork/exec /usr/local/bin/kubectl: exec format error (422.978µs)
addons_test.go:210: failed waiting for ingress-nginx-controller : fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-191972
helpers_test.go:235: (dbg) docker inspect addons-191972:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd",
	        "Created": "2024-09-16T10:23:37.048894749Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 13416,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:23:37.183215602Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/hosts",
	        "LogPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd-json.log",
	        "Name": "/addons-191972",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-191972:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-191972",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-191972",
	                "Source": "/var/lib/docker/volumes/addons-191972/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-191972",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-191972",
	                "name.minikube.sigs.k8s.io": "addons-191972",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b247e3d2e57f223fa64fb9fece255c3b6a0f61eb064ba71e6e8c51f7e6b8590a",
	            "SandboxKey": "/var/run/docker/netns/b247e3d2e57f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-191972": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "aac8db9a46c7b7c219b85113240d1d4a2ee20d1c156fb7315fdf6aa5e797f6a8",
	                    "EndpointID": "ab683490c93590fb0411cd607b8ad8f3100f7ae01f11dd3e855f6321d940faae",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-191972",
	                        "49285aed0ac6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
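The inspect dump above is where the harness resolves the container's host-side ports (22/tcp -> 127.0.0.1:32768 for SSH, 8443/tcp -> 127.0.0.1:32771 for the API server). As an illustrative sketch using standard docker CLI commands and Go-template syntax (not part of the harness), a single mapping can be read directly:

    docker port addons-191972 8443/tcp
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-191972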
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-191972 -n addons-191972
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-191972 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-191972 logs -n 25: (1.174200844s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-297488              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p download-only-297488              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-024449              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-024449              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-297488              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-024449              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | download-docker-065822 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | download-docker-065822               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-065822            | download-docker-065822 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | binary-mirror-727123   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | binary-mirror-727123                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34779               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-727123              | binary-mirror-727123   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | enable dashboard -p                  | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| start   | -p addons-191972 --wait=true         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:27 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | -p addons-191972                     |                        |         |         |                     |                     |
	| ip      | addons-191972 ip                     | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | -p addons-191972                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:33 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | addons-191972                        |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:15.015457   12653 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:15.015610   12653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:15.015623   12653 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:15.015629   12653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:15.015835   12653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:23:15.016423   12653 out.go:352] Setting JSON to false
	I0916 10:23:15.017221   12653 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":339,"bootTime":1726481856,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:15.017316   12653 start.go:139] virtualization: kvm guest
	I0916 10:23:15.019468   12653 out.go:177] * [addons-191972] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:23:15.020856   12653 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:15.020860   12653 notify.go:220] Checking for updates...
	I0916 10:23:15.023158   12653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:15.024282   12653 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:23:15.025336   12653 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:23:15.026362   12653 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:15.027468   12653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:15.028714   12653 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:15.049632   12653 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:23:15.049710   12653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:15.095467   12653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:15.085826834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:15.095614   12653 docker.go:318] overlay module found
	I0916 10:23:15.097552   12653 out.go:177] * Using the docker driver based on user configuration
	I0916 10:23:15.098917   12653 start.go:297] selected driver: docker
	I0916 10:23:15.098932   12653 start.go:901] validating driver "docker" against <nil>
	I0916 10:23:15.098957   12653 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:15.099817   12653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:15.144749   12653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:15.136589077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:15.144922   12653 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:15.145171   12653 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:15.147081   12653 out.go:177] * Using Docker driver with root privileges
	I0916 10:23:15.148504   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:15.148563   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:15.148575   12653 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:15.148632   12653 start.go:340] cluster config:
	{Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:15.149981   12653 out.go:177] * Starting "addons-191972" primary control-plane node in "addons-191972" cluster
	I0916 10:23:15.151239   12653 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:23:15.152375   12653 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:23:15.153439   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:15.153479   12653 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:23:15.153492   12653 cache.go:56] Caching tarball of preloaded images
	I0916 10:23:15.153495   12653 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:23:15.153601   12653 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:23:15.153613   12653 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:23:15.153950   12653 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json ...
	I0916 10:23:15.153974   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json: {Name:mk77e04db13eac753d69895eba14a3f7223b28d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:15.169560   12653 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:23:15.169666   12653 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:23:15.169681   12653 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:23:15.169685   12653 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:23:15.169694   12653 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:23:15.169701   12653 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:23:27.861517   12653 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:23:27.861553   12653 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:23:27.861589   12653 start.go:360] acquireMachinesLock for addons-191972: {Name:mk1204ee6335c794af5ff39cd93a214e3c1d654b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:23:27.861691   12653 start.go:364] duration metric: took 80.959µs to acquireMachinesLock for "addons-191972"
	I0916 10:23:27.861720   12653 start.go:93] Provisioning new machine with config: &{Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:23:27.861797   12653 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:23:27.864363   12653 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0916 10:23:27.864609   12653 start.go:159] libmachine.API.Create for "addons-191972" (driver="docker")
	I0916 10:23:27.864644   12653 client.go:168] LocalClient.Create starting
	I0916 10:23:27.864787   12653 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:23:28.100386   12653 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:23:28.472961   12653 cli_runner.go:164] Run: docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:23:28.488573   12653 cli_runner.go:211] docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:23:28.488653   12653 network_create.go:284] running [docker network inspect addons-191972] to gather additional debugging logs...
	I0916 10:23:28.488675   12653 cli_runner.go:164] Run: docker network inspect addons-191972
	W0916 10:23:28.503724   12653 cli_runner.go:211] docker network inspect addons-191972 returned with exit code 1
	I0916 10:23:28.503773   12653 network_create.go:287] error running [docker network inspect addons-191972]: docker network inspect addons-191972: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-191972 not found
	I0916 10:23:28.503790   12653 network_create.go:289] output of [docker network inspect addons-191972]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-191972 not found
	
	** /stderr **
	I0916 10:23:28.503874   12653 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:28.520445   12653 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ac6790}
	I0916 10:23:28.520486   12653 network_create.go:124] attempt to create docker network addons-191972 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 10:23:28.520531   12653 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-191972 addons-191972
	I0916 10:23:28.578324   12653 network_create.go:108] docker network addons-191972 192.168.49.0/24 created
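Note: minikube probed 192.168.49.0/24 as the first free private subnet and created the bridge network with it. The chosen subnet can be confirmed against the daemon afterwards; a minimal sketch using the same network name (the Go template is an assumption about the usual `docker network inspect` output shape):
	docker network inspect addons-191972 --format '{{(index .IPAM.Config 0).Subnet}}'   # expect 192.168.49.0/24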
	I0916 10:23:28.578353   12653 kic.go:121] calculated static IP "192.168.49.2" for the "addons-191972" container
	I0916 10:23:28.578405   12653 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:23:28.593459   12653 cli_runner.go:164] Run: docker volume create addons-191972 --label name.minikube.sigs.k8s.io=addons-191972 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:23:28.611104   12653 oci.go:103] Successfully created a docker volume addons-191972
	I0916 10:23:28.611189   12653 cli_runner.go:164] Run: docker run --rm --name addons-191972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --entrypoint /usr/bin/test -v addons-191972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:23:32.566442   12653 cli_runner.go:217] Completed: docker run --rm --name addons-191972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --entrypoint /usr/bin/test -v addons-191972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (3.955205965s)
	I0916 10:23:32.566475   12653 oci.go:107] Successfully prepared a docker volume addons-191972
	I0916 10:23:32.566499   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:32.566524   12653 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:23:32.566588   12653 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-191972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:23:36.989473   12653 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-191972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.422844639s)
	I0916 10:23:36.989499   12653 kic.go:203] duration metric: took 4.422974303s to extract preloaded images to volume ...
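Note: the two `docker run` invocations above are minikube's preload flow: a throwaway container mounts the host-side tarball read-only next to the named volume and untars into it, so the node container later starts with /var pre-populated. A standalone sketch of the same pattern, assuming an image that ships both tar and lz4 (the volume name, tarball path, and image below are placeholders):
	docker volume create demo-var
	docker run --rm \
	  -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
	  -v demo-var:/extractDir \
	  --entrypoint /usr/bin/tar \
	  some-image-with-lz4 -I lz4 -xf /preloaded.tar -C /extractDir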
	W0916 10:23:36.989616   12653 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:23:36.989704   12653 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:23:37.034645   12653 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-191972 --name addons-191972 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-191972 --network addons-191972 --ip 192.168.49.2 --volume addons-191972:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:23:37.351088   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Running}}
	I0916 10:23:37.369798   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.389505   12653 cli_runner.go:164] Run: docker exec addons-191972 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:23:37.432507   12653 oci.go:144] the created container "addons-191972" has a running status.
	I0916 10:23:37.432542   12653 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa...
	I0916 10:23:37.512853   12653 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:23:37.532177   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.549342   12653 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:23:37.549361   12653 kic_runner.go:114] Args: [docker exec --privileged addons-191972 chown docker:docker /home/docker/.ssh/authorized_keys]
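Note: with the public key installed as the docker user's authorized_keys, the node is reachable over whatever host port docker mapped to container port 22 (32768 in this run; the port is assigned dynamically). A sketch of connecting manually, reusing the inspect template from the log (key path abbreviated):
	PORT=$(docker container inspect addons-191972 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}')
	ssh -i ~/.minikube/machines/addons-191972/id_rsa -p "$PORT" docker@127.0.0.1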
	I0916 10:23:37.594990   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.611429   12653 machine.go:93] provisionDockerMachine start ...
	I0916 10:23:37.611513   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:37.628951   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:37.629230   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:37.629249   12653 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:23:37.630101   12653 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54456->127.0.0.1:32768: read: connection reset by peer
	I0916 10:23:40.759062   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-191972
	
	I0916 10:23:40.759087   12653 ubuntu.go:169] provisioning hostname "addons-191972"
	I0916 10:23:40.759139   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:40.776123   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:40.776294   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:40.776306   12653 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-191972 && echo "addons-191972" | sudo tee /etc/hostname
	I0916 10:23:40.917999   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-191972
	
	I0916 10:23:40.918073   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:40.934369   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:40.934536   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:40.934552   12653 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-191972' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-191972/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-191972' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:23:41.063670   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:23:41.063696   12653 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:23:41.063755   12653 ubuntu.go:177] setting up certificates
	I0916 10:23:41.063769   12653 provision.go:84] configureAuth start
	I0916 10:23:41.063821   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.080185   12653 provision.go:143] copyHostCerts
	I0916 10:23:41.080289   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:23:41.080452   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:23:41.080539   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:23:41.080607   12653 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.addons-191972 san=[127.0.0.1 192.168.49.2 addons-191972 localhost minikube]
	I0916 10:23:41.189624   12653 provision.go:177] copyRemoteCerts
	I0916 10:23:41.189685   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:23:41.189718   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.206072   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.299940   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:23:41.321259   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:23:41.342100   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:23:41.362764   12653 provision.go:87] duration metric: took 298.977855ms to configureAuth
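Note: configureAuth generated a server certificate whose SANs cover 127.0.0.1, 192.168.49.2, the hostname, localhost, and minikube (listed at 10:23:41.080607 above). The SAN list can be verified with openssl; a sketch against the server.pem path used above:
	openssl x509 -in /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem \
	  -noout -text | grep -A1 'Subject Alternative Name'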
	I0916 10:23:41.362793   12653 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:23:41.362955   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:41.362966   12653 machine.go:96] duration metric: took 3.751519266s to provisionDockerMachine
	I0916 10:23:41.362991   12653 client.go:171] duration metric: took 13.498318264s to LocalClient.Create
	I0916 10:23:41.363014   12653 start.go:167] duration metric: took 13.498406844s to libmachine.API.Create "addons-191972"
	I0916 10:23:41.363024   12653 start.go:293] postStartSetup for "addons-191972" (driver="docker")
	I0916 10:23:41.363035   12653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:41.363112   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:41.363159   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.379631   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.472315   12653 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:23:41.475416   12653 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:41.475455   12653 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:41.475469   12653 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:41.475477   12653 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:23:41.475490   12653 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:23:41.475562   12653 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:23:41.475593   12653 start.go:296] duration metric: took 112.560003ms for postStartSetup
	I0916 10:23:41.475953   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.491831   12653 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json ...
	I0916 10:23:41.492098   12653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:23:41.492159   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.508709   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.604422   12653 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:23:41.608355   12653 start.go:128] duration metric: took 13.746544864s to createHost
	I0916 10:23:41.608378   12653 start.go:83] releasing machines lock for "addons-191972", held for 13.74667303s
	I0916 10:23:41.608449   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.624552   12653 ssh_runner.go:195] Run: cat /version.json
	I0916 10:23:41.624594   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.624666   12653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:23:41.624742   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.640830   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.641558   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.811513   12653 ssh_runner.go:195] Run: systemctl --version
	I0916 10:23:41.816090   12653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:41.820031   12653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:23:41.841966   12653 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:23:41.842040   12653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:41.867614   12653 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 10:23:41.867637   12653 start.go:495] detecting cgroup driver to use...
	I0916 10:23:41.867665   12653 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:41.867707   12653 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:23:41.878761   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:23:41.889209   12653 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:23:41.889272   12653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:23:41.901658   12653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:23:41.914376   12653 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:23:41.989625   12653 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:23:42.064036   12653 docker.go:233] disabling docker service ...
	I0916 10:23:42.064087   12653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:23:42.082378   12653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:23:42.092694   12653 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:23:42.163431   12653 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:23:42.235566   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:23:42.245920   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:42.260071   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:23:42.268844   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:23:42.277914   12653 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:23:42.277973   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:23:42.287090   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:42.295426   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:23:42.303716   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:42.312468   12653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:42.320449   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:23:42.328970   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:23:42.337386   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:23:42.345791   12653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:42.352855   12653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:23:42.359971   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:42.438798   12653 ssh_runner.go:195] Run: sudo systemctl restart containerd
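Note: the run of sed edits above rewrites /etc/containerd/config.toml in place (cgroupfs instead of SystemdCgroup, the pause:3.10 sandbox image, runc.v2, the CNI conf_dir, unprivileged ports) before this restart. A quick spot-check sketch:
	grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	sudo systemctl is-active containerd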
	I0916 10:23:42.548862   12653 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:23:42.548940   12653 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:23:42.552403   12653 start.go:563] Will wait 60s for crictl version
	I0916 10:23:42.552460   12653 ssh_runner.go:195] Run: which crictl
	I0916 10:23:42.555471   12653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:23:42.586679   12653 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:23:42.586752   12653 ssh_runner.go:195] Run: containerd --version
	I0916 10:23:42.608454   12653 ssh_runner.go:195] Run: containerd --version
	I0916 10:23:42.632432   12653 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:23:42.633762   12653 cli_runner.go:164] Run: docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:42.650400   12653 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:42.653892   12653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:23:42.664053   12653 kubeadm.go:883] updating cluster {Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:42.664154   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:42.664195   12653 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:42.695688   12653 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:23:42.695710   12653 containerd.go:534] Images already preloaded, skipping extraction
	I0916 10:23:42.695778   12653 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:42.727148   12653 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:23:42.727166   12653 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:23:42.727174   12653 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0916 10:23:42.727255   12653 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-191972 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
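Note: the kubelet unit fragment above is installed a few lines below as the 10-kubeadm.conf drop-in; the merged unit that systemd actually runs can be inspected on the node with (a sketch):
	systemctl cat kubelet    # base unit plus /etc/systemd/system/kubelet.service.d/10-kubeadm.conf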
	I0916 10:23:42.727302   12653 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:23:42.757474   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:42.757493   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:42.757502   12653 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:42.757520   12653 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-191972 NodeName:addons-191972 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:42.757633   12653 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-191972"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
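Note: the rendered file above bundles the v1beta3 InitConfiguration and ClusterConfiguration with KubeletConfiguration and KubeProxyConfiguration in one multi-document YAML. Recent kubeadm releases (v1.26+, an assumption worth checking against your version) can lint it offline before init; a sketch using the path it is copied to below:
	kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml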
	I0916 10:23:42.757684   12653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:42.765604   12653 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:23:42.765672   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:42.773363   12653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:23:42.789280   12653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:42.805100   12653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0916 10:23:42.820420   12653 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:23:42.823264   12653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:23:42.832700   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:42.907069   12653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:42.919246   12653 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972 for IP: 192.168.49.2
	I0916 10:23:42.919266   12653 certs.go:194] generating shared ca certs ...
	I0916 10:23:42.919279   12653 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:42.919399   12653 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:23:43.054784   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt ...
	I0916 10:23:43.054815   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt: {Name:mkf05eaa3032985e939bd1a93aa36a6d50242974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.055008   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key ...
	I0916 10:23:43.055031   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key: {Name:mk4cf19316dad04ab708c5c17e172ec92fc35230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.055134   12653 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:23:43.268289   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt ...
	I0916 10:23:43.268318   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt: {Name:mk68da284b9ad8d396a1f11e7cfb94cc6f208c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.268510   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key ...
	I0916 10:23:43.268532   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key: {Name:mkdf8c5da2a6d70c9ece2277843ebe69f9105c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.268626   12653 certs.go:256] generating profile certs ...
	I0916 10:23:43.268694   12653 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key
	I0916 10:23:43.268720   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt with IP's: []
	I0916 10:23:43.341520   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt ...
	I0916 10:23:43.341551   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: {Name:mke3c2895145f9c692cb1e6451d9766499ccc877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.341738   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key ...
	I0916 10:23:43.341755   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key: {Name:mkd6237ae8ebf429452ae0c60cea457b1f9cff1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.341855   12653 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369
	I0916 10:23:43.341882   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 10:23:43.403750   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 ...
	I0916 10:23:43.403775   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369: {Name:mk72db26b8519849abdf811ed93be5caeac2267d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.403951   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369 ...
	I0916 10:23:43.403973   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369: {Name:mk4b11dab0a085e395344dc35616a0c16f298191 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.404065   12653 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt
	I0916 10:23:43.404155   12653 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key
	I0916 10:23:43.404230   12653 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key
	I0916 10:23:43.404250   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt with IP's: []
	I0916 10:23:43.488130   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt ...
	I0916 10:23:43.488160   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt: {Name:mk11d8f9c437e5586897185f4551df7594041471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.488342   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key ...
	I0916 10:23:43.488360   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key: {Name:mk18734ee357c50ce0ff509ffb1c7e42743fa1b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.488577   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:23:43.488617   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:23:43.488652   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:43.488682   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:23:43.489279   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:43.511557   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:23:43.532934   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:43.553377   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:43.575078   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:23:43.595868   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:43.616905   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:43.637839   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:43.658915   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:43.680485   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:23:43.696295   12653 ssh_runner.go:195] Run: openssl version
	I0916 10:23:43.701282   12653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:43.709681   12653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.712715   12653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.712762   12653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.718832   12653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
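Note: b5213941.0 follows OpenSSL's hashed-directory convention: the file name is the certificate's subject hash (computed two lines up) plus a collision counter, which is what makes minikubeCA.pem trusted system-wide. The generalized pattern, as a sketch:
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"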
	I0916 10:23:43.727190   12653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:43.730247   12653 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:23:43.730290   12653 kubeadm.go:392] StartCluster: {Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:43.730356   12653 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:23:43.730405   12653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:23:43.761830   12653 cri.go:89] found id: ""
	I0916 10:23:43.761893   12653 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:43.770086   12653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:43.778465   12653 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:23:43.778522   12653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:43.786355   12653 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:43.786373   12653 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:43.786419   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:43.794471   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:43.794519   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:43.802487   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:43.810401   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:43.810451   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:43.817541   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:43.824799   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:43.824842   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:43.832032   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:43.839239   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:43.839298   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:23:43.847649   12653 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:23:43.880192   12653 kubeadm.go:310] W0916 10:23:43.879583    1109 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:43.880773   12653 kubeadm.go:310] W0916 10:23:43.880291    1109 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:43.896580   12653 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:23:43.944226   12653 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
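Note: both warnings are expected under the docker driver: the kernel "configs" module lives on the GCP host rather than in the kic container, and kubelet enablement is handled by the base image instead of systemd presets. The preflight phase can be replayed in isolation when debugging such warnings (a sketch):
	sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification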
	I0916 10:23:52.227261   12653 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:52.227338   12653 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:52.227418   12653 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:23:52.227466   12653 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:23:52.227501   12653 kubeadm.go:310] OS: Linux
	I0916 10:23:52.227541   12653 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:23:52.227584   12653 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:23:52.227625   12653 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:23:52.227670   12653 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:23:52.227711   12653 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:23:52.227786   12653 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:23:52.227872   12653 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:23:52.227947   12653 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:23:52.227994   12653 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:23:52.228098   12653 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:52.228218   12653 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:52.228360   12653 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:23:52.228491   12653 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:52.230143   12653 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:52.230239   12653 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:52.230328   12653 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:52.230422   12653 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:52.230504   12653 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:52.230596   12653 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:52.230685   12653 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:52.230768   12653 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:52.230910   12653 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-191972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:52.230984   12653 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:52.231130   12653 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-191972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:52.231228   12653 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:52.231331   12653 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:52.231395   12653 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:52.231471   12653 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:52.231543   12653 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:52.231622   12653 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:52.231683   12653 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:52.231759   12653 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:52.231871   12653 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:52.231979   12653 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:52.232069   12653 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:52.233407   12653 out.go:235]   - Booting up control plane ...
	I0916 10:23:52.233500   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:52.233589   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:52.233654   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:52.233747   12653 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:52.233846   12653 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:52.233895   12653 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:52.234011   12653 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:52.234102   12653 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:52.234155   12653 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.63037ms
	I0916 10:23:52.234224   12653 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:52.234282   12653 kubeadm.go:310] [api-check] The API server is healthy after 4.501222011s
	I0916 10:23:52.234402   12653 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:52.234544   12653 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:52.234625   12653 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:52.234780   12653 kubeadm.go:310] [mark-control-plane] Marking the node addons-191972 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:52.234830   12653 kubeadm.go:310] [bootstrap-token] Using token: fe3fo6.40ynbll2pbwpp3it
	I0916 10:23:52.236918   12653 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:52.237043   12653 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:52.237118   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:52.237261   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:52.237418   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:52.237547   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:52.237659   12653 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:52.237791   12653 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:52.237856   12653 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:52.237898   12653 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:52.237904   12653 kubeadm.go:310] 
	I0916 10:23:52.237963   12653 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:52.237971   12653 kubeadm.go:310] 
	I0916 10:23:52.238040   12653 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:52.238046   12653 kubeadm.go:310] 
	I0916 10:23:52.238070   12653 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:52.238123   12653 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:52.238167   12653 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:52.238173   12653 kubeadm.go:310] 
	I0916 10:23:52.238218   12653 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:52.238223   12653 kubeadm.go:310] 
	I0916 10:23:52.238268   12653 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:52.238274   12653 kubeadm.go:310] 
	I0916 10:23:52.238329   12653 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:52.238418   12653 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:52.238507   12653 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:52.238515   12653 kubeadm.go:310] 
	I0916 10:23:52.238598   12653 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:52.238681   12653 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:52.238690   12653 kubeadm.go:310] 
	I0916 10:23:52.238801   12653 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fe3fo6.40ynbll2pbwpp3it \
	I0916 10:23:52.238908   12653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 10:23:52.238933   12653 kubeadm.go:310] 	--control-plane 
	I0916 10:23:52.238939   12653 kubeadm.go:310] 
	I0916 10:23:52.239012   12653 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:52.239020   12653 kubeadm.go:310] 
	I0916 10:23:52.239095   12653 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fe3fo6.40ynbll2pbwpp3it \
	I0916 10:23:52.239199   12653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
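For reference, the --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control-plane node with the standard pipeline from the kubeadm join documentation (an illustrative sketch, not part of this test's output):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'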
	I0916 10:23:52.239210   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:52.239215   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:52.240733   12653 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:23:52.241980   12653 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:23:52.245609   12653 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:23:52.245625   12653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:23:52.261912   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:23:52.447057   12653 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:52.447144   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.447165   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-191972 minikube.k8s.io/updated_at=2024_09_16T10_23_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-191972 minikube.k8s.io/primary=true
	I0916 10:23:52.543497   12653 ops.go:34] apiserver oom_adj: -16
	I0916 10:23:52.543643   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:53.044491   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:53.543770   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:54.044061   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:54.544691   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:55.044249   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:55.543918   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:56.043679   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:56.543717   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:57.044619   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:57.107839   12653 kubeadm.go:1113] duration metric: took 4.660750668s to wait for elevateKubeSystemPrivileges
	I0916 10:23:57.107871   12653 kubeadm.go:394] duration metric: took 13.37758355s to StartCluster
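The repeated "kubectl get sa default" calls above are minikube polling until the service-account controller has created the default ServiceAccount, its signal that kube-system privileges can be elevated. A minimal shell equivalent of that wait loop (hypothetical sketch, reusing the kubeconfig and binary paths from the log):

	# Poll until the default ServiceAccount is visible, i.e. the API server
	# and the service-account controller are fully up.
	until sudo /var/lib/minikube/binaries/v1.31.1/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done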
	I0916 10:23:57.107890   12653 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:57.107998   12653 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:23:57.108383   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:57.108581   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:57.108610   12653 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:23:57.108666   12653 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
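The toEnable map above is what minikube derives from its addon defaults plus any flags passed to this test run; each key can also be toggled after startup with the addons subcommand (illustrative only, profile name taken from this run):

	minikube -p addons-191972 addons enable metrics-server
	minikube -p addons-191972 addons disable volcano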
	I0916 10:23:57.108789   12653 addons.go:69] Setting yakd=true in profile "addons-191972"
	I0916 10:23:57.108813   12653 addons.go:234] Setting addon yakd=true in "addons-191972"
	I0916 10:23:57.108830   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:57.108844   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.108885   12653 addons.go:69] Setting inspektor-gadget=true in profile "addons-191972"
	I0916 10:23:57.108900   12653 addons.go:234] Setting addon inspektor-gadget=true in "addons-191972"
	I0916 10:23:57.108928   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109000   12653 addons.go:69] Setting gcp-auth=true in profile "addons-191972"
	I0916 10:23:57.109025   12653 mustload.go:65] Loading cluster: addons-191972
	I0916 10:23:57.109143   12653 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-191972"
	I0916 10:23:57.109187   12653 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-191972"
	I0916 10:23:57.109185   12653 addons.go:69] Setting default-storageclass=true in profile "addons-191972"
	I0916 10:23:57.109211   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109225   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:57.109232   12653 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-191972"
	I0916 10:23:57.109216   12653 addons.go:69] Setting cloud-spanner=true in profile "addons-191972"
	I0916 10:23:57.109259   12653 addons.go:69] Setting storage-provisioner=true in profile "addons-191972"
	I0916 10:23:57.109265   12653 addons.go:234] Setting addon cloud-spanner=true in "addons-191972"
	I0916 10:23:57.109274   12653 addons.go:234] Setting addon storage-provisioner=true in "addons-191972"
	I0916 10:23:57.109308   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109323   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109407   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109485   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109507   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109547   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109684   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109757   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109825   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110167   12653 addons.go:69] Setting ingress-dns=true in profile "addons-191972"
	I0916 10:23:57.110372   12653 addons.go:234] Setting addon ingress-dns=true in "addons-191972"
	I0916 10:23:57.110546   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111202   12653 addons.go:69] Setting helm-tiller=true in profile "addons-191972"
	I0916 10:23:57.111255   12653 addons.go:234] Setting addon helm-tiller=true in "addons-191972"
	I0916 10:23:57.111282   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111445   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.111484   12653 addons.go:69] Setting ingress=true in profile "addons-191972"
	I0916 10:23:57.111498   12653 addons.go:234] Setting addon ingress=true in "addons-191972"
	I0916 10:23:57.111527   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111731   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110913   12653 addons.go:69] Setting metrics-server=true in profile "addons-191972"
	I0916 10:23:57.111983   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.111987   12653 addons.go:234] Setting addon metrics-server=true in "addons-191972"
	I0916 10:23:57.112171   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110926   12653 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-191972"
	I0916 10:23:57.113223   12653 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-191972"
	I0916 10:23:57.113258   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.113700   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.115817   12653 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:57.116675   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110938   12653 addons.go:69] Setting registry=true in profile "addons-191972"
	I0916 10:23:57.116963   12653 addons.go:234] Setting addon registry=true in "addons-191972"
	I0916 10:23:57.117093   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110938   12653 addons.go:69] Setting volcano=true in profile "addons-191972"
	I0916 10:23:57.117245   12653 addons.go:234] Setting addon volcano=true in "addons-191972"
	I0916 10:23:57.117313   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110949   12653 addons.go:69] Setting volumesnapshots=true in profile "addons-191972"
	I0916 10:23:57.117350   12653 addons.go:234] Setting addon volumesnapshots=true in "addons-191972"
	I0916 10:23:57.117397   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.117799   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.117919   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.118954   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:57.110924   12653 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-191972"
	I0916 10:23:57.120855   12653 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-191972"
	I0916 10:23:57.121186   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.148826   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.156121   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:57.158094   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:23:57.160078   12653 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:57.160230   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:57.163394   12653 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:57.163405   12653 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:57.163428   12653 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:57.163491   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.163933   12653 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:57.163952   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:23:57.163999   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.166339   12653 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:57.166352   12653 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:57.166505   12653 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:57.166525   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:57.166591   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176509   12653 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:57.176539   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:57.176597   12653 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:57.176613   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:57.176614   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176667   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176871   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.184510   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:57.184923   12653 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:23:57.187620   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:57.187908   12653 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:57.187925   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:23:57.188005   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.190192   12653 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:57.190888   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:57.191984   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:57.192004   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:57.192062   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.192462   12653 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-191972"
	I0916 10:23:57.192519   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.193186   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.195485   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:57.196395   12653 addons.go:234] Setting addon default-storageclass=true in "addons-191972"
	I0916 10:23:57.196441   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.197033   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.200024   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:57.200756   12653 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:57.202388   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:57.202409   12653 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:57.202572   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.204739   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:57.206967   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:57.217725   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:57.217900   12653 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:57.219581   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:57.219714   12653 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:57.219798   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.219620   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:57.220511   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:57.221727   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.235796   12653 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:57.237579   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:57.239326   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:57.239350   12653 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:57.239411   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.239514   12653 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:57.241480   12653 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:57.241502   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:57.241555   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.243883   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.255850   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.256610   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.261965   12653 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 10:23:57.263559   12653 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 10:23:57.265255   12653 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 10:23:57.266412   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.267838   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.268005   12653 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:57.268022   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 10:23:57.268074   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.269050   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.276483   12653 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:57.276507   12653 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:57.276573   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.283025   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.284257   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:23:57.288880   12653 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:57.290776   12653 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:57.292419   12653 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:57.292444   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:57.292510   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.295145   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.295780   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.297628   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.298120   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.300416   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.306147   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.311231   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.314549   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
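The container-inspect calls earlier resolve which host port Docker mapped to the guest's SSH port 22; every ssh client above then dials 127.0.0.1:32768. The Go template in those log lines is equivalent to asking Docker for the mapping directly (sketch, not from the test output):

	# Prints the same mapping as the
	# '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' template,
	# e.g. 0.0.0.0:32768
	docker port addons-191972 22/tcp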
	W0916 10:23:57.324739   12653 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 10:23:57.324769   12653 retry.go:31] will retry after 374.435778ms: ssh: handshake failed: EOF
	W0916 10:23:57.325602   12653 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 10:23:57.325619   12653 retry.go:31] will retry after 150.651165ms: ssh: handshake failed: EOF
	I0916 10:23:57.330682   12653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:57.629690   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:57.729822   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:57.730227   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:57.742355   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:57.824974   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:57.842831   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:57.842917   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:57.843332   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:57.921972   12653 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:57.922058   12653 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:57.922011   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:57.922034   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:57.922195   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:57.929874   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:57.929901   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:57.941141   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:57.941166   12653 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:23:58.138273   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:58.138369   12653 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:58.222261   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:58.222352   12653 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:58.229572   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:58.229660   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:58.232627   12653 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:58.232698   12653 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:58.322393   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:58.322420   12653 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:58.339998   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:58.435282   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:58.435313   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:58.435591   12653 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.15128486s)
	I0916 10:23:58.435618   12653 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
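The sed pipeline that just completed splices a hosts block into the coredns ConfigMap so that host.minikube.internal resolves from inside the cluster; reconstructed from the sed expressions in the log, the injected Corefile stanza looks like:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}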
	I0916 10:23:58.436958   12653 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.1062474s)
	I0916 10:23:58.437947   12653 node_ready.go:35] waiting up to 6m0s for node "addons-191972" to be "Ready" ...
	I0916 10:23:58.441471   12653 node_ready.go:49] node "addons-191972" has status "Ready":"True"
	I0916 10:23:58.441502   12653 node_ready.go:38] duration metric: took 3.529013ms for node "addons-191972" to be "Ready" ...
	I0916 10:23:58.441514   12653 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:23:58.442873   12653 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:58.442897   12653 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:58.534045   12653 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:58.540468   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:58.540496   12653 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:58.642810   12653 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:58.642885   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:58.728521   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:58.728554   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:58.840472   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:58.921026   12653 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:58.921059   12653 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:58.936525   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:58.936552   12653 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:58.939212   12653 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-191972" context rescaled to 1 replicas
	I0916 10:23:59.131614   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:59.224079   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:59.224104   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:59.230203   12653 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:59.230238   12653 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:59.423686   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:59.430144   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:59.430176   12653 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:59.433784   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:59.433810   12653 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:59.542608   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:59.542635   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:59.630644   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:59.630734   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:59.840282   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:59.927613   12653 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:59.927705   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:24:00.030859   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:24:00.030936   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:24:00.034479   12653 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:24:00.034549   12653 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:24:00.038488   12653 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-2l862" not found
	I0916 10:24:00.038522   12653 pod_ready.go:82] duration metric: took 1.504385632s for pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace to be "Ready" ...
	E0916 10:24:00.038535   12653 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-2l862" not found
	I0916 10:24:00.038552   12653 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:00.333635   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:24:00.339910   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:24:00.339994   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:24:00.627234   12653 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:24:00.627262   12653 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:24:00.929780   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:24:00.929809   12653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:24:01.128973   12653 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:24:01.129062   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:24:01.334031   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:24:01.334116   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:24:01.525220   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:24:02.022039   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:24:02.022114   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:24:02.136463   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:02.532736   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:24:02.532829   12653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:24:02.738986   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:24:04.426813   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:24:04.426903   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:24:04.456284   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:24:04.624938   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:04.638370   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.008571899s)
	I0916 10:24:04.638414   12653 addons.go:475] Verifying addon ingress=true in "addons-191972"
	I0916 10:24:04.638488   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.908226437s)
	I0916 10:24:04.638570   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.908717103s)
	I0916 10:24:04.638623   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.896188028s)
	I0916 10:24:04.638699   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.81369606s)
	I0916 10:24:04.638718   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.795359026s)
	I0916 10:24:04.638742   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.716592394s)
	I0916 10:24:04.641681   12653 out.go:177] * Verifying ingress addon...
	I0916 10:24:04.644857   12653 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0916 10:24:04.722084   12653 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
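That warning is an ordinary optimistic-concurrency conflict: two writers raced on the local-path StorageClass object, so the write was rejected with a stale resourceVersion; re-reading the object and retrying the patch would normally succeed. The manual equivalent of what the callback was attempting, per the Kubernetes default-StorageClass documentation (illustrative):

	kubectl patch storageclass local-path \
	  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'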
	I0916 10:24:04.723574   12653 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:24:04.723598   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.841083   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:24:04.932849   12653 addons.go:234] Setting addon gcp-auth=true in "addons-191972"
	I0916 10:24:04.932903   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:24:04.933372   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:24:04.957393   12653 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:24:04.957464   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:24:04.975728   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:24:05.150342   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.650366   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.149809   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.649391   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.834167   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.494119031s)
	I0916 10:24:06.834259   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.993750099s)
	I0916 10:24:06.834355   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.702687859s)
	I0916 10:24:06.834379   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.410662864s)
	I0916 10:24:06.834381   12653 addons.go:475] Verifying addon metrics-server=true in "addons-191972"
	I0916 10:24:06.834394   12653 addons.go:475] Verifying addon registry=true in "addons-191972"
	I0916 10:24:06.834447   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.994082306s)
	I0916 10:24:06.834595   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.500877662s)
	W0916 10:24:06.834635   12653 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:24:06.834660   12653 retry.go:31] will retry after 180.492463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
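The failure and retry above are a CRD-establishment race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml was applied in the same kubectl invocation that created the snapshot.storage.k8s.io CRDs, before the API server had registered the new kind. Waiting for the CRD to become Established before applying instances of it avoids the race (illustrative, using the CRD name from the log); the retry visible below (the apply --force at 10:24:07) succeeds once the CRDs are registered.

	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io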
	I0916 10:24:06.834694   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.309367322s)
	I0916 10:24:06.836029   12653 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-191972 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:24:06.836032   12653 out.go:177] * Verifying registry addon...
	I0916 10:24:06.838577   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:24:06.842659   12653 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:24:06.842681   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.016329   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:24:07.122253   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:07.229433   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.346049   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.428384   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.689342475s)
	I0916 10:24:07.428423   12653 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-191972"
	I0916 10:24:07.428557   12653 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.471115449s)
	I0916 10:24:07.430137   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:24:07.430140   12653 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:24:07.432142   12653 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:24:07.433350   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:24:07.433452   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:24:07.433472   12653 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:24:07.446890   12653 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:24:07.446929   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.523198   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:24:07.523247   12653 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:24:07.543809   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:07.543877   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
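
Note the "scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml" line: unlike the other addon files, this manifest is rendered in memory and streamed to the node rather than copied from disk. A rough illustration of that pattern over SSH — host, credentials, and manifest body are placeholders, and this is not minikube's ssh_runner implementation:

// Sketch: write an in-memory manifest to a node over SSH via sudo tee,
// approximating the "scp memory --> file" step in the log. All
// connection details below are assumptions for illustration.
package main

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

func main() {
	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: gcp-auth\n")
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("changeme")}, // placeholder credential
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),                // acceptable for a local test node
	}
	client, err := ssh.Dial("tcp", "192.168.49.2:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(manifest)
	// tee writes stdin to the target path with root privileges.
	if err := sess.Run("sudo tee /etc/kubernetes/addons/gcp-auth-ns.yaml > /dev/null"); err != nil {
		panic(err)
	}
}
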
	I0916 10:24:07.627288   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:07.649744   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.842799   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.943700   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.149515   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.343117   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.438263   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.651360   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.739263   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.722876496s)
	I0916 10:24:08.739377   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.111993041s)
	I0916 10:24:08.740565   12653 addons.go:475] Verifying addon gcp-auth=true in "addons-191972"
	I0916 10:24:08.742658   12653 out.go:177] * Verifying gcp-auth addon...
	I0916 10:24:08.744959   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:24:08.752275   12653 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:08.842486   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.937942   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.148485   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.342745   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.444884   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.544117   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:09.649057   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.850158   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.951607   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.149384   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.342403   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.437953   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.648926   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.842555   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.938628   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.149265   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.341824   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.438269   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.544664   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:11.649663   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.842706   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.938382   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.149747   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.341485   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.438115   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.649444   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.842417   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.937808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.149247   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.342184   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.443397   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.544742   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:13.649342   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.842433   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.938156   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.148884   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.342230   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.437378   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.648929   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.841404   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.938373   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.148947   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.342062   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:15.437442   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.544833   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:15.649729   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.875330   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.063181   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.148410   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.342704   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.437759   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.649599   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.842196   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.937322   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.148973   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.342240   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.438331   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.649287   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.842346   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.937786   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.044459   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:18.148462   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.342098   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.438245   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.650618   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.842115   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.937393   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.148210   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.342331   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.437753   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.649206   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.841659   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.937929   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.149095   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.341559   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.437389   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.543697   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:20.649389   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.841724   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.939911   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.148803   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.341867   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.437743   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.649220   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.841636   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.937733   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.148853   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.341623   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.438291   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.544155   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:22.648789   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.842117   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.937569   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.148605   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.342228   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.437946   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.648725   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.848611   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.937702   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.148830   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.341472   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.437746   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.648857   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.841524   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.937579   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.043875   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:25.148986   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.341729   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.438614   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.648859   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.842571   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.937660   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.148067   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.342525   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.442495   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.649368   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.841986   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.937210   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.044290   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:27.148266   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.341925   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.437369   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.648710   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.842271   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.937289   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.149389   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.341712   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.437988   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.649507   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.841935   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.937651   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.148305   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.341758   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.437230   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.544648   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:29.648789   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.842453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.937780   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.149144   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.341971   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.436935   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.648826   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.842241   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.937301   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.148532   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.342364   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.438028   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.649021   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.842529   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.938084   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.044452   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:32.148477   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.342165   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.437629   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.649007   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.841446   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.937583   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.148965   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.341801   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.437144   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.649484   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.842344   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.937348   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.148522   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.342404   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.438126   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.543640   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:34.649404   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.842417   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.937940   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.149191   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.341955   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.437296   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.649499   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.841951   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.937835   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.148878   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.342396   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.437451   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.648935   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.841429   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.937515   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.043652   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:37.148879   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.341650   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.438917   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.648863   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.843665   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.937755   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.148476   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.342129   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.437617   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.648850   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.842096   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.937210   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.044295   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:39.148546   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.342070   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.437434   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.649394   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.850992   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.937068   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.148412   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.342026   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.438818   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.648424   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.842673   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.937959   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.149077   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.341573   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.437823   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.544866   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:41.649385   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.842400   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.942736   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.148726   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.342124   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.438550   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.649404   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.841927   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.937808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.149523   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.341957   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.437318   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.545247   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:43.648618   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.842970   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.938236   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.149170   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.342180   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.437399   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.649533   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.842942   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.937846   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.149581   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.342185   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.437873   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.649109   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.842031   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.937050   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.043865   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:46.149131   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.342272   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.437555   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.649645   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.850195   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.951731   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.044952   12653 pod_ready.go:93] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.044977   12653 pod_ready.go:82] duration metric: took 47.006412913s for pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.044991   12653 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.048830   12653 pod_ready.go:93] pod "etcd-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.048847   12653 pod_ready.go:82] duration metric: took 3.848159ms for pod "etcd-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.048861   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.052536   12653 pod_ready.go:93] pod "kube-apiserver-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.052558   12653 pod_ready.go:82] duration metric: took 3.691187ms for pod "kube-apiserver-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.052566   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.056167   12653 pod_ready.go:93] pod "kube-controller-manager-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.056192   12653 pod_ready.go:82] duration metric: took 3.620465ms for pod "kube-controller-manager-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.056201   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fnr7f" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.060021   12653 pod_ready.go:93] pod "kube-proxy-fnr7f" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.060038   12653 pod_ready.go:82] duration metric: took 3.830746ms for pod "kube-proxy-fnr7f" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.060046   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.149672   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.342533   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.437808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.441161   12653 pod_ready.go:93] pod "kube-scheduler-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.441181   12653 pod_ready.go:82] duration metric: took 381.129532ms for pod "kube-scheduler-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.441188   12653 pod_ready.go:39] duration metric: took 48.999654984s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
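
The pod_ready.go transition at 10:24:47 closes a 47-second wait: coredns was listed as Running throughout, but its PodReady condition stayed False. The readiness test implied by these lines checks the condition, which is stricter than the phase. A minimal sketch of that check, assuming client-go and the pod name and kubeconfig path from the log:

// Sketch: a pod counts as "Ready" only when its PodReady condition is
// True; Phase == Running alone is not enough, as the coredns wait shows.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-7c65d6cfc9-9rccl", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
}
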
	I0916 10:24:47.441205   12653 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:24:47.441254   12653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:24:47.453909   12653 api_server.go:72] duration metric: took 50.345260117s to wait for apiserver process to appear ...
	I0916 10:24:47.453935   12653 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:24:47.453960   12653 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:24:47.458673   12653 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:24:47.459648   12653 api_server.go:141] control plane version: v1.31.1
	I0916 10:24:47.459673   12653 api_server.go:131] duration metric: took 5.729621ms to wait for apiserver health ...
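
The api_server.go healthz probe above expects a 200 with body "ok" from https://192.168.49.2:8443/healthz. A rough client-go equivalent, assuming the kubeconfig path from the log:

// Sketch: hit the apiserver's /healthz endpoint through the client's
// REST interface; a healthy apiserver returns "ok".
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err) // non-200 responses surface as errors here
	}
	fmt.Printf("/healthz returned: %s\n", body)
}
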
	I0916 10:24:47.459683   12653 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:24:47.648237   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.648583   12653 system_pods.go:59] 19 kube-system pods found
	I0916 10:24:47.648620   12653 system_pods.go:61] "coredns-7c65d6cfc9-9rccl" [f2ffddc5-3995-4d5a-8059-297b3859f9c5] Running
	I0916 10:24:47.648634   12653 system_pods.go:61] "csi-hostpath-attacher-0" [07b91be6-2f2b-441c-80ee-f832e9ee2e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:24:47.648642   12653 system_pods.go:61] "csi-hostpath-resizer-0" [311c306b-accd-4242-8415-81199a4ce054] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:24:47.648653   12653 system_pods.go:61] "csi-hostpathplugin-qdnbn" [8f5408a2-a7eb-4f76-8ce0-b885fb4b47e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:24:47.648667   12653 system_pods.go:61] "etcd-addons-191972" [81af20b7-9b19-4723-9b92-0ded3d775cd3] Running
	I0916 10:24:47.648673   12653 system_pods.go:61] "kindnet-rxp8k" [02b143c0-bbb4-4f94-8448-9c3c4f248a87] Running
	I0916 10:24:47.648678   12653 system_pods.go:61] "kube-apiserver-addons-191972" [1aabf917-f381-4e69-8524-954958c99b7e] Running
	I0916 10:24:47.648684   12653 system_pods.go:61] "kube-controller-manager-addons-191972" [ee796e67-bd06-4d93-9d20-aabcbb395ba2] Running
	I0916 10:24:47.648690   12653 system_pods.go:61] "kube-ingress-dns-minikube" [0f28fa0b-84f1-4215-aa41-4596ab4cef8b] Running
	I0916 10:24:47.648696   12653 system_pods.go:61] "kube-proxy-fnr7f" [a9e53f94-30ad-4178-b9e2-3ba4354a5adf] Running
	I0916 10:24:47.648700   12653 system_pods.go:61] "kube-scheduler-addons-191972" [32bbb72d-e291-4c84-8709-6066cf10b0cf] Running
	I0916 10:24:47.648709   12653 system_pods.go:61] "metrics-server-84c5f94fbc-s7654" [14e280ea-8ba8-4805-844c-aeff8fb18ce0] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:24:47.648719   12653 system_pods.go:61] "nvidia-device-plugin-daemonset-vpb85" [14ca6c72-b73b-4254-910a-0b876ca73f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:24:47.648732   12653 system_pods.go:61] "registry-66c9cd494c-vsbgv" [bd70fcec-e032-4dbd-902c-a139ac179bbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:24:47.648740   12653 system_pods.go:61] "registry-proxy-6vsnj" [05d6014b-9706-4d7a-a816-dbc7f557cd15] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:24:47.648749   12653 system_pods.go:61] "snapshot-controller-56fcc65765-4g9w6" [ad55cf42-58df-4d10-aae8-7626a86e7e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:47.648760   12653 system_pods.go:61] "snapshot-controller-56fcc65765-htkmc" [70a5c810-f514-4b23-b3a3-7a4cc46c35e2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:47.648766   12653 system_pods.go:61] "storage-provisioner" [ca9a25b7-8324-4ea1-a525-24f8c308baea] Running
	I0916 10:24:47.648777   12653 system_pods.go:61] "tiller-deploy-b48cc5f79-ddkxz" [dfe534c4-9e29-4907-b8cc-1dd12fc52f45] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:24:47.648789   12653 system_pods.go:74] duration metric: took 189.097544ms to wait for pod list to return data ...
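
The "Pending / Ready:ContainersNotReady (containers with unready status: [...])" strings in the pod list above combine the pod phase with every condition that is not True, plus its reason and message. A sketch that approximates that formatting (the describe helper is an assumption, not minikube's system_pods.go code):

// Sketch: render a pod status string in the style of the log above —
// phase first, then each unsatisfied condition as "Type:Reason (Message)".
package main

import (
	"fmt"
	"strings"

	corev1 "k8s.io/api/core/v1"
)

func describe(p *corev1.Pod) string {
	parts := []string{string(p.Status.Phase)}
	for _, c := range p.Status.Conditions {
		if c.Status == corev1.ConditionTrue {
			continue // the log only prints conditions that are not met
		}
		parts = append(parts, fmt.Sprintf("%s:%s (%s)", c.Type, c.Reason, c.Message))
	}
	return strings.Join(parts, " / ")
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodPending,
		Conditions: []corev1.PodCondition{{
			Type:    corev1.PodReady,
			Status:  corev1.ConditionFalse,
			Reason:  "ContainersNotReady",
			Message: "containers with unready status: [registry]",
		}},
	}}
	fmt.Println(describe(p))
	// Output: Pending / Ready:ContainersNotReady (containers with unready status: [registry])
}
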
	I0916 10:24:47.648801   12653 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:24:47.841018   12653 default_sa.go:45] found service account: "default"
	I0916 10:24:47.841043   12653 default_sa.go:55] duration metric: took 192.233696ms for default service account to be created ...
	I0916 10:24:47.841053   12653 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:24:47.841394   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.937402   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.049475   12653 system_pods.go:86] 19 kube-system pods found
	I0916 10:24:48.049509   12653 system_pods.go:89] "coredns-7c65d6cfc9-9rccl" [f2ffddc5-3995-4d5a-8059-297b3859f9c5] Running
	I0916 10:24:48.049523   12653 system_pods.go:89] "csi-hostpath-attacher-0" [07b91be6-2f2b-441c-80ee-f832e9ee2e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:24:48.049533   12653 system_pods.go:89] "csi-hostpath-resizer-0" [311c306b-accd-4242-8415-81199a4ce054] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:24:48.049541   12653 system_pods.go:89] "csi-hostpathplugin-qdnbn" [8f5408a2-a7eb-4f76-8ce0-b885fb4b47e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:24:48.049546   12653 system_pods.go:89] "etcd-addons-191972" [81af20b7-9b19-4723-9b92-0ded3d775cd3] Running
	I0916 10:24:48.049550   12653 system_pods.go:89] "kindnet-rxp8k" [02b143c0-bbb4-4f94-8448-9c3c4f248a87] Running
	I0916 10:24:48.049554   12653 system_pods.go:89] "kube-apiserver-addons-191972" [1aabf917-f381-4e69-8524-954958c99b7e] Running
	I0916 10:24:48.049560   12653 system_pods.go:89] "kube-controller-manager-addons-191972" [ee796e67-bd06-4d93-9d20-aabcbb395ba2] Running
	I0916 10:24:48.049569   12653 system_pods.go:89] "kube-ingress-dns-minikube" [0f28fa0b-84f1-4215-aa41-4596ab4cef8b] Running
	I0916 10:24:48.049572   12653 system_pods.go:89] "kube-proxy-fnr7f" [a9e53f94-30ad-4178-b9e2-3ba4354a5adf] Running
	I0916 10:24:48.049576   12653 system_pods.go:89] "kube-scheduler-addons-191972" [32bbb72d-e291-4c84-8709-6066cf10b0cf] Running
	I0916 10:24:48.049579   12653 system_pods.go:89] "metrics-server-84c5f94fbc-s7654" [14e280ea-8ba8-4805-844c-aeff8fb18ce0] Running
	I0916 10:24:48.049587   12653 system_pods.go:89] "nvidia-device-plugin-daemonset-vpb85" [14ca6c72-b73b-4254-910a-0b876ca73f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:24:48.049595   12653 system_pods.go:89] "registry-66c9cd494c-vsbgv" [bd70fcec-e032-4dbd-902c-a139ac179bbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:24:48.049600   12653 system_pods.go:89] "registry-proxy-6vsnj" [05d6014b-9706-4d7a-a816-dbc7f557cd15] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:24:48.049605   12653 system_pods.go:89] "snapshot-controller-56fcc65765-4g9w6" [ad55cf42-58df-4d10-aae8-7626a86e7e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:48.049613   12653 system_pods.go:89] "snapshot-controller-56fcc65765-htkmc" [70a5c810-f514-4b23-b3a3-7a4cc46c35e2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:48.049618   12653 system_pods.go:89] "storage-provisioner" [ca9a25b7-8324-4ea1-a525-24f8c308baea] Running
	I0916 10:24:48.049625   12653 system_pods.go:89] "tiller-deploy-b48cc5f79-ddkxz" [dfe534c4-9e29-4907-b8cc-1dd12fc52f45] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:24:48.049634   12653 system_pods.go:126] duration metric: took 208.573497ms to wait for k8s-apps to be running ...
	I0916 10:24:48.049644   12653 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:24:48.049682   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:24:48.060846   12653 system_svc.go:56] duration metric: took 11.19263ms WaitForService to wait for kubelet
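
The system_svc.go check above is simply the systemctl command from the Run line executed on the node, with the exit code deciding pass/fail. A local-only sketch of the same test; running it against the minikube node would need the SSH transport rather than os/exec:

// Sketch: probe kubelet the way the log does — exit code of
// "systemctl is-active --quiet" is the whole signal, no output parsed.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
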
	I0916 10:24:48.060871   12653 kubeadm.go:582] duration metric: took 50.952228588s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:24:48.060890   12653 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:24:48.148219   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.242671   12653 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:24:48.242705   12653 node_conditions.go:123] node cpu capacity is 8
	I0916 10:24:48.242718   12653 node_conditions.go:105] duration metric: took 181.823571ms to run NodePressure ...
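
The NodePressure verification above reads the node's capacity (304681132Ki ephemeral storage, 8 CPUs here) and confirms no pressure conditions are set. A minimal sketch of that read, assuming client-go and the kubeconfig path from the log:

// Sketch: print each node's capacity and flag any non-Ready condition
// (MemoryPressure, DiskPressure, PIDPressure) that reports True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
		for _, c := range n.Status.Conditions {
			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True\n", c.Type)
			}
		}
	}
}
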
	I0916 10:24:48.242730   12653 start.go:241] waiting for startup goroutines ...
	I0916 10:24:48.342074   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.437253   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.650425   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.850814   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.937328   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.149694   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.342609   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.438289   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... identical "waiting for pod" poll lines for these three label selectors, repeated roughly every 500ms through 10:25:00, omitted ...]
	I0916 10:25:00.342965   12653 kapi.go:107] duration metric: took 53.504381408s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:25:00.438324   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.649093   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... identical poll lines for these two label selectors, repeated roughly every 500ms through 10:25:27, omitted ...]
	I0916 10:25:27.648728   12653 kapi.go:107] duration metric: took 1m23.003864669s to wait for app.kubernetes.io/name=ingress-nginx ...
	[... identical "kubernetes.io/minikube-addons=csi-hostpath-driver" poll lines, repeated every 500ms from 10:25:27 through 10:25:31, omitted ...]
	I0916 10:25:31.437781   12653 kapi.go:107] duration metric: took 1m24.004430138s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:26:53.748019   12653 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:26:53.748042   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... identical "kubernetes.io/minikube-addons=gcp-auth" poll lines, repeated every 500ms from 10:26:54 through 10:27:29, omitted ...]
	I0916 10:27:29.748597   12653 kapi.go:107] duration metric: took 3m21.003635946s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:27:29.750701   12653 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-191972 cluster.
	I0916 10:27:29.752412   12653 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:27:29.754028   12653 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:27:29.756074   12653 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner-rancher, volcano, helm-tiller, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0916 10:27:29.757930   12653 addons.go:510] duration metric: took 3m32.649258168s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner-rancher volcano helm-tiller metrics-server inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0916 10:27:29.758012   12653 start.go:246] waiting for cluster config update ...
	I0916 10:27:29.758039   12653 start.go:255] writing updated cluster config ...
	I0916 10:27:29.758383   12653 ssh_runner.go:195] Run: rm -f paused
	I0916 10:27:29.765351   12653 out.go:177] * Done! kubectl is now configured to use "addons-191972" cluster and "default" namespace by default
	E0916 10:27:29.767004   12653 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
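
The kapi.go:96 "waiting for pod" lines above come from a simple poll loop: list pods by label selector, log while any pod is still Pending, and stop once all are Running or the timeout elapses (the "duration metric: took ..." lines). A minimal client-go sketch of that pattern follows; this is an illustrative assumption of the shape of the loop, not minikube's actual kapi.go implementation, and all function and variable names here are made up:

	package example

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPods polls every 500ms until every pod matching selector is Running,
	// mirroring the cadence of the kapi.go log lines above.
	func waitForPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			allRunning := len(pods.Items) > 0 // no matching pods yet also counts as "not ready"
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					allRunning = false
				}
			}
			if allRunning {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for pods matching %q", selector)
	}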
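
The gcp-auth message above notes that a pod can opt out of credential mounting via a label with the `gcp-auth-skip-secret` key. A sketch of creating such a pod with client-go; the label key is taken from the log message, while the pod name, image, and namespace are illustrative assumptions:

	package example

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func createSkippedPod(ctx context.Context, c kubernetes.Interface) error {
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // illustrative name
				Labels: map[string]string{
					// Key from the gcp-auth addon message above; the webhook
					// skips pods carrying this label.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "busybox", // illustrative image
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		_, err := c.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{})
		return err
	}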
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	cfade64badb92       db2fc13d44d50       5 minutes ago       Running             gcp-auth                                 0                   99d0fe27850b3       gcp-auth-89d5ffd79-6r2td
	df81f1fc28725       a876393c9504b       6 minutes ago       Running             admission                                0                   0aa4b1d0acb5a       volcano-admission-77d7d48b68-rcfsk
	9dd4a83ba6d70       6041e92ec449f       6 minutes ago       Running             volcano-scheduler                        1                   9564c1c96bcee       volcano-scheduler-576bc46687-jtz7f
	72101e37ab665       738351fd438f0       7 minutes ago       Running             csi-snapshotter                          0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	da8f6a34306e1       931dbfd16f87c       7 minutes ago       Running             csi-provisioner                          0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	1649420a66573       e899260153aed       7 minutes ago       Running             liveness-probe                           0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	e0e474b6d95e5       e255e073c508c       7 minutes ago       Running             hostpath                                 0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	d5fc898fd874b       a80c8fd6e5229       7 minutes ago       Running             controller                               0                   30db636a12234       ingress-nginx-controller-bc57996ff-lpb7q
	06d43e898075b       88ef14a257f42       7 minutes ago       Running             node-driver-registrar                    0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	39c5183f27011       ce263a8653f9c       7 minutes ago       Exited              patch                                    0                   589d98ccee909       ingress-nginx-admission-patch-8f8nz
	a8bb0086c52b5       6041e92ec449f       7 minutes ago       Exited              volcano-scheduler                        0                   9564c1c96bcee       volcano-scheduler-576bc46687-jtz7f
	ddf31d8b68bc1       a876393c9504b       8 minutes ago       Exited              main                                     0                   b49978f431ab4       volcano-admission-init-57gk4
	06cf11b7a83f9       ce263a8653f9c       8 minutes ago       Exited              create                                   0                   6301c91177942       ingress-nginx-admission-create-5rjsx
	1cd468b4437bd       a1ed5895ba635       8 minutes ago       Running             csi-external-health-monitor-controller   0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	79266075c79ff       59cbb42146a37       8 minutes ago       Running             csi-attacher                             0                   a4c401b363464       csi-hostpath-attacher-0
	c65d9de60c2d0       aa61ee9c70bc4       8 minutes ago       Running             volume-snapshot-controller               0                   dba5883c9dc9b       snapshot-controller-56fcc65765-4g9w6
	0c025c1b7dd4c       19a639eda60f0       8 minutes ago       Running             csi-resizer                              0                   176615116e8de       csi-hostpath-resizer-0
	c7d7b6bb58927       96e410111f023       8 minutes ago       Running             volcano-controllers                      0                   84cb34271a61b       volcano-controllers-56675bb4d5-hdpdb
	6819af68287c4       aa61ee9c70bc4       8 minutes ago       Running             volume-snapshot-controller               0                   bb404cbffba4e       snapshot-controller-56fcc65765-htkmc
	89cfd63e70df2       3f39089e90831       8 minutes ago       Running             tiller                                   0                   79bab02e559b8       tiller-deploy-b48cc5f79-ddkxz
	576d6c9483015       48d9cfaaf3904       8 minutes ago       Running             metrics-server                           0                   debbe4f662687       metrics-server-84c5f94fbc-s7654
	3c2ba113f3a92       c69fa2e9cbf5f       8 minutes ago       Running             coredns                                  0                   e557eec597dbb       coredns-7c65d6cfc9-9rccl
	74825d98cba88       e16d1e3a10667       8 minutes ago       Running             local-path-provisioner                   0                   1e611781a41cb       local-path-provisioner-86d989889c-w6mf9
	dfe8c0b03e5c3       30dd67412fdea       9 minutes ago       Running             minikube-ingress-dns                     0                   6682d7fdc0949       kube-ingress-dns-minikube
	62a4b8c25074d       6e38f40d628db       9 minutes ago       Running             storage-provisioner                      0                   54247c11bac23       storage-provisioner
	4c4482bfa98cf       12968670680f4       9 minutes ago       Running             kindnet-cni                              0                   48c4106711b6e       kindnet-rxp8k
	d9d3353287790       60c005f310ff3       9 minutes ago       Running             kube-proxy                               0                   b70e27ed4bc15       kube-proxy-fnr7f
	6e4dbd39a8ef5       175ffd71cce3d       9 minutes ago       Running             kube-controller-manager                  0                   f593f7267aeda       kube-controller-manager-addons-191972
	c76b948fbd083       6bab7719df100       9 minutes ago       Running             kube-apiserver                           0                   a7eb33c199dbc       kube-apiserver-addons-191972
	0539bdd901d4a       9aa1fad941575       9 minutes ago       Running             kube-scheduler                           0                   3aba8d618e3fa       kube-scheduler-addons-191972
	92c65a04535dd       2e96e5913fc06       9 minutes ago       Running             etcd                                     0                   84fc0865b25fe       etcd-addons-191972
	
	
	==> containerd <==
	Sep 16 10:32:56 addons-191972 containerd[858]: time="2024-09-16T10:32:56.128768443Z" level=info msg="StopContainer for \"f1f36908b6022528ca210edd622a1c923fb56c286fd5e2643488aec7af94c691\" with timeout 30 (s)"
	Sep 16 10:32:56 addons-191972 containerd[858]: time="2024-09-16T10:32:56.129292373Z" level=info msg="Stop container \"f1f36908b6022528ca210edd622a1c923fb56c286fd5e2643488aec7af94c691\" with signal terminated"
	Sep 16 10:32:56 addons-191972 containerd[858]: time="2024-09-16T10:32:56.580096373Z" level=info msg="shim disconnected" id=f1f36908b6022528ca210edd622a1c923fb56c286fd5e2643488aec7af94c691 namespace=k8s.io
	Sep 16 10:32:56 addons-191972 containerd[858]: time="2024-09-16T10:32:56.580165195Z" level=warning msg="cleaning up after shim disconnected" id=f1f36908b6022528ca210edd622a1c923fb56c286fd5e2643488aec7af94c691 namespace=k8s.io
	Sep 16 10:32:56 addons-191972 containerd[858]: time="2024-09-16T10:32:56.580179613Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:32:56 addons-191972 containerd[858]: time="2024-09-16T10:32:56.596375843Z" level=info msg="StopContainer for \"f1f36908b6022528ca210edd622a1c923fb56c286fd5e2643488aec7af94c691\" returns successfully"
	Sep 16 10:32:56 addons-191972 containerd[858]: time="2024-09-16T10:32:56.596932159Z" level=info msg="StopPodSandbox for \"e900b36241c1f65531303fa71becfdd0cc9f3b3a9824c2167224d1221a60bba1\""
	Sep 16 10:32:56 addons-191972 containerd[858]: time="2024-09-16T10:32:56.596994509Z" level=info msg="Container to stop \"f1f36908b6022528ca210edd622a1c923fb56c286fd5e2643488aec7af94c691\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 16 10:32:56 addons-191972 containerd[858]: time="2024-09-16T10:32:56.621275801Z" level=info msg="shim disconnected" id=e900b36241c1f65531303fa71becfdd0cc9f3b3a9824c2167224d1221a60bba1 namespace=k8s.io
	Sep 16 10:32:56 addons-191972 containerd[858]: time="2024-09-16T10:32:56.621341743Z" level=warning msg="cleaning up after shim disconnected" id=e900b36241c1f65531303fa71becfdd0cc9f3b3a9824c2167224d1221a60bba1 namespace=k8s.io
	Sep 16 10:32:56 addons-191972 containerd[858]: time="2024-09-16T10:32:56.621366967Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:32:56 addons-191972 containerd[858]: time="2024-09-16T10:32:56.672588319Z" level=info msg="TearDown network for sandbox \"e900b36241c1f65531303fa71becfdd0cc9f3b3a9824c2167224d1221a60bba1\" successfully"
	Sep 16 10:32:56 addons-191972 containerd[858]: time="2024-09-16T10:32:56.672620380Z" level=info msg="StopPodSandbox for \"e900b36241c1f65531303fa71becfdd0cc9f3b3a9824c2167224d1221a60bba1\" returns successfully"
	Sep 16 10:32:57 addons-191972 containerd[858]: time="2024-09-16T10:32:57.378092173Z" level=info msg="RemoveContainer for \"f1f36908b6022528ca210edd622a1c923fb56c286fd5e2643488aec7af94c691\""
	Sep 16 10:32:57 addons-191972 containerd[858]: time="2024-09-16T10:32:57.384128568Z" level=info msg="RemoveContainer for \"f1f36908b6022528ca210edd622a1c923fb56c286fd5e2643488aec7af94c691\" returns successfully"
	Sep 16 10:32:57 addons-191972 containerd[858]: time="2024-09-16T10:32:57.384610972Z" level=error msg="ContainerStatus for \"f1f36908b6022528ca210edd622a1c923fb56c286fd5e2643488aec7af94c691\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f1f36908b6022528ca210edd622a1c923fb56c286fd5e2643488aec7af94c691\": not found"
	Sep 16 10:33:06 addons-191972 containerd[858]: time="2024-09-16T10:33:06.772612239Z" level=info msg="StopPodSandbox for \"d4feba9de8c251ddcedb6bf5e748a13f7a0bf0cb99f6be81820752487f60aa7e\""
	Sep 16 10:33:06 addons-191972 containerd[858]: time="2024-09-16T10:33:06.772741724Z" level=info msg="Container to stop \"85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 16 10:33:06 addons-191972 containerd[858]: time="2024-09-16T10:33:06.843175140Z" level=info msg="shim disconnected" id=d4feba9de8c251ddcedb6bf5e748a13f7a0bf0cb99f6be81820752487f60aa7e namespace=k8s.io
	Sep 16 10:33:06 addons-191972 containerd[858]: time="2024-09-16T10:33:06.843237210Z" level=warning msg="cleaning up after shim disconnected" id=d4feba9de8c251ddcedb6bf5e748a13f7a0bf0cb99f6be81820752487f60aa7e namespace=k8s.io
	Sep 16 10:33:06 addons-191972 containerd[858]: time="2024-09-16T10:33:06.843247465Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:33:06 addons-191972 containerd[858]: time="2024-09-16T10:33:06.856606337Z" level=info msg="TearDown network for sandbox \"d4feba9de8c251ddcedb6bf5e748a13f7a0bf0cb99f6be81820752487f60aa7e\" successfully"
	Sep 16 10:33:06 addons-191972 containerd[858]: time="2024-09-16T10:33:06.856649688Z" level=info msg="StopPodSandbox for \"d4feba9de8c251ddcedb6bf5e748a13f7a0bf0cb99f6be81820752487f60aa7e\" returns successfully"
	Sep 16 10:33:07 addons-191972 containerd[858]: time="2024-09-16T10:33:07.400578462Z" level=info msg="RemoveContainer for \"85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09\""
	Sep 16 10:33:07 addons-191972 containerd[858]: time="2024-09-16T10:33:07.405968245Z" level=info msg="RemoveContainer for \"85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09\" returns successfully"
	
	
	==> coredns [3c2ba113f3a928b6de94c4ca0bf607534ff798f3d85ffd2a7685ed6dacc00744] <==
	[INFO] 10.244.0.3:34722 - 16813 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126799s
	[INFO] 10.244.0.3:47807 - 19593 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078163s
	[INFO] 10.244.0.3:47807 - 48005 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012131s
	[INFO] 10.244.0.3:52137 - 389 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004304691s
	[INFO] 10.244.0.3:52137 - 40577 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004777432s
	[INFO] 10.244.0.3:37044 - 23366 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003875752s
	[INFO] 10.244.0.3:37044 - 14153 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004520489s
	[INFO] 10.244.0.3:37775 - 29429 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003806717s
	[INFO] 10.244.0.3:37775 - 41674 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003872738s
	[INFO] 10.244.0.3:58704 - 7476 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090446s
	[INFO] 10.244.0.3:58704 - 1849 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000134094s
	[INFO] 10.244.0.25:38825 - 37363 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216144s
	[INFO] 10.244.0.25:38931 - 39307 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000245831s
	[INFO] 10.244.0.25:50024 - 16483 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164924s
	[INFO] 10.244.0.25:42236 - 32299 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000196632s
	[INFO] 10.244.0.25:49331 - 38072 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114124s
	[INFO] 10.244.0.25:36861 - 61813 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164666s
	[INFO] 10.244.0.25:33081 - 5019 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.00927584s
	[INFO] 10.244.0.25:32825 - 10257 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.009718235s
	[INFO] 10.244.0.25:50215 - 44243 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007980557s
	[INFO] 10.244.0.25:46089 - 36172 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008374403s
	[INFO] 10.244.0.25:60708 - 60516 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00523636s
	[INFO] 10.244.0.25:53932 - 3930 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005436837s
	[INFO] 10.244.0.25:33968 - 30856 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002295196s
	[INFO] 10.244.0.25:51453 - 49493 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002387298s
	
	
	==> describe nodes <==
	Name:               addons-191972
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-191972
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-191972
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-191972
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-191972"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-191972
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:33:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:32:52 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:32:52 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:32:52 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:32:52 +0000   Mon, 16 Sep 2024 10:23:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-191972
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 0263fbb37d3545b09ff38a7b68907e4c
	  System UUID:                45c87f39-d597-4b0c-a097-439ebdb945ff
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-89d5ffd79-6r2td                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-lpb7q    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         9m9s
	  kube-system                 coredns-7c65d6cfc9-9rccl                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     9m17s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 csi-hostpathplugin-qdnbn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 etcd-addons-191972                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9m23s
	  kube-system                 kindnet-rxp8k                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      9m17s
	  kube-system                 kube-apiserver-addons-191972                250m (3%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-addons-191972       200m (2%)     0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-proxy-fnr7f                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-scheduler-addons-191972                100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-84c5f94fbc-s7654             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         9m11s
	  kube-system                 snapshot-controller-56fcc65765-4g9w6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 snapshot-controller-56fcc65765-htkmc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 tiller-deploy-b48cc5f79-ddkxz               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  local-path-storage          local-path-provisioner-86d989889c-w6mf9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  volcano-system              volcano-admission-77d7d48b68-rcfsk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  volcano-system              volcano-controllers-56675bb4d5-hdpdb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  volcano-system              volcano-scheduler-576bc46687-jtz7f          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 9m12s  kube-proxy       
	  Normal   Starting                 9m22s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m22s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  9m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m22s  kubelet          Node addons-191972 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m22s  kubelet          Node addons-191972 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m22s  kubelet          Node addons-191972 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m18s  node-controller  Node addons-191972 event: Registered Node addons-191972 in Controller
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [92c65a04535ddef6879f2eb4260843c6961d1fb2395f595b3a5665263c562002] <==
	{"level":"info","ts":"2024-09-16T10:23:47.260476Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:23:47.261160Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:47.261447Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:23:47.262322Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:23:47.262576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:24:15.873285Z","caller":"traceutil/trace.go:171","msg":"trace[187537689] transaction","detail":"{read_only:false; response_revision:986; number_of_response:1; }","duration":"119.841789ms","start":"2024-09-16T10:24:15.753419Z","end":"2024-09-16T10:24:15.873261Z","steps":["trace[187537689] 'process raft request'  (duration: 119.705144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:16.060589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.178284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:24:16.060680Z","caller":"traceutil/trace.go:171","msg":"trace[2127996318] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:986; }","duration":"125.313412ms","start":"2024-09-16T10:24:15.935346Z","end":"2024-09-16T10:24:16.060659Z","steps":["trace[2127996318] 'range keys from in-memory index tree'  (duration: 125.097316ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:07.796336Z","caller":"traceutil/trace.go:171","msg":"trace[28147226] transaction","detail":"{read_only:false; response_revision:1242; number_of_response:1; }","duration":"128.826483ms","start":"2024-09-16T10:25:07.667485Z","end":"2024-09-16T10:25:07.796311Z","steps":["trace[28147226] 'process raft request'  (duration: 41.106171ms)","trace[28147226] 'compare'  (duration: 87.53434ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.424319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.488522ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031931970271159 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" mod_revision:812 > success:<request_put:<key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" value_size:4029 >> failure:<request_range:<key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T10:25:21.424401Z","caller":"traceutil/trace.go:171","msg":"trace[1168470588] linearizableReadLoop","detail":"{readStateIndex:1335; appliedIndex:1334; }","duration":"177.395065ms","start":"2024-09-16T10:25:21.246995Z","end":"2024-09-16T10:25:21.424390Z","steps":["trace[1168470588] 'read index received'  (duration: 48.427907ms)","trace[1168470588] 'applied index is now lower than readState.Index'  (duration: 128.965162ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.424444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.446761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.424466Z","caller":"traceutil/trace.go:171","msg":"trace[1171179904] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1302; }","duration":"177.469291ms","start":"2024-09-16T10:25:21.246991Z","end":"2024-09-16T10:25:21.424460Z","steps":["trace[1171179904] 'agreement among raft nodes before linearized reading'  (duration: 177.429463ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:21.424486Z","caller":"traceutil/trace.go:171","msg":"trace[1930200040] transaction","detail":"{read_only:false; response_revision:1302; number_of_response:1; }","duration":"247.357795ms","start":"2024-09-16T10:25:21.177107Z","end":"2024-09-16T10:25:21.424464Z","steps":["trace[1930200040] 'process raft request'  (duration: 118.297085ms)","trace[1930200040] 'compare'  (duration: 128.26971ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.652910Z","caller":"traceutil/trace.go:171","msg":"trace[1856019889] linearizableReadLoop","detail":"{readStateIndex:1338; appliedIndex:1335; }","duration":"218.326846ms","start":"2024-09-16T10:25:21.434567Z","end":"2024-09-16T10:25:21.652894Z","steps":["trace[1856019889] 'read index received'  (duration: 55.93458ms)","trace[1856019889] 'applied index is now lower than readState.Index'  (duration: 162.391571ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.652969Z","caller":"traceutil/trace.go:171","msg":"trace[1279722024] transaction","detail":"{read_only:false; response_revision:1305; number_of_response:1; }","duration":"224.683287ms","start":"2024-09-16T10:25:21.428268Z","end":"2024-09-16T10:25:21.652951Z","steps":["trace[1279722024] 'process raft request'  (duration: 224.540452ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:21.653003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.415614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.653027Z","caller":"traceutil/trace.go:171","msg":"trace[1008371896] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1305; }","duration":"218.457307ms","start":"2024-09-16T10:25:21.434563Z","end":"2024-09-16T10:25:21.653020Z","steps":["trace[1008371896] 'agreement among raft nodes before linearized reading'  (duration: 218.392253ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:21.652921Z","caller":"traceutil/trace.go:171","msg":"trace[1132385399] transaction","detail":"{read_only:false; response_revision:1304; number_of_response:1; }","duration":"225.049342ms","start":"2024-09-16T10:25:21.427850Z","end":"2024-09-16T10:25:21.652899Z","steps":["trace[1132385399] 'process raft request'  (duration: 131.625555ms)","trace[1132385399] 'compare'  (duration: 93.227933ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.868227Z","caller":"traceutil/trace.go:171","msg":"trace[1246984751] linearizableReadLoop","detail":"{readStateIndex:1339; appliedIndex:1338; }","duration":"139.924393ms","start":"2024-09-16T10:25:21.728284Z","end":"2024-09-16T10:25:21.868208Z","steps":["trace[1246984751] 'read index received'  (duration: 63.202511ms)","trace[1246984751] 'applied index is now lower than readState.Index'  (duration: 76.72121ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.868259Z","caller":"traceutil/trace.go:171","msg":"trace[501466804] transaction","detail":"{read_only:false; response_revision:1306; number_of_response:1; }","duration":"210.400699ms","start":"2024-09-16T10:25:21.657832Z","end":"2024-09-16T10:25:21.868233Z","steps":["trace[501466804] 'process raft request'  (duration: 133.673421ms)","trace[501466804] 'compare'  (duration: 76.618072ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.868373Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.878283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.868410Z","caller":"traceutil/trace.go:171","msg":"trace[1169815467] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1306; }","duration":"121.931335ms","start":"2024-09-16T10:25:21.746471Z","end":"2024-09-16T10:25:21.868402Z","steps":["trace[1169815467] 'agreement among raft nodes before linearized reading'  (duration: 121.861476ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:21.868538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.236255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-16T10:25:21.868579Z","caller":"traceutil/trace.go:171","msg":"trace[344111638] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1306; }","duration":"140.292497ms","start":"2024-09-16T10:25:21.728276Z","end":"2024-09-16T10:25:21.868569Z","steps":["trace[344111638] 'agreement among raft nodes before linearized reading'  (duration: 140.016451ms)"],"step_count":1}
	
	
	==> gcp-auth [cfade64badb92dacf9d0c56d24c0fb7e95088f5abf7a814ef4801971e4b26216] <==
	2024/09/16 10:27:29 GCP Auth Webhook started!
	2024/09/16 10:32:45 Ready to marshal response ...
	2024/09/16 10:32:45 Ready to write response ...
	2024/09/16 10:32:45 Ready to marshal response ...
	2024/09/16 10:32:45 Ready to write response ...
	2024/09/16 10:32:45 Ready to marshal response ...
	2024/09/16 10:32:45 Ready to write response ...
	
	
	==> kernel <==
	 10:33:13 up 15 min,  0 users,  load average: 1.14, 0.63, 0.42
	Linux addons-191972 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [4c4482bfa98cf1024c4b123130c5a320a891204919b9a1459b6f3269e1e7d29d] <==
	I0916 10:31:09.447844       1 main.go:299] handling current node
	I0916 10:31:19.450907       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:19.450940       1 main.go:299] handling current node
	I0916 10:31:29.448450       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:29.448483       1 main.go:299] handling current node
	I0916 10:31:39.447884       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:39.447915       1 main.go:299] handling current node
	I0916 10:31:49.443809       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:49.443842       1 main.go:299] handling current node
	I0916 10:31:59.441426       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:31:59.441461       1 main.go:299] handling current node
	I0916 10:32:09.447827       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:09.447865       1 main.go:299] handling current node
	I0916 10:32:19.448134       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:19.448165       1 main.go:299] handling current node
	I0916 10:32:29.443818       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:29.443852       1 main.go:299] handling current node
	I0916 10:32:39.441647       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:39.441692       1 main.go:299] handling current node
	I0916 10:32:49.441742       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:49.441771       1 main.go:299] handling current node
	I0916 10:32:59.441556       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:59.441597       1 main.go:299] handling current node
	I0916 10:33:09.442476       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:33:09.442533       1 main.go:299] handling current node
	
	
	==> kube-apiserver [c76b948fbd083e0e5229c3ac96548e67224afd5a037343a2b118da9b9ae5ad3a] <==
	W0916 10:26:15.413935       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:16.459096       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:17.509475       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:18.532761       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:19.545400       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:20.553347       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:21.640741       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:22.735942       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:24.007851       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:25.084707       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:26.137166       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:27.215912       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:28.269709       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:29.285978       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:30.385745       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:31.389520       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:53.671732       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:26:53.671804       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	W0916 10:27:11.712823       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:27:11.712858       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	W0916 10:27:11.785537       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:27:11.785576       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	I0916 10:32:45.560480       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.245.36"}
	I0916 10:33:06.754025       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:33:07.773034       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [6e4dbd39a8ef56c5a753071ab0489111fcbcaac9f7cbe3b4fdf88030aa41c77b] <==
	I0916 10:27:29.502976       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="56.15µs"
	I0916 10:27:44.013104       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 10:27:44.016022       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 10:27:44.039693       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0916 10:27:44.041144       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0916 10:27:56.735238       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-191972"
	I0916 10:32:39.064214       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="8.395µs"
	I0916 10:32:44.235424       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.98µs"
	I0916 10:32:44.890679       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-769b77f747" duration="8.047µs"
	I0916 10:32:45.720872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="146.32449ms"
	I0916 10:32:45.726198       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="5.274582ms"
	I0916 10:32:45.726288       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="52.808µs"
	I0916 10:32:45.732102       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="86.759µs"
	I0916 10:32:49.188017       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0916 10:32:49.362646       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="72.728µs"
	I0916 10:32:49.382396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="5.571798ms"
	I0916 10:32:49.382492       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="50.896µs"
	I0916 10:32:52.645155       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-191972"
	I0916 10:32:56.120354       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="11.207µs"
	I0916 10:33:06.234578       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	E0916 10:33:07.774278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:33:08.763057       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:08.763095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:33:10.493315       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:10.493378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [d9d335328779062c055353442bb9ca0c1e2fef63bc1c598650e6ea25604013a5] <==
	I0916 10:23:59.129562       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:59.824945       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:23:59.825067       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:24:00.037013       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:24:00.040602       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:24:00.135054       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:24:00.135450       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:24:00.135471       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:24:00.237323       1 config.go:199] "Starting service config controller"
	I0916 10:24:00.237372       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:24:00.237410       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:24:00.237416       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:24:00.237471       1 config.go:328] "Starting node config controller"
	I0916 10:24:00.237491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:24:00.337642       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:24:00.337724       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:24:00.337829       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0539bdd901d4af068b2160b27df45018e72113a7a75c6a082ae7e2f64f3f908b] <==
	W0916 10:23:49.138663       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0916 10:23:49.138662       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:49.138689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:49.138696       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0916 10:23:49.138760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0916 10:23:49.138769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:23:49.138774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:49.139877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:49.139916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.064082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:50.064133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.118512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:50.118558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.132045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:50.132096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.175403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:50.175438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.199805       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:50.199848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.241540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:50.241599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:23:50.633994       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:33:01 addons-191972 kubelet[1565]: I0916 10:33:01.473147    1565 scope.go:117] "RemoveContainer" containerID="85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09"
	Sep 16 10:33:01 addons-191972 kubelet[1565]: E0916 10:33:01.473385    1565 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-rwwbs_gadget(62b2176c-9dcb-4741-bd18-81ab2a2303f2)\"" pod="gadget/gadget-rwwbs" podUID="62b2176c-9dcb-4741-bd18-81ab2a2303f2"
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976232    1565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-modules\") pod \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\" (UID: \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\") "
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976274    1565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-host\") pod \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\" (UID: \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\") "
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976292    1565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-bpffs\") pod \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\" (UID: \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\") "
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976318    1565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jwxh\" (UniqueName: \"kubernetes.io/projected/62b2176c-9dcb-4741-bd18-81ab2a2303f2-kube-api-access-5jwxh\") pod \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\" (UID: \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\") "
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976339    1565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-cgroup\") pod \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\" (UID: \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\") "
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976358    1565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-run\") pod \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\" (UID: \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\") "
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976354    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-bpffs" (OuterVolumeSpecName: "bpffs") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976380    1565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-debugfs\") pod \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\" (UID: \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\") "
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976380    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-cgroup" (OuterVolumeSpecName: "cgroup") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976353    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-modules" (OuterVolumeSpecName: "modules") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976356    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-host" (OuterVolumeSpecName: "host") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976396    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-run" (OuterVolumeSpecName: "run") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976402    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-debugfs" (OuterVolumeSpecName: "debugfs") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976506    1565 reconciler_common.go:288] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-modules\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976522    1565 reconciler_common.go:288] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-bpffs\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976533    1565 reconciler_common.go:288] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-cgroup\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976546    1565 reconciler_common.go:288] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-run\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.978118    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b2176c-9dcb-4741-bd18-81ab2a2303f2-kube-api-access-5jwxh" (OuterVolumeSpecName: "kube-api-access-5jwxh") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "kube-api-access-5jwxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.076713    1565 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5jwxh\" (UniqueName: \"kubernetes.io/projected/62b2176c-9dcb-4741-bd18-81ab2a2303f2-kube-api-access-5jwxh\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.076783    1565 reconciler_common.go:288] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-debugfs\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.076797    1565 reconciler_common.go:288] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-host\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.398404    1565 scope.go:117] "RemoveContainer" containerID="85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09"
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.474491    1565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62b2176c-9dcb-4741-bd18-81ab2a2303f2" path="/var/lib/kubelet/pods/62b2176c-9dcb-4741-bd18-81ab2a2303f2/volumes"
	
	
	==> storage-provisioner [62a4b8c25074dcef9656a9b6e749de86b5f7c97f45a25cd328153d14be1d5a78] <==
	I0916 10:24:03.139108       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:24:03.230289       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:24:03.230361       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:24:03.238016       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:24:03.238457       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff346362-6d54-491c-b142-6d85e8abf2d5", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-191972_e8089787-9f1d-4116-8123-a579d9482714 became leader
	I0916 10:24:03.238505       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-191972_e8089787-9f1d-4116-8123-a579d9482714!
	I0916 10:24:03.339118       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-191972_e8089787-9f1d-4116-8123-a579d9482714!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-191972 -n addons-191972
helpers_test.go:261: (dbg) Run:  kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (552.273µs)
helpers_test.go:263: kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
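The recurring "fork/exec /usr/local/bin/kubectl: exec format error" above comes from the kernel refusing to execute the kubectl binary, which on Linux almost always means the binary's architecture does not match the host (x86_64 per the kernel section above). A minimal Go sketch for confirming that diagnosis, assuming only the path taken from the log and read access to it (this program is hypothetical and not part of the test suite):

package main

import (
	"debug/elf"
	"fmt"
	"log"
	"runtime"
)

func main() {
	const path = "/usr/local/bin/kubectl" // path taken from the failing commands above

	// elf.Open fails for anything that is not a readable ELF file; a
	// truncated download or a Mach-O binary also yields "exec format
	// error" on Linux, so a failure here is itself diagnostic.
	f, err := elf.Open(path)
	if err != nil {
		log.Fatalf("not a readable ELF binary: %v", err)
	}
	defer f.Close()

	// Compare the binary's machine type against the host; a mismatch
	// (for example EM_AARCH64 on an amd64 host) reproduces the failure.
	fmt.Printf("host: %s/%s\n", runtime.GOOS, runtime.GOARCH)
	fmt.Printf("binary: machine=%v class=%v\n", f.Machine, f.Class)
}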
--- FAIL: TestAddons/parallel/Ingress (1.88s)

TestAddons/parallel/MetricsServer (367.04s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.61833ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-s7654" [14e280ea-8ba8-4805-844c-aeff8fb18ce0] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003130802s
addons_test.go:417: (dbg) Run:  kubectl --context addons-191972 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-191972 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (377.135µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-191972 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-191972 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (333.638µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-191972 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-191972 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (408.409µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-191972 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-191972 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (452.611µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-191972 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-191972 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (365.276µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-191972 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-191972 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (366.682µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-191972 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-191972 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (447.44µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-191972 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-191972 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (427.381µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-191972 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-191972 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (519.469µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-191972 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-191972 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (580.224µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-191972 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-191972 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (479.648µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-191972 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-191972 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (482.883µs)
addons_test.go:417: (dbg) Run:  kubectl --context addons-191972 top pods -n kube-system
addons_test.go:417: (dbg) Non-zero exit: kubectl --context addons-191972 top pods -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (520.449µs)
addons_test.go:431: failed checking metric server: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-191972 addons disable metrics-server --alsologtostderr -v=1
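Note that metrics-server itself never failed a request here: every "kubectl top pods" attempt died with the same "exec format error" before reaching the API server, so the addon's metrics API was never actually exercised. What that command reads is the metrics.k8s.io/v1beta1 API served by metrics-server; the following client-go sketch issues the equivalent query, assuming a working kubeconfig containing the addons-191972 context (the kubeconfig handling and the k8s.io/metrics dependency are assumptions for illustration, not part of this test):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	metrics "k8s.io/metrics/pkg/client/clientset/versioned"
)

func main() {
	// Load the default kubeconfig (~/.kube/config) and pin the context,
	// mirroring `kubectl --context addons-191972`.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "addons-191972"}
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		log.Fatal(err)
	}

	mc, err := metrics.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of `kubectl top pods -n kube-system`: list PodMetrics
	// from the metrics.k8s.io/v1beta1 API that metrics-server serves.
	pods, err := mc.MetricsV1beta1().PodMetricses("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err) // an unhealthy metrics-server would surface here
	}
	for _, p := range pods.Items {
		for _, c := range p.Containers {
			fmt.Printf("%s/%s cpu=%s mem=%s\n", p.Name, c.Name, c.Usage.Cpu(), c.Usage.Memory())
		}
	}
}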
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-191972
helpers_test.go:235: (dbg) docker inspect addons-191972:
-- stdout --
	[
	    {
	        "Id": "49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd",
	        "Created": "2024-09-16T10:23:37.048894749Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 13416,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:23:37.183215602Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/hosts",
	        "LogPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd-json.log",
	        "Name": "/addons-191972",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-191972:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-191972",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-191972",
	                "Source": "/var/lib/docker/volumes/addons-191972/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-191972",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-191972",
	                "name.minikube.sigs.k8s.io": "addons-191972",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b247e3d2e57f223fa64fb9fece255c3b6a0f61eb064ba71e6e8c51f7e6b8590a",
	            "SandboxKey": "/var/run/docker/netns/b247e3d2e57f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-191972": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "aac8db9a46c7b7c219b85113240d1d4a2ee20d1c156fb7315fdf6aa5e797f6a8",
	                    "EndpointID": "ab683490c93590fb0411cd607b8ad8f3100f7ae01f11dd3e855f6321d940faae",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-191972",
	                        "49285aed0ac6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
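For orientation, the inspect dump above is the container's post-mortem state; the host-mapped ports it records can be pulled straight from the daemon with a Go template. A minimal sketch, assuming the profile container addons-191972 from this run (it is the same template the log itself runs below when dialing SSH):

	# sketch: print the host port mapped to the node's SSH port (22/tcp)
	docker container inspect addons-191972 \
	  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'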
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-191972 -n addons-191972
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-191972 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-191972 logs -n 25: (1.195425951s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-297488              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p download-only-297488              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-024449              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-024449              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-297488              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-024449              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | download-docker-065822 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | download-docker-065822               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-065822            | download-docker-065822 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | binary-mirror-727123   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | binary-mirror-727123                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34779               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-727123              | binary-mirror-727123   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | enable dashboard -p                  | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| start   | -p addons-191972 --wait=true         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:27 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | -p addons-191972                     |                        |         |         |                     |                     |
	| ip      | addons-191972 ip                     | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | -p addons-191972                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:33 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-191972 addons                 | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:38 UTC | 16 Sep 24 10:38 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:15.015457   12653 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:15.015610   12653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:15.015623   12653 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:15.015629   12653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:15.015835   12653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:23:15.016423   12653 out.go:352] Setting JSON to false
	I0916 10:23:15.017221   12653 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":339,"bootTime":1726481856,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:15.017316   12653 start.go:139] virtualization: kvm guest
	I0916 10:23:15.019468   12653 out.go:177] * [addons-191972] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:23:15.020856   12653 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:15.020860   12653 notify.go:220] Checking for updates...
	I0916 10:23:15.023158   12653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:15.024282   12653 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:23:15.025336   12653 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:23:15.026362   12653 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:15.027468   12653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:15.028714   12653 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:15.049632   12653 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:23:15.049710   12653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:15.095467   12653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:15.085826834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:15.095614   12653 docker.go:318] overlay module found
	I0916 10:23:15.097552   12653 out.go:177] * Using the docker driver based on user configuration
	I0916 10:23:15.098917   12653 start.go:297] selected driver: docker
	I0916 10:23:15.098932   12653 start.go:901] validating driver "docker" against <nil>
	I0916 10:23:15.098957   12653 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:15.099817   12653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:15.144749   12653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:15.136589077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:15.144922   12653 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:15.145171   12653 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:15.147081   12653 out.go:177] * Using Docker driver with root privileges
	I0916 10:23:15.148504   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:15.148563   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:15.148575   12653 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:15.148632   12653 start.go:340] cluster config:
	{Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:15.149981   12653 out.go:177] * Starting "addons-191972" primary control-plane node in "addons-191972" cluster
	I0916 10:23:15.151239   12653 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:23:15.152375   12653 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:23:15.153439   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:15.153479   12653 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:23:15.153492   12653 cache.go:56] Caching tarball of preloaded images
	I0916 10:23:15.153495   12653 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:23:15.153601   12653 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:23:15.153613   12653 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:23:15.153950   12653 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json ...
	I0916 10:23:15.153974   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json: {Name:mk77e04db13eac753d69895eba14a3f7223b28d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:15.169560   12653 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:23:15.169666   12653 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:23:15.169681   12653 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:23:15.169685   12653 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:23:15.169694   12653 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:23:15.169701   12653 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:23:27.861517   12653 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:23:27.861553   12653 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:23:27.861589   12653 start.go:360] acquireMachinesLock for addons-191972: {Name:mk1204ee6335c794af5ff39cd93a214e3c1d654b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:23:27.861691   12653 start.go:364] duration metric: took 80.959µs to acquireMachinesLock for "addons-191972"
	I0916 10:23:27.861720   12653 start.go:93] Provisioning new machine with config: &{Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:23:27.861797   12653 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:23:27.864363   12653 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0916 10:23:27.864609   12653 start.go:159] libmachine.API.Create for "addons-191972" (driver="docker")
	I0916 10:23:27.864644   12653 client.go:168] LocalClient.Create starting
	I0916 10:23:27.864787   12653 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:23:28.100386   12653 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:23:28.472961   12653 cli_runner.go:164] Run: docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:23:28.488573   12653 cli_runner.go:211] docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:23:28.488653   12653 network_create.go:284] running [docker network inspect addons-191972] to gather additional debugging logs...
	I0916 10:23:28.488675   12653 cli_runner.go:164] Run: docker network inspect addons-191972
	W0916 10:23:28.503724   12653 cli_runner.go:211] docker network inspect addons-191972 returned with exit code 1
	I0916 10:23:28.503773   12653 network_create.go:287] error running [docker network inspect addons-191972]: docker network inspect addons-191972: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-191972 not found
	I0916 10:23:28.503790   12653 network_create.go:289] output of [docker network inspect addons-191972]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-191972 not found
	
	** /stderr **
	I0916 10:23:28.503874   12653 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:28.520445   12653 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ac6790}
	I0916 10:23:28.520486   12653 network_create.go:124] attempt to create docker network addons-191972 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 10:23:28.520531   12653 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-191972 addons-191972
	I0916 10:23:28.578324   12653 network_create.go:108] docker network addons-191972 192.168.49.0/24 created
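	The subnet minikube picked above can be cross-checked against the daemon; a minimal sketch, assuming the network name addons-191972 just created:
	
	    # sketch: print the subnet and gateway of the freshly created cluster network
	    docker network inspect addons-191972 \
	      --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'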
	I0916 10:23:28.578353   12653 kic.go:121] calculated static IP "192.168.49.2" for the "addons-191972" container
	I0916 10:23:28.578405   12653 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:23:28.593459   12653 cli_runner.go:164] Run: docker volume create addons-191972 --label name.minikube.sigs.k8s.io=addons-191972 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:23:28.611104   12653 oci.go:103] Successfully created a docker volume addons-191972
	I0916 10:23:28.611189   12653 cli_runner.go:164] Run: docker run --rm --name addons-191972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --entrypoint /usr/bin/test -v addons-191972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:23:32.566442   12653 cli_runner.go:217] Completed: docker run --rm --name addons-191972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --entrypoint /usr/bin/test -v addons-191972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (3.955205965s)
	I0916 10:23:32.566475   12653 oci.go:107] Successfully prepared a docker volume addons-191972
	I0916 10:23:32.566499   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:32.566524   12653 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:23:32.566588   12653 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-191972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:23:36.989473   12653 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-191972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.422844639s)
	I0916 10:23:36.989499   12653 kic.go:203] duration metric: took 4.422974303s to extract preloaded images to volume ...
	W0916 10:23:36.989616   12653 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:23:36.989704   12653 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:23:37.034645   12653 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-191972 --name addons-191972 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-191972 --network addons-191972 --ip 192.168.49.2 --volume addons-191972:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
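	The run flags above map one-to-one onto the HostConfig fields in the earlier inspect dump (Privileged, seccomp/apparmor unconfined, tmpfs on /run and /tmp, --memory=4000mb, --cpus=2). A minimal sketch for confirming the applied limits, assuming the same container name:
	
	    # sketch: Memory is reported in bytes (4194304000 = 4000 MiB), NanoCpus in 1e-9 CPUs
	    docker container inspect addons-191972 \
	      --format 'mem={{.HostConfig.Memory}} nanocpus={{.HostConfig.NanoCpus}}'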
	I0916 10:23:37.351088   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Running}}
	I0916 10:23:37.369798   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.389505   12653 cli_runner.go:164] Run: docker exec addons-191972 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:23:37.432507   12653 oci.go:144] the created container "addons-191972" has a running status.
	I0916 10:23:37.432542   12653 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa...
	I0916 10:23:37.512853   12653 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:23:37.532177   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.549342   12653 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:23:37.549361   12653 kic_runner.go:114] Args: [docker exec --privileged addons-191972 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:23:37.594990   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.611429   12653 machine.go:93] provisionDockerMachine start ...
	I0916 10:23:37.611513   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:37.628951   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:37.629230   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:37.629249   12653 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:23:37.630101   12653 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54456->127.0.0.1:32768: read: connection reset by peer
	I0916 10:23:40.759062   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-191972
	
	I0916 10:23:40.759087   12653 ubuntu.go:169] provisioning hostname "addons-191972"
	I0916 10:23:40.759139   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:40.776123   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:40.776294   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:40.776306   12653 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-191972 && echo "addons-191972" | sudo tee /etc/hostname
	I0916 10:23:40.917999   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-191972
	
	I0916 10:23:40.918073   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:40.934369   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:40.934536   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:40.934552   12653 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-191972' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-191972/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-191972' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:23:41.063670   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:23:41.063696   12653 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:23:41.063755   12653 ubuntu.go:177] setting up certificates
	I0916 10:23:41.063769   12653 provision.go:84] configureAuth start
	I0916 10:23:41.063821   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.080185   12653 provision.go:143] copyHostCerts
	I0916 10:23:41.080289   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:23:41.080452   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:23:41.080539   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:23:41.080607   12653 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.addons-191972 san=[127.0.0.1 192.168.49.2 addons-191972 localhost minikube]
	I0916 10:23:41.189624   12653 provision.go:177] copyRemoteCerts
	I0916 10:23:41.189685   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:23:41.189718   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.206072   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.299940   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:23:41.321259   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:23:41.342100   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:23:41.362764   12653 provision.go:87] duration metric: took 298.977855ms to configureAuth
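	The server certificate generated above embeds the SANs listed at provision time (127.0.0.1, 192.168.49.2, addons-191972, localhost, minikube); a minimal sketch for inspecting them offline, assuming the server.pem path from this log:
	
	    # sketch: dump the SAN extension of the machine's server certificate
	    openssl x509 -in /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem \
	      -noout -text | grep -A1 'Subject Alternative Name'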
	I0916 10:23:41.362793   12653 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:23:41.362955   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:41.362966   12653 machine.go:96] duration metric: took 3.751519266s to provisionDockerMachine
	I0916 10:23:41.362991   12653 client.go:171] duration metric: took 13.498318264s to LocalClient.Create
	I0916 10:23:41.363014   12653 start.go:167] duration metric: took 13.498406844s to libmachine.API.Create "addons-191972"
	I0916 10:23:41.363024   12653 start.go:293] postStartSetup for "addons-191972" (driver="docker")
	I0916 10:23:41.363035   12653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:41.363112   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:41.363159   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.379631   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.472315   12653 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:23:41.475416   12653 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:41.475455   12653 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:41.475469   12653 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:41.475477   12653 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:23:41.475490   12653 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:23:41.475562   12653 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:23:41.475593   12653 start.go:296] duration metric: took 112.560003ms for postStartSetup
	I0916 10:23:41.475953   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.491831   12653 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json ...
	I0916 10:23:41.492098   12653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:23:41.492159   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.508709   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.604422   12653 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:23:41.608355   12653 start.go:128] duration metric: took 13.746544864s to createHost
	I0916 10:23:41.608378   12653 start.go:83] releasing machines lock for "addons-191972", held for 13.74667303s
	I0916 10:23:41.608449   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.624552   12653 ssh_runner.go:195] Run: cat /version.json
	I0916 10:23:41.624594   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.624666   12653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:23:41.624742   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.640830   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.641558   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.811513   12653 ssh_runner.go:195] Run: systemctl --version
	I0916 10:23:41.816090   12653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:41.820031   12653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:23:41.841966   12653 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:23:41.842040   12653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:41.867614   12653 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
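	Note that the find/mv step above side-lines conflicting bridge/podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them. A minimal sketch to list what was disabled, run inside the node (e.g. via minikube ssh on this profile):
	
	    # sketch: show the CNI configs the step above renamed out of the way
	    ls -1 /etc/cni/net.d/*.mk_disabled 2>/dev/null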
	I0916 10:23:41.867637   12653 start.go:495] detecting cgroup driver to use...
	I0916 10:23:41.867665   12653 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:41.867707   12653 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:23:41.878761   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:23:41.889209   12653 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:23:41.889272   12653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:23:41.901658   12653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:23:41.914376   12653 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:23:41.989625   12653 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:23:42.064036   12653 docker.go:233] disabling docker service ...
	I0916 10:23:42.064087   12653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:23:42.082378   12653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:23:42.092694   12653 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:23:42.163431   12653 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:23:42.235566   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:23:42.245920   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:42.260071   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:23:42.268844   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:23:42.277914   12653 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:23:42.277973   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:23:42.287090   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:42.295426   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:23:42.303716   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:42.312468   12653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:42.320449   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:23:42.328970   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:23:42.337386   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:23:42.345791   12653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:42.352855   12653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:23:42.359971   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:42.438798   12653 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:23:42.548862   12653 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:23:42.548940   12653 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:23:42.552403   12653 start.go:563] Will wait 60s for crictl version
	I0916 10:23:42.552460   12653 ssh_runner.go:195] Run: which crictl
	I0916 10:23:42.555471   12653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:23:42.586679   12653 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:23:42.586752   12653 ssh_runner.go:195] Run: containerd --version
	I0916 10:23:42.608454   12653 ssh_runner.go:195] Run: containerd --version
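
Once containerd is restarted, both the socket and the CRI endpoint are probed before start proceeds (each wait is capped at 60s per the log). The equivalent manual check:

    stat /run/containerd/containerd.sock   # socket file exists
    sudo crictl version                    # expect RuntimeName: containerd, RuntimeApiVersion: v1
    containerd --version
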
	I0916 10:23:42.632432   12653 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:23:42.633762   12653 cli_runner.go:164] Run: docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
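
The --format argument above is a Go template that flattens docker network inspect output into a single JSON object (name, driver, subnet, gateway, MTU, container IPs). A simplified sketch of the same idea with a shorter template:

    docker network inspect addons-191972 \
      --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} gw={{.Gateway}}{{end}}'
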
	I0916 10:23:42.650400   12653 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:42.653892   12653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
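
The /etc/hosts update is idempotent: the entry is grepped for first, and written only if missing by filtering out any old line and appending the new one through a temp file. A sketch of that pattern (IP and hostname taken from the log):

    grep -q $'192.168.49.1\thost.minikube.internal' /etc/hosts || {
      { grep -v $'\thost.minikube.internal$' /etc/hosts
        printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/hosts.$$
      sudo cp /tmp/hosts.$$ /etc/hosts
    }
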
	I0916 10:23:42.664053   12653 kubeadm.go:883] updating cluster {Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:42.664154   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:42.664195   12653 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:42.695688   12653 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:23:42.695710   12653 containerd.go:534] Images already preloaded, skipping extraction
	I0916 10:23:42.695778   12653 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:42.727148   12653 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:23:42.727166   12653 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:23:42.727174   12653 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0916 10:23:42.727255   12653 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-191972 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:23:42.727302   12653 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:23:42.757474   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:42.757493   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:42.757502   12653 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:42.757520   12653 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-191972 NodeName:addons-191972 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:42.757633   12653 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-191972"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
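
The generated file stacks four YAML documents separated by ---: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta3), a KubeletConfiguration, and a KubeProxyConfiguration. A hand-edited copy can be sanity-checked before feeding it to kubeadm init; the validate subcommand exists in recent kubeadm releases, though this log never runs it:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
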
	
	I0916 10:23:42.757684   12653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:42.765604   12653 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:23:42.765672   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:42.773363   12653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:23:42.789280   12653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:42.805100   12653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0916 10:23:42.820420   12653 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:23:42.823264   12653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:23:42.832700   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:42.907069   12653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:42.919246   12653 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972 for IP: 192.168.49.2
	I0916 10:23:42.919266   12653 certs.go:194] generating shared ca certs ...
	I0916 10:23:42.919279   12653 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:42.919399   12653 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:23:43.054784   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt ...
	I0916 10:23:43.054815   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt: {Name:mkf05eaa3032985e939bd1a93aa36a6d50242974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.055008   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key ...
	I0916 10:23:43.055031   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key: {Name:mk4cf19316dad04ab708c5c17e172ec92fc35230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.055134   12653 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:23:43.268289   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt ...
	I0916 10:23:43.268318   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt: {Name:mk68da284b9ad8d396a1f11e7cfb94cc6f208c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.268510   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key ...
	I0916 10:23:43.268532   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key: {Name:mkdf8c5da2a6d70c9ece2277843ebe69f9105c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.268626   12653 certs.go:256] generating profile certs ...
	I0916 10:23:43.268694   12653 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key
	I0916 10:23:43.268720   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt with IP's: []
	I0916 10:23:43.341520   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt ...
	I0916 10:23:43.341551   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: {Name:mke3c2895145f9c692cb1e6451d9766499ccc877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.341738   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key ...
	I0916 10:23:43.341755   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key: {Name:mkd6237ae8ebf429452ae0c60cea457b1f9cff1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.341855   12653 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369
	I0916 10:23:43.341882   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 10:23:43.403750   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 ...
	I0916 10:23:43.403775   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369: {Name:mk72db26b8519849abdf811ed93be5caeac2267d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.403951   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369 ...
	I0916 10:23:43.403973   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369: {Name:mk4b11dab0a085e395344dc35616a0c16f298191 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.404065   12653 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt
	I0916 10:23:43.404155   12653 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key
	I0916 10:23:43.404230   12653 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key
	I0916 10:23:43.404250   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt with IP's: []
	I0916 10:23:43.488130   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt ...
	I0916 10:23:43.488160   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt: {Name:mk11d8f9c437e5586897185f4551df7594041471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.488342   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key ...
	I0916 10:23:43.488360   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key: {Name:mk18734ee357c50ce0ff509ffb1c7e42743fa1b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.488577   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:23:43.488617   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:23:43.488652   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:43.488682   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:23:43.489279   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:43.511557   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:23:43.532934   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:43.553377   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:43.575078   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:23:43.595868   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:43.616905   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:43.637839   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:43.658915   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:43.680485   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:23:43.696295   12653 ssh_runner.go:195] Run: openssl version
	I0916 10:23:43.701282   12653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:43.709681   12653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.712715   12653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.712762   12653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.718832   12653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
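
The b5213941.0 symlink name follows OpenSSL's subject-hash lookup convention: openssl x509 -hash prints the hash of the certificate's subject, and OpenSSL resolves trust anchors via /etc/ssl/certs/<hash>.0. The same linking, done by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
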
	I0916 10:23:43.727190   12653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:43.730247   12653 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:23:43.730290   12653 kubeadm.go:392] StartCluster: {Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:43.730356   12653 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:23:43.730405   12653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:23:43.761830   12653 cri.go:89] found id: ""
	I0916 10:23:43.761893   12653 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:43.770086   12653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:43.778465   12653 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:23:43.778522   12653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:43.786355   12653 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:43.786373   12653 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:43.786419   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:43.794471   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:43.794519   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:43.802487   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:43.810401   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:43.810451   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:43.817541   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:43.824799   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:43.824842   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:43.832032   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:43.839239   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:43.839298   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
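
The stale-config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not mention it; on this first start none of the files exist, so every grep exits with status 2 and the rm -f calls are no-ops. The loop, condensed:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f \
        || sudo rm -f /etc/kubernetes/$f
    done
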
	I0916 10:23:43.847649   12653 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:23:43.880192   12653 kubeadm.go:310] W0916 10:23:43.879583    1109 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:43.880773   12653 kubeadm.go:310] W0916 10:23:43.880291    1109 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:43.896580   12653 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:23:43.944226   12653 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
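
Both v1beta3 deprecation warnings above point at the same fix, which this run does not perform; migrating the config forward would look like this (the output path is illustrative):

    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-new.yaml
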
	I0916 10:23:52.227261   12653 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:52.227338   12653 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:52.227418   12653 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:23:52.227466   12653 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:23:52.227501   12653 kubeadm.go:310] OS: Linux
	I0916 10:23:52.227541   12653 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:23:52.227584   12653 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:23:52.227625   12653 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:23:52.227670   12653 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:23:52.227711   12653 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:23:52.227786   12653 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:23:52.227872   12653 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:23:52.227947   12653 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:23:52.227994   12653 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:23:52.228098   12653 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:52.228218   12653 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:52.228360   12653 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:23:52.228491   12653 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:52.230143   12653 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:52.230239   12653 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:52.230328   12653 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:52.230422   12653 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:52.230504   12653 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:52.230596   12653 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:52.230685   12653 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:52.230768   12653 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:52.230910   12653 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-191972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:52.230984   12653 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:52.231130   12653 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-191972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:52.231228   12653 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:52.231331   12653 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:52.231395   12653 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:52.231471   12653 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:52.231543   12653 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:52.231622   12653 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:52.231683   12653 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:52.231759   12653 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:52.231871   12653 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:52.231979   12653 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:52.232069   12653 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:52.233407   12653 out.go:235]   - Booting up control plane ...
	I0916 10:23:52.233500   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:52.233589   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:52.233654   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:52.233747   12653 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:52.233846   12653 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:52.233895   12653 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:52.234011   12653 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:52.234102   12653 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:52.234155   12653 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.63037ms
	I0916 10:23:52.234224   12653 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:52.234282   12653 kubeadm.go:310] [api-check] The API server is healthy after 4.501222011s
	I0916 10:23:52.234402   12653 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:52.234544   12653 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:52.234625   12653 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:52.234780   12653 kubeadm.go:310] [mark-control-plane] Marking the node addons-191972 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:52.234830   12653 kubeadm.go:310] [bootstrap-token] Using token: fe3fo6.40ynbll2pbwpp3it
	I0916 10:23:52.236918   12653 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:52.237043   12653 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:52.237118   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:52.237261   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:52.237418   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:52.237547   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:52.237659   12653 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:52.237791   12653 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:52.237856   12653 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:52.237898   12653 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:52.237904   12653 kubeadm.go:310] 
	I0916 10:23:52.237963   12653 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:52.237971   12653 kubeadm.go:310] 
	I0916 10:23:52.238040   12653 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:52.238046   12653 kubeadm.go:310] 
	I0916 10:23:52.238070   12653 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:52.238123   12653 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:52.238167   12653 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:52.238173   12653 kubeadm.go:310] 
	I0916 10:23:52.238218   12653 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:52.238223   12653 kubeadm.go:310] 
	I0916 10:23:52.238268   12653 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:52.238274   12653 kubeadm.go:310] 
	I0916 10:23:52.238329   12653 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:52.238418   12653 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:52.238507   12653 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:52.238515   12653 kubeadm.go:310] 
	I0916 10:23:52.238598   12653 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:52.238681   12653 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:52.238690   12653 kubeadm.go:310] 
	I0916 10:23:52.238801   12653 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fe3fo6.40ynbll2pbwpp3it \
	I0916 10:23:52.238908   12653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 10:23:52.238933   12653 kubeadm.go:310] 	--control-plane 
	I0916 10:23:52.238939   12653 kubeadm.go:310] 
	I0916 10:23:52.239012   12653 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:52.239020   12653 kubeadm.go:310] 
	I0916 10:23:52.239095   12653 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fe3fo6.40ynbll2pbwpp3it \
	I0916 10:23:52.239199   12653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
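
The --discovery-token-ca-cert-hash printed above is a SHA-256 over the DER-encoded public key of the cluster CA. It can be recomputed from the CA certificate; note that minikube keeps its certs under /var/lib/minikube/certs rather than kubeadm's default /etc/kubernetes/pki:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
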
	I0916 10:23:52.239210   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:52.239215   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:52.240733   12653 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:23:52.241980   12653 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:23:52.245609   12653 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:23:52.245625   12653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:23:52.261912   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
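
With the docker driver and containerd runtime, minikube picks kindnet, checks that the CNI plugin binaries exist (the stat on /opt/cni/bin/portmap), scp's the manifest to /var/tmp/minikube/cni.yaml, and applies it with the pinned kubectl against the node-local kubeconfig. Rerunning the apply by hand:

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply \
      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
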
	I0916 10:23:52.447057   12653 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:52.447144   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.447165   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-191972 minikube.k8s.io/updated_at=2024_09_16T10_23_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-191972 minikube.k8s.io/primary=true
	I0916 10:23:52.543497   12653 ops.go:34] apiserver oom_adj: -16
	I0916 10:23:52.543643   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:53.044491   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:53.543770   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:54.044061   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:54.544691   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:55.044249   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:55.543918   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:56.043679   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:56.543717   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:57.044619   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:57.107839   12653 kubeadm.go:1113] duration metric: took 4.660750668s to wait for elevateKubeSystemPrivileges
	I0916 10:23:57.107871   12653 kubeadm.go:394] duration metric: took 13.37758355s to StartCluster
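
The repeated "get sa default" calls above are a poll: elevateKubeSystemPrivileges waits, at roughly 500ms intervals judging by the timestamps, until the default service account exists so that the minikube-rbac cluster-admin binding created earlier can take effect. The same wait as a shell loop:

    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
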
	I0916 10:23:57.107890   12653 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:57.107998   12653 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:23:57.108383   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:57.108581   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:57.108610   12653 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:23:57.108666   12653 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:23:57.108789   12653 addons.go:69] Setting yakd=true in profile "addons-191972"
	I0916 10:23:57.108813   12653 addons.go:234] Setting addon yakd=true in "addons-191972"
	I0916 10:23:57.108830   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:57.108844   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.108885   12653 addons.go:69] Setting inspektor-gadget=true in profile "addons-191972"
	I0916 10:23:57.108900   12653 addons.go:234] Setting addon inspektor-gadget=true in "addons-191972"
	I0916 10:23:57.108928   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109000   12653 addons.go:69] Setting gcp-auth=true in profile "addons-191972"
	I0916 10:23:57.109025   12653 mustload.go:65] Loading cluster: addons-191972
	I0916 10:23:57.109143   12653 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-191972"
	I0916 10:23:57.109187   12653 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-191972"
	I0916 10:23:57.109185   12653 addons.go:69] Setting default-storageclass=true in profile "addons-191972"
	I0916 10:23:57.109211   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109225   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:57.109232   12653 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-191972"
	I0916 10:23:57.109216   12653 addons.go:69] Setting cloud-spanner=true in profile "addons-191972"
	I0916 10:23:57.109259   12653 addons.go:69] Setting storage-provisioner=true in profile "addons-191972"
	I0916 10:23:57.109265   12653 addons.go:234] Setting addon cloud-spanner=true in "addons-191972"
	I0916 10:23:57.109274   12653 addons.go:234] Setting addon storage-provisioner=true in "addons-191972"
	I0916 10:23:57.109308   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109323   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109407   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109485   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109507   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109547   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109684   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109757   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109825   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110167   12653 addons.go:69] Setting ingress-dns=true in profile "addons-191972"
	I0916 10:23:57.110372   12653 addons.go:234] Setting addon ingress-dns=true in "addons-191972"
	I0916 10:23:57.110546   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111202   12653 addons.go:69] Setting helm-tiller=true in profile "addons-191972"
	I0916 10:23:57.111255   12653 addons.go:234] Setting addon helm-tiller=true in "addons-191972"
	I0916 10:23:57.111282   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111445   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.111484   12653 addons.go:69] Setting ingress=true in profile "addons-191972"
	I0916 10:23:57.111498   12653 addons.go:234] Setting addon ingress=true in "addons-191972"
	I0916 10:23:57.111527   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111731   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110913   12653 addons.go:69] Setting metrics-server=true in profile "addons-191972"
	I0916 10:23:57.111983   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.111987   12653 addons.go:234] Setting addon metrics-server=true in "addons-191972"
	I0916 10:23:57.112171   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110926   12653 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-191972"
	I0916 10:23:57.113223   12653 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-191972"
	I0916 10:23:57.113258   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.113700   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.115817   12653 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:57.116675   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110938   12653 addons.go:69] Setting registry=true in profile "addons-191972"
	I0916 10:23:57.116963   12653 addons.go:234] Setting addon registry=true in "addons-191972"
	I0916 10:23:57.117093   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110938   12653 addons.go:69] Setting volcano=true in profile "addons-191972"
	I0916 10:23:57.117245   12653 addons.go:234] Setting addon volcano=true in "addons-191972"
	I0916 10:23:57.117313   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110949   12653 addons.go:69] Setting volumesnapshots=true in profile "addons-191972"
	I0916 10:23:57.117350   12653 addons.go:234] Setting addon volumesnapshots=true in "addons-191972"
	I0916 10:23:57.117397   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.117799   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.117919   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.118954   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:57.110924   12653 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-191972"
	I0916 10:23:57.120855   12653 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-191972"
	I0916 10:23:57.121186   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.148826   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.156121   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:57.158094   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:23:57.160078   12653 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:57.160230   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:57.163394   12653 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:57.163405   12653 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:57.163428   12653 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:57.163491   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.163933   12653 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:57.163952   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:23:57.163999   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.166339   12653 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:57.166352   12653 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:57.166505   12653 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:57.166525   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:57.166591   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176509   12653 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:57.176539   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:57.176597   12653 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:57.176613   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:57.176614   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176667   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176871   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.184510   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:57.184923   12653 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:23:57.187620   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:57.187908   12653 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:57.187925   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:23:57.188005   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.190192   12653 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:57.190888   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:57.191984   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:57.192004   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:57.192062   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.192462   12653 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-191972"
	I0916 10:23:57.192519   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.193186   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.195485   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:57.196395   12653 addons.go:234] Setting addon default-storageclass=true in "addons-191972"
	I0916 10:23:57.196441   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.197033   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.200024   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:57.200756   12653 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:57.202388   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:57.202409   12653 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:57.202572   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.204739   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:57.206967   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:57.217725   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:57.217900   12653 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:57.219581   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:57.219714   12653 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:57.219798   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.219620   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:57.220511   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:57.221727   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.235796   12653 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:57.237579   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:57.239326   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:57.239350   12653 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:57.239411   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.239514   12653 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:57.241480   12653 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:57.241502   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:57.241555   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.243883   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.255850   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.256610   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.261965   12653 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 10:23:57.263559   12653 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 10:23:57.265255   12653 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 10:23:57.266412   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.267838   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.268005   12653 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:57.268022   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 10:23:57.268074   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.269050   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.276483   12653 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:57.276507   12653 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:57.276573   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.283025   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.284257   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:23:57.288880   12653 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:57.290776   12653 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:57.292419   12653 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:57.292444   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:57.292510   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.295145   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.295780   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.297628   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.298120   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.300416   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.306147   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.311231   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.314549   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	W0916 10:23:57.324739   12653 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 10:23:57.324769   12653 retry.go:31] will retry after 374.435778ms: ssh: handshake failed: EOF
	W0916 10:23:57.325602   12653 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 10:23:57.325619   12653 retry.go:31] will retry after 150.651165ms: ssh: handshake failed: EOF
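
	(The two W/I pairs above show transient SSH handshake failures being absorbed by a jittered retry: each failed dial is rescheduled after a randomized delay — 150ms, 374ms — so the dozen concurrent clients opened above don't all retry in lockstep. A hedged sketch of that pattern, assuming nothing about minikube's actual retry package beyond what the log prints:)

```go
// retry.go — an illustrative retry-with-jitter helper mirroring the
// "will retry after <randomized delay>" lines emitted by retry.go:31.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter runs op up to attempts times, sleeping a randomized delay
// between tries so concurrent callers don't retry at the same instant.
func retryWithJitter(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithJitter(5, 150*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF") // as seen in the log
		}
		return nil
	})
	fmt.Println("result:", err)
}
```
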
	I0916 10:23:57.330682   12653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:57.629690   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:57.729822   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:57.730227   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:57.742355   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:57.824974   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:57.842831   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:57.842917   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:57.843332   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:57.921972   12653 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:57.922058   12653 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:57.922011   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:57.922034   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:57.922195   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:57.929874   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:57.929901   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:57.941141   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:57.941166   12653 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:23:58.138273   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:58.138369   12653 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:58.222261   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:58.222352   12653 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:58.229572   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:58.229660   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:58.232627   12653 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:58.232698   12653 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:58.322393   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:58.322420   12653 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:58.339998   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:58.435282   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:58.435313   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:58.435591   12653 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.15128486s)
	I0916 10:23:58.435618   12653 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
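
	(The completed command above is the host-record injection: minikube pipes the coredns ConfigMap through sed, inserting a hosts{} stanza — mapping 192.168.49.1 to host.minikube.internal, with fallthrough to the next plugin — ahead of the forward directive, then replaces the ConfigMap. A small Go sketch of the same text transformation, run against a hypothetical Corefile for illustration:)

```go
// corefile_patch.go — a sketch of the edit the sed pipeline performs on the
// CoreDNS Corefile. The sample Corefile below is hypothetical; the inserted
// block and the anchor line match what the logged command targets.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Insert the hosts block immediately before the forward directive,
		// matching the sed address /^        forward . \/etc\/resolv.conf.*/ .
		if strings.HasPrefix(line, "        forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}
```
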
	I0916 10:23:58.436958   12653 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.1062474s)
	I0916 10:23:58.437947   12653 node_ready.go:35] waiting up to 6m0s for node "addons-191972" to be "Ready" ...
	I0916 10:23:58.441471   12653 node_ready.go:49] node "addons-191972" has status "Ready":"True"
	I0916 10:23:58.441502   12653 node_ready.go:38] duration metric: took 3.529013ms for node "addons-191972" to be "Ready" ...
	I0916 10:23:58.441514   12653 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:23:58.442873   12653 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:58.442897   12653 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:58.534045   12653 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:58.540468   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:58.540496   12653 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:58.642810   12653 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:58.642885   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:58.728521   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:58.728554   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:58.840472   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:58.921026   12653 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:58.921059   12653 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:58.936525   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:58.936552   12653 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:58.939212   12653 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-191972" context rescaled to 1 replicas
	I0916 10:23:59.131614   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:59.224079   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:59.224104   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:59.230203   12653 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:59.230238   12653 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:59.423686   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:59.430144   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:59.430176   12653 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:59.433784   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:59.433810   12653 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:59.542608   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:59.542635   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:59.630644   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:59.630734   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:59.840282   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:59.927613   12653 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:59.927705   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:24:00.030859   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:24:00.030936   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:24:00.034479   12653 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:24:00.034549   12653 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:24:00.038488   12653 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-2l862" not found
	I0916 10:24:00.038522   12653 pod_ready.go:82] duration metric: took 1.504385632s for pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace to be "Ready" ...
	E0916 10:24:00.038535   12653 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-2l862" not found
	I0916 10:24:00.038552   12653 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:00.333635   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:24:00.339910   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:24:00.339994   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:24:00.627234   12653 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:24:00.627262   12653 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:24:00.929780   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:24:00.929809   12653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:24:01.128973   12653 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:24:01.129062   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:24:01.334031   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:24:01.334116   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:24:01.525220   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:24:02.022039   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:24:02.022114   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:24:02.136463   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:02.532736   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:24:02.532829   12653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:24:02.738986   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:24:04.426813   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:24:04.426903   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:24:04.456284   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:24:04.624938   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:04.638370   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.008571899s)
	I0916 10:24:04.638414   12653 addons.go:475] Verifying addon ingress=true in "addons-191972"
	I0916 10:24:04.638488   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.908226437s)
	I0916 10:24:04.638570   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.908717103s)
	I0916 10:24:04.638623   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.896188028s)
	I0916 10:24:04.638699   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.81369606s)
	I0916 10:24:04.638718   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.795359026s)
	I0916 10:24:04.638742   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.716592394s)
	I0916 10:24:04.641681   12653 out.go:177] * Verifying ingress addon...
	I0916 10:24:04.644857   12653 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0916 10:24:04.722084   12653 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
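
	(The warning above is an optimistic-concurrency failure: the addon marks one StorageClass default by updating the objects, and a concurrent writer bumped local-path's resourceVersion between the read and the write, so the Update was rejected with a 409. One common workaround — sketched below as an illustration, not what minikube does here, and assuming kubectl is on PATH — is to flip the is-default-class annotation with a merge patch, which carries no resourceVersion and therefore cannot hit this conflict:)

```go
// sc_default.go — a hedged sketch: toggle the standard default-StorageClass
// annotation via `kubectl patch`. A merge patch sends no resourceVersion,
// so unlike an Update it cannot fail with "the object has been modified".
package main

import (
	"fmt"
	"os/exec"
)

func setDefaultClass(class string, def bool) error {
	patch := fmt.Sprintf(
		`{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"%t"}}}`, def)
	out, err := exec.Command("kubectl", "patch", "storageclass", class, "-p", patch).CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl patch: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Demote local-path, then promote "standard", mirroring the intent of
	// the default-storageclass step that failed above.
	if err := setDefaultClass("local-path", false); err != nil {
		fmt.Println(err)
	}
	if err := setDefaultClass("standard", true); err != nil {
		fmt.Println(err)
	}
}
```
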
	I0916 10:24:04.723574   12653 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:24:04.723598   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
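
	(From here on the log is dominated by kapi.go:96 polling: for each addon, minikube lists the pods behind a label selector and re-checks every few hundred milliseconds until they leave Pending, printing one line per pass. An illustrative stand-in for that loop — shelling out to kubectl rather than using minikube's client wiring; the namespace and selector are taken from the lines above:)

```go
// wait_pods.go — a sketch of the "waiting for pod ..., current state: Pending"
// loop: poll pods by label selector until every phase reports Running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func allRunning(ns, selector string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", ns, "get", "pods",
		"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
	if err != nil {
		return false, err
	}
	phases := strings.Fields(string(out))
	if len(phases) == 0 {
		return false, nil // no pods matched the selector yet
	}
	for _, p := range phases {
		if p != "Running" {
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, p)
			return false, nil
		}
	}
	return true, nil
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := allRunning("ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err == nil && ok {
			fmt.Println("all pods Running")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out")
}
```
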
	I0916 10:24:04.841083   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:24:04.932849   12653 addons.go:234] Setting addon gcp-auth=true in "addons-191972"
	I0916 10:24:04.932903   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:24:04.933372   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:24:04.957393   12653 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:24:04.957464   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:24:04.975728   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:24:05.150342   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.650366   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.149809   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.649391   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.834167   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.494119031s)
	I0916 10:24:06.834259   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.993750099s)
	I0916 10:24:06.834355   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.702687859s)
	I0916 10:24:06.834379   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.410662864s)
	I0916 10:24:06.834381   12653 addons.go:475] Verifying addon metrics-server=true in "addons-191972"
	I0916 10:24:06.834394   12653 addons.go:475] Verifying addon registry=true in "addons-191972"
	I0916 10:24:06.834447   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.994082306s)
	I0916 10:24:06.834595   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.500877662s)
	W0916 10:24:06.834635   12653 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:24:06.834660   12653 retry.go:31] will retry after 180.492463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
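
	(The failure above is an ordering race, not a broken manifest: the snapshot CRDs and the VolumeSnapshotClass that instantiates them travel in the same kubectl apply, and the API server's REST mapping for the brand-new CRDs isn't ready when the custom resource is submitted — hence "no matches for kind ... ensure CRDs are installed first". The retry scheduled above after 180ms, and the later apply --force, succeed once the CRDs are registered. A hedged sketch of that detect-and-reapply recovery, assuming kubectl on PATH:)

```go
// apply_retry.go — a sketch of recovering from the CRD/CR ordering race:
// if the first apply fails with "no matches for kind", wait briefly for the
// new CRDs to register and apply the same manifests again.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func applyManifests(files []string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil && strings.Contains(string(out), "no matches for kind") {
		// The CRDs from this same batch were created but not yet mapped;
		// back off and re-apply so the remaining resources land.
		time.Sleep(200 * time.Millisecond)
		out, err = exec.Command("kubectl", args...).CombinedOutput()
	}
	if err != nil {
		return fmt.Errorf("kubectl apply: %v\n%s", err, out)
	}
	return nil
}

func main() {
	if err := applyManifests([]string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}); err != nil {
		fmt.Println(err)
	}
}
```
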
	I0916 10:24:06.834694   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.309367322s)
	I0916 10:24:06.836029   12653 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-191972 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:24:06.836032   12653 out.go:177] * Verifying registry addon...
	I0916 10:24:06.838577   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:24:06.842659   12653 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:24:06.842681   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.016329   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:24:07.122253   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:07.229433   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.346049   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.428384   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.689342475s)
	I0916 10:24:07.428423   12653 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-191972"
	I0916 10:24:07.428557   12653 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.471115449s)
	I0916 10:24:07.430137   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:24:07.430140   12653 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:24:07.432142   12653 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:24:07.433350   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:24:07.433452   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:24:07.433472   12653 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:24:07.446890   12653 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:24:07.446929   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.523198   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:24:07.523247   12653 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:24:07.543809   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:07.543877   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:24:07.627288   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:07.649744   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.842799   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.943700   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.149515   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.343117   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.438263   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.651360   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.739263   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.722876496s)
	I0916 10:24:08.739377   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.111993041s)
	I0916 10:24:08.740565   12653 addons.go:475] Verifying addon gcp-auth=true in "addons-191972"
	I0916 10:24:08.742658   12653 out.go:177] * Verifying gcp-auth addon...
	I0916 10:24:08.744959   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:24:08.752275   12653 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:08.842486   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.937942   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.148485   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.342745   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.444884   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.544117   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:09.649057   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.850158   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.951607   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.149384   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.342403   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.437953   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.648926   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.842555   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.938628   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.149265   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.341824   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.438269   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.544664   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:11.649663   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.842706   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.938382   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.149747   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.341485   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.438115   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.649444   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.842417   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.937808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.149247   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.342184   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.443397   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.544742   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:13.649342   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.842433   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.938156   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.148884   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.342230   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.437378   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.648929   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.841404   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.938373   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.148947   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.342062   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:15.437442   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.544833   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:15.649729   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.875330   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.063181   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.148410   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.342704   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.437759   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.649599   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.842196   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.937322   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.148973   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.342240   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.438331   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.649287   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.842346   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.937786   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.044459   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:18.148462   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.342098   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.438245   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.650618   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.842115   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.937393   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.148210   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.342331   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.437753   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.649206   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.841659   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.937929   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.149095   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.341559   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.437389   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.543697   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:20.649389   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.841724   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.939911   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.148803   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.341867   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.437743   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.649220   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.841636   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.937733   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.148853   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.341623   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.438291   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.544155   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:22.648789   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.842117   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.937569   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.148605   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.342228   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.437946   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.648725   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.848611   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.937702   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.148830   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.341472   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.437746   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.648857   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.841524   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.937579   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.043875   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:25.148986   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.341729   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.438614   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.648859   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.842571   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.937660   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.148067   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.342525   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.442495   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.649368   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.841986   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.937210   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.044290   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:27.148266   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.341925   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.437369   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.648710   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.842271   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.937289   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.149389   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.341712   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.437988   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.649507   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.841935   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.937651   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.148305   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.341758   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.437230   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.544648   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:29.648789   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.842453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.937780   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.149144   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.341971   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.436935   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.648826   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.842241   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.937301   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.148532   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.342364   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.438028   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.649021   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.842529   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.938084   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.044452   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:32.148477   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.342165   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.437629   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.649007   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.841446   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.937583   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.148965   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.341801   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.437144   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.649484   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.842344   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.937348   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.148522   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.342404   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.438126   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.543640   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:34.649404   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.842417   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.937940   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.149191   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.341955   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.437296   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.649499   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.841951   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.937835   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.148878   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.342396   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.437451   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.648935   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.841429   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.937515   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.043652   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:37.148879   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.341650   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.438917   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.648863   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.843665   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.937755   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.148476   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.342129   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.437617   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.648850   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.842096   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.937210   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.044295   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:39.148546   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.342070   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.437434   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.649394   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.850992   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.937068   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.148412   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.342026   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.438818   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.648424   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.842673   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.937959   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.149077   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.341573   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.437823   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.544866   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:41.649385   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.842400   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.942736   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.148726   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.342124   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.438550   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.649404   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.841927   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.937808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.149523   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.341957   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.437318   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.545247   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:43.648618   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.842970   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.938236   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.149170   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.342180   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.437399   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.649533   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.842942   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.937846   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.149581   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.342185   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.437873   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.649109   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.842031   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.937050   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.043865   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:46.149131   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.342272   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.437555   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.649645   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.850195   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.951731   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.044952   12653 pod_ready.go:93] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.044977   12653 pod_ready.go:82] duration metric: took 47.006412913s for pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.044991   12653 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.048830   12653 pod_ready.go:93] pod "etcd-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.048847   12653 pod_ready.go:82] duration metric: took 3.848159ms for pod "etcd-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.048861   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.052536   12653 pod_ready.go:93] pod "kube-apiserver-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.052558   12653 pod_ready.go:82] duration metric: took 3.691187ms for pod "kube-apiserver-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.052566   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.056167   12653 pod_ready.go:93] pod "kube-controller-manager-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.056192   12653 pod_ready.go:82] duration metric: took 3.620465ms for pod "kube-controller-manager-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.056201   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fnr7f" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.060021   12653 pod_ready.go:93] pod "kube-proxy-fnr7f" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.060038   12653 pod_ready.go:82] duration metric: took 3.830746ms for pod "kube-proxy-fnr7f" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.060046   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.149672   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.342533   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.437808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.441161   12653 pod_ready.go:93] pod "kube-scheduler-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.441181   12653 pod_ready.go:82] duration metric: took 381.129532ms for pod "kube-scheduler-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.441188   12653 pod_ready.go:39] duration metric: took 48.999654984s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
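	(The pod_ready.go lines above are iterations of one loop: fetch each system-critical pod and wait for its Ready condition to flip to True. A minimal sketch of that check, assuming a standard client-go clientset; WaitPodReady is an illustrative name, not minikube's actual helper.)

	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// WaitPodReady polls one pod until its Ready condition is True or the
	// timeout expires. Illustrative only; pod_ready.go differs in detail.
	func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					// A pod counts as "Ready" once this condition is True.
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // the log above shows a ~2s poll cadence
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}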
	I0916 10:24:47.441205   12653 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:24:47.441254   12653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:24:47.453909   12653 api_server.go:72] duration metric: took 50.345260117s to wait for apiserver process to appear ...
	I0916 10:24:47.453935   12653 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:24:47.453960   12653 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:24:47.458673   12653 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:24:47.459648   12653 api_server.go:141] control plane version: v1.31.1
	I0916 10:24:47.459673   12653 api_server.go:131] duration metric: took 5.729621ms to wait for apiserver health ...
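	(The healthz wait above is a plain HTTPS GET against https://192.168.49.2:8443/healthz that treats a 200 response with body "ok" as healthy. A rough equivalent follows; CheckHealthz is a hypothetical helper, and skipping TLS verification is an assumption made for brevity -- the real check authenticates with the cluster's certificates.)

	package readiness

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// CheckHealthz GETs the apiserver /healthz endpoint and reports whether it
	// returned 200; a healthy apiserver answers with the body "ok".
	func CheckHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Assumption for the sketch: no certificate verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}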
	I0916 10:24:47.459683   12653 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:24:47.648237   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.648583   12653 system_pods.go:59] 19 kube-system pods found
	I0916 10:24:47.648620   12653 system_pods.go:61] "coredns-7c65d6cfc9-9rccl" [f2ffddc5-3995-4d5a-8059-297b3859f9c5] Running
	I0916 10:24:47.648634   12653 system_pods.go:61] "csi-hostpath-attacher-0" [07b91be6-2f2b-441c-80ee-f832e9ee2e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:24:47.648642   12653 system_pods.go:61] "csi-hostpath-resizer-0" [311c306b-accd-4242-8415-81199a4ce054] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:24:47.648653   12653 system_pods.go:61] "csi-hostpathplugin-qdnbn" [8f5408a2-a7eb-4f76-8ce0-b885fb4b47e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:24:47.648667   12653 system_pods.go:61] "etcd-addons-191972" [81af20b7-9b19-4723-9b92-0ded3d775cd3] Running
	I0916 10:24:47.648673   12653 system_pods.go:61] "kindnet-rxp8k" [02b143c0-bbb4-4f94-8448-9c3c4f248a87] Running
	I0916 10:24:47.648678   12653 system_pods.go:61] "kube-apiserver-addons-191972" [1aabf917-f381-4e69-8524-954958c99b7e] Running
	I0916 10:24:47.648684   12653 system_pods.go:61] "kube-controller-manager-addons-191972" [ee796e67-bd06-4d93-9d20-aabcbb395ba2] Running
	I0916 10:24:47.648690   12653 system_pods.go:61] "kube-ingress-dns-minikube" [0f28fa0b-84f1-4215-aa41-4596ab4cef8b] Running
	I0916 10:24:47.648696   12653 system_pods.go:61] "kube-proxy-fnr7f" [a9e53f94-30ad-4178-b9e2-3ba4354a5adf] Running
	I0916 10:24:47.648700   12653 system_pods.go:61] "kube-scheduler-addons-191972" [32bbb72d-e291-4c84-8709-6066cf10b0cf] Running
	I0916 10:24:47.648709   12653 system_pods.go:61] "metrics-server-84c5f94fbc-s7654" [14e280ea-8ba8-4805-844c-aeff8fb18ce0] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:24:47.648719   12653 system_pods.go:61] "nvidia-device-plugin-daemonset-vpb85" [14ca6c72-b73b-4254-910a-0b876ca73f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:24:47.648732   12653 system_pods.go:61] "registry-66c9cd494c-vsbgv" [bd70fcec-e032-4dbd-902c-a139ac179bbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:24:47.648740   12653 system_pods.go:61] "registry-proxy-6vsnj" [05d6014b-9706-4d7a-a816-dbc7f557cd15] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:24:47.648749   12653 system_pods.go:61] "snapshot-controller-56fcc65765-4g9w6" [ad55cf42-58df-4d10-aae8-7626a86e7e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:47.648760   12653 system_pods.go:61] "snapshot-controller-56fcc65765-htkmc" [70a5c810-f514-4b23-b3a3-7a4cc46c35e2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:47.648766   12653 system_pods.go:61] "storage-provisioner" [ca9a25b7-8324-4ea1-a525-24f8c308baea] Running
	I0916 10:24:47.648777   12653 system_pods.go:61] "tiller-deploy-b48cc5f79-ddkxz" [dfe534c4-9e29-4907-b8cc-1dd12fc52f45] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:24:47.648789   12653 system_pods.go:74] duration metric: took 189.097544ms to wait for pod list to return data ...
	I0916 10:24:47.648801   12653 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:24:47.841018   12653 default_sa.go:45] found service account: "default"
	I0916 10:24:47.841043   12653 default_sa.go:55] duration metric: took 192.233696ms for default service account to be created ...
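	(The default_sa.go wait above resolves once the "default" ServiceAccount exists in the default namespace. A small sketch of that lookup under the same client-go assumption; HasDefaultServiceAccount is an illustrative name.)

	package readiness

	import (
		"context"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// HasDefaultServiceAccount reports whether the "default" ServiceAccount has
	// been created yet, distinguishing "not found" from a real API error.
	func HasDefaultServiceAccount(ctx context.Context, cs kubernetes.Interface) (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil
		}
		if err != nil {
			return false, err
		}
		return true, nil
	}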
	I0916 10:24:47.841053   12653 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:24:47.841394   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.937402   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.049475   12653 system_pods.go:86] 19 kube-system pods found
	I0916 10:24:48.049509   12653 system_pods.go:89] "coredns-7c65d6cfc9-9rccl" [f2ffddc5-3995-4d5a-8059-297b3859f9c5] Running
	I0916 10:24:48.049523   12653 system_pods.go:89] "csi-hostpath-attacher-0" [07b91be6-2f2b-441c-80ee-f832e9ee2e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:24:48.049533   12653 system_pods.go:89] "csi-hostpath-resizer-0" [311c306b-accd-4242-8415-81199a4ce054] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:24:48.049541   12653 system_pods.go:89] "csi-hostpathplugin-qdnbn" [8f5408a2-a7eb-4f76-8ce0-b885fb4b47e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:24:48.049546   12653 system_pods.go:89] "etcd-addons-191972" [81af20b7-9b19-4723-9b92-0ded3d775cd3] Running
	I0916 10:24:48.049550   12653 system_pods.go:89] "kindnet-rxp8k" [02b143c0-bbb4-4f94-8448-9c3c4f248a87] Running
	I0916 10:24:48.049554   12653 system_pods.go:89] "kube-apiserver-addons-191972" [1aabf917-f381-4e69-8524-954958c99b7e] Running
	I0916 10:24:48.049560   12653 system_pods.go:89] "kube-controller-manager-addons-191972" [ee796e67-bd06-4d93-9d20-aabcbb395ba2] Running
	I0916 10:24:48.049569   12653 system_pods.go:89] "kube-ingress-dns-minikube" [0f28fa0b-84f1-4215-aa41-4596ab4cef8b] Running
	I0916 10:24:48.049572   12653 system_pods.go:89] "kube-proxy-fnr7f" [a9e53f94-30ad-4178-b9e2-3ba4354a5adf] Running
	I0916 10:24:48.049576   12653 system_pods.go:89] "kube-scheduler-addons-191972" [32bbb72d-e291-4c84-8709-6066cf10b0cf] Running
	I0916 10:24:48.049579   12653 system_pods.go:89] "metrics-server-84c5f94fbc-s7654" [14e280ea-8ba8-4805-844c-aeff8fb18ce0] Running
	I0916 10:24:48.049587   12653 system_pods.go:89] "nvidia-device-plugin-daemonset-vpb85" [14ca6c72-b73b-4254-910a-0b876ca73f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:24:48.049595   12653 system_pods.go:89] "registry-66c9cd494c-vsbgv" [bd70fcec-e032-4dbd-902c-a139ac179bbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:24:48.049600   12653 system_pods.go:89] "registry-proxy-6vsnj" [05d6014b-9706-4d7a-a816-dbc7f557cd15] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:24:48.049605   12653 system_pods.go:89] "snapshot-controller-56fcc65765-4g9w6" [ad55cf42-58df-4d10-aae8-7626a86e7e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:48.049613   12653 system_pods.go:89] "snapshot-controller-56fcc65765-htkmc" [70a5c810-f514-4b23-b3a3-7a4cc46c35e2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:48.049618   12653 system_pods.go:89] "storage-provisioner" [ca9a25b7-8324-4ea1-a525-24f8c308baea] Running
	I0916 10:24:48.049625   12653 system_pods.go:89] "tiller-deploy-b48cc5f79-ddkxz" [dfe534c4-9e29-4907-b8cc-1dd12fc52f45] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:24:48.049634   12653 system_pods.go:126] duration metric: took 208.573497ms to wait for k8s-apps to be running ...
	I0916 10:24:48.049644   12653 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:24:48.049682   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:24:48.060846   12653 system_svc.go:56] duration metric: took 11.19263ms WaitForService to wait for kubelet
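	(The system_svc.go check above shells out to systemctl and reads only the exit code, since --quiet suppresses output. A one-function sketch; KubeletActive is a hypothetical name, and running the command locally rather than over SSH inside the node is an assumption made to keep the example self-contained.)

	package readiness

	import "os/exec"

	// KubeletActive mirrors `sudo systemctl is-active --quiet kubelet`:
	// exit code 0 means the unit is active, anything else means it is not.
	func KubeletActive() bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}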
	I0916 10:24:48.060871   12653 kubeadm.go:582] duration metric: took 50.952228588s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:24:48.060890   12653 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:24:48.148219   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.242671   12653 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:24:48.242705   12653 node_conditions.go:123] node cpu capacity is 8
	I0916 10:24:48.242718   12653 node_conditions.go:105] duration metric: took 181.823571ms to run NodePressure ...
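	(The two NodePressure figures just logged -- ephemeral storage 304681132Ki and cpu 8 -- come straight from each node's capacity in its status. A sketch of reading them with client-go; PrintNodeCapacity is an illustrative name.)

	package readiness

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// PrintNodeCapacity lists all nodes and prints the same capacity fields the
	// node_conditions.go check verifies: ephemeral-storage and cpu.
	func PrintNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
		}
		return nil
	}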
	I0916 10:24:48.242730   12653 start.go:241] waiting for startup goroutines ...
	I0916 10:24:48.342074   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.437253   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.650425   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.850814   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.937328   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.149694   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.342609   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.438289   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.649584   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.842847   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.936933   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.149348   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.342164   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.438163   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.649197   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.853453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.938034   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.148940   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.341925   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.437207   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.649501   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.841516   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.937843   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.148973   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.341463   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.437548   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.649904   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.842395   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.938876   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.150346   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.342226   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.437852   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.650214   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.841999   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.938041   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.149543   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.342470   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.438196   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.649301   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.842219   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.937405   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.148757   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.342352   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.437453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.649467   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.842884   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.938335   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.149527   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.342461   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.438207   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.649107   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.841744   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.938316   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.150214   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.342941   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.438321   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.650060   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.841776   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.937801   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.148724   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.342609   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.437714   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.648506   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.842214   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.937202   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.149022   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.341924   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.437205   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.649919   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.842721   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.943895   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.148461   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.342965   12653 kapi.go:107] duration metric: took 53.504381408s to wait for kubernetes.io/minikube-addons=registry ...
	I0916 10:25:00.438324   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.649093   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.937839   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.148871   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.436988   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.649359   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.937842   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.149127   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.439235   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.648644   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.937625   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.148437   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.438471   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.649883   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.936881   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.149787   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.438325   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.649405   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.937307   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.148501   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.437162   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.649408   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.937329   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.148922   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.437615   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.648794   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.937817   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.149424   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.437622   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.648805   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.975373   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.148579   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.438130   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.649051   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.938155   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.241812   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.438112   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.649051   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.937597   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.148065   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.438452   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.649615   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.937657   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.150286   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.438138   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.648515   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.938254   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.148855   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.437045   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.648984   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.937480   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.149222   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.437879   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.648073   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.937714   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.148744   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.437856   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.648905   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.937125   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.149947   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.438534   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.649415   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.938563   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.148929   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.437971   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.649574   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.938374   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.149584   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.437332   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.649230   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.939095   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.148655   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.437781   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.648991   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.937887   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.149216   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.437411   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.649222   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.937654   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.149853   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.438168   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.648811   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.948409   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.172608   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.655855   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.656415   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.973917   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.149178   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.438576   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.649097   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.939034   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.149425   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.438124   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.650285   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.938421   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.148909   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.441944   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.649383   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.938850   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.149722   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.437832   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.649648   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.938500   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.149259   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.437884   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.649790   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.937641   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.149739   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:27.438223   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.648728   12653 kapi.go:107] duration metric: took 1m23.003864669s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:25:27.938153   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.438461   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.939228   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.438060   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.937952   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.438284   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.938383   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.437781   12653 kapi.go:107] duration metric: took 1m24.004430138s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:26:53.748019   12653 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:26:53.748042   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:54.248033   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:54.748085   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:55.248231   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:55.748800   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:56.251601   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:56.748202   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:57.248415   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:57.748866   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:58.248439   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:58.748615   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:59.248797   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:59.748674   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:00.248751   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:00.748977   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:01.247802   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:01.749050   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:02.247827   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:02.751439   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:03.248607   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:03.748774   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:04.248993   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:04.748179   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:05.248453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:05.748269   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:06.248843   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:06.749191   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:07.248224   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:07.748003   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:08.248208   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:08.748339   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:09.248558   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:09.748890   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:10.247853   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:10.748462   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:11.248698   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:11.748605   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:12.249209   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:12.747956   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:13.247977   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:13.748012   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:14.248098   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:14.748444   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:15.248890   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:15.748752   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:16.248803   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:16.749124   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:17.248063   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:17.747865   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:18.247931   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:18.748279   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:19.248473   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:19.748289   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:20.248375   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:20.748484   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:21.248848   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:21.748816   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:22.247827   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:22.748462   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:23.248760   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:23.749167   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:24.248424   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:24.748963   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:25.248350   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:25.748222   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:26.248413   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:26.748789   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:27.247908   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:27.747837   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:28.248226   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:28.748371   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:29.249618   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:29.748597   12653 kapi.go:107] duration metric: took 3m21.003635946s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:27:29.750701   12653 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-191972 cluster.
	I0916 10:27:29.752412   12653 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:27:29.754028   12653 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:27:29.756074   12653 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner-rancher, volcano, helm-tiller, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0916 10:27:29.757930   12653 addons.go:510] duration metric: took 3m32.649258168s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner-rancher volcano helm-tiller metrics-server inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0916 10:27:29.758012   12653 start.go:246] waiting for cluster config update ...
	I0916 10:27:29.758039   12653 start.go:255] writing updated cluster config ...
	I0916 10:27:29.758383   12653 ssh_runner.go:195] Run: rm -f paused
	I0916 10:27:29.765351   12653 out.go:177] * Done! kubectl is now configured to use "addons-191972" cluster and "default" namespace by default
	E0916 10:27:29.767004   12653 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
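
==> sketch: label-selector pod wait (Go) <==
The kapi.go lines above poll the API server roughly every 500ms for pods matching a label selector, then log a duration metric once the pods leave Pending. A minimal client-go sketch of that pattern follows; it is illustrative only (not minikube's actual code), and the kubeconfig path, namespace, and selector are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until a pod matching selector is Running, mirroring the
// "waiting for pod ... current state: Pending" / "duration metric: took ..."
// pairs in the log above.
func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	for time.Since(start) < timeout {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
			fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
			return nil
		}
		fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
		time.Sleep(500 * time.Millisecond) // the log above polls at ~500ms intervals
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = waitForPods(context.Background(), cs, "kube-system",
		"kubernetes.io/minikube-addons=csi-hostpath-driver", 10*time.Minute)
}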
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	cfade64badb92       db2fc13d44d50       11 minutes ago      Running             gcp-auth                                 0                   99d0fe27850b3       gcp-auth-89d5ffd79-6r2td
	df81f1fc28725       a876393c9504b       12 minutes ago      Running             admission                                0                   0aa4b1d0acb5a       volcano-admission-77d7d48b68-rcfsk
	9dd4a83ba6d70       6041e92ec449f       12 minutes ago      Running             volcano-scheduler                        1                   9564c1c96bcee       volcano-scheduler-576bc46687-jtz7f
	72101e37ab665       738351fd438f0       13 minutes ago      Running             csi-snapshotter                          0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	da8f6a34306e1       931dbfd16f87c       13 minutes ago      Running             csi-provisioner                          0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	1649420a66573       e899260153aed       13 minutes ago      Running             liveness-probe                           0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	e0e474b6d95e5       e255e073c508c       13 minutes ago      Running             hostpath                                 0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	d5fc898fd874b       a80c8fd6e5229       13 minutes ago      Running             controller                               0                   30db636a12234       ingress-nginx-controller-bc57996ff-lpb7q
	06d43e898075b       88ef14a257f42       13 minutes ago      Running             node-driver-registrar                    0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	39c5183f27011       ce263a8653f9c       13 minutes ago      Exited              patch                                    0                   589d98ccee909       ingress-nginx-admission-patch-8f8nz
	a8bb0086c52b5       6041e92ec449f       13 minutes ago      Exited              volcano-scheduler                        0                   9564c1c96bcee       volcano-scheduler-576bc46687-jtz7f
	ddf31d8b68bc1       a876393c9504b       13 minutes ago      Exited              main                                     0                   b49978f431ab4       volcano-admission-init-57gk4
	06cf11b7a83f9       ce263a8653f9c       13 minutes ago      Exited              create                                   0                   6301c91177942       ingress-nginx-admission-create-5rjsx
	1cd468b4437bd       a1ed5895ba635       13 minutes ago      Running             csi-external-health-monitor-controller   0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	79266075c79ff       59cbb42146a37       13 minutes ago      Running             csi-attacher                             0                   a4c401b363464       csi-hostpath-attacher-0
	c65d9de60c2d0       aa61ee9c70bc4       13 minutes ago      Running             volume-snapshot-controller               0                   dba5883c9dc9b       snapshot-controller-56fcc65765-4g9w6
	0c025c1b7dd4c       19a639eda60f0       13 minutes ago      Running             csi-resizer                              0                   176615116e8de       csi-hostpath-resizer-0
	c7d7b6bb58927       96e410111f023       13 minutes ago      Running             volcano-controllers                      0                   84cb34271a61b       volcano-controllers-56675bb4d5-hdpdb
	6819af68287c4       aa61ee9c70bc4       13 minutes ago      Running             volume-snapshot-controller               0                   bb404cbffba4e       snapshot-controller-56fcc65765-htkmc
	576d6c9483015       48d9cfaaf3904       14 minutes ago      Exited              metrics-server                           0                   debbe4f662687       metrics-server-84c5f94fbc-s7654
	3c2ba113f3a92       c69fa2e9cbf5f       14 minutes ago      Running             coredns                                  0                   e557eec597dbb       coredns-7c65d6cfc9-9rccl
	74825d98cba88       e16d1e3a10667       14 minutes ago      Running             local-path-provisioner                   0                   1e611781a41cb       local-path-provisioner-86d989889c-w6mf9
	dfe8c0b03e5c3       30dd67412fdea       14 minutes ago      Running             minikube-ingress-dns                     0                   6682d7fdc0949       kube-ingress-dns-minikube
	62a4b8c25074d       6e38f40d628db       14 minutes ago      Running             storage-provisioner                      0                   54247c11bac23       storage-provisioner
	4c4482bfa98cf       12968670680f4       14 minutes ago      Running             kindnet-cni                              0                   48c4106711b6e       kindnet-rxp8k
	d9d3353287790       60c005f310ff3       14 minutes ago      Running             kube-proxy                               0                   b70e27ed4bc15       kube-proxy-fnr7f
	6e4dbd39a8ef5       175ffd71cce3d       15 minutes ago      Running             kube-controller-manager                  0                   f593f7267aeda       kube-controller-manager-addons-191972
	c76b948fbd083       6bab7719df100       15 minutes ago      Running             kube-apiserver                           0                   a7eb33c199dbc       kube-apiserver-addons-191972
	0539bdd901d4a       9aa1fad941575       15 minutes ago      Running             kube-scheduler                           0                   3aba8d618e3fa       kube-scheduler-addons-191972
	92c65a04535dd       2e96e5913fc06       15 minutes ago      Running             etcd                                     0                   84fc0865b25fe       etcd-addons-191972
	
	
	==> containerd <==
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.020766147Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.072500899Z" level=info msg="TearDown network for sandbox \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\" successfully"
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.072542928Z" level=info msg="StopPodSandbox for \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\" returns successfully"
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.555396151Z" level=info msg="RemoveContainer for \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\""
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.564554463Z" level=info msg="RemoveContainer for \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\" returns successfully"
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.565133715Z" level=error msg="ContainerStatus for \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\": not found"
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.722950975Z" level=info msg="StopPodSandbox for \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\""
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.735667444Z" level=info msg="TearDown network for sandbox \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\" successfully"
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.735697631Z" level=info msg="StopPodSandbox for \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\" returns successfully"
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.736003533Z" level=info msg="RemovePodSandbox for \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\""
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.736041465Z" level=info msg="Forcibly stopping sandbox \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\""
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.743227381Z" level=info msg="TearDown network for sandbox \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\" successfully"
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.747713672Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.747853738Z" level=info msg="RemovePodSandbox \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\" returns successfully"
	Sep 16 10:38:49 addons-191972 containerd[858]: time="2024-09-16T10:38:49.875612287Z" level=info msg="StopContainer for \"576d6c948301581ae3690232578354f8473469b1d1154b30f446708f2ba3e7fe\" with timeout 30 (s)"
	Sep 16 10:38:49 addons-191972 containerd[858]: time="2024-09-16T10:38:49.877558974Z" level=info msg="Stop container \"576d6c948301581ae3690232578354f8473469b1d1154b30f446708f2ba3e7fe\" with signal terminated"
	Sep 16 10:38:51 addons-191972 containerd[858]: time="2024-09-16T10:38:51.022375388Z" level=info msg="shim disconnected" id=576d6c948301581ae3690232578354f8473469b1d1154b30f446708f2ba3e7fe namespace=k8s.io
	Sep 16 10:38:51 addons-191972 containerd[858]: time="2024-09-16T10:38:51.022444142Z" level=warning msg="cleaning up after shim disconnected" id=576d6c948301581ae3690232578354f8473469b1d1154b30f446708f2ba3e7fe namespace=k8s.io
	Sep 16 10:38:51 addons-191972 containerd[858]: time="2024-09-16T10:38:51.022457129Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:38:51 addons-191972 containerd[858]: time="2024-09-16T10:38:51.038406437Z" level=info msg="StopContainer for \"576d6c948301581ae3690232578354f8473469b1d1154b30f446708f2ba3e7fe\" returns successfully"
	Sep 16 10:38:51 addons-191972 containerd[858]: time="2024-09-16T10:38:51.038949166Z" level=info msg="StopPodSandbox for \"debbe4f662687a79c092424e2ea577d8c1c14be643658a6679e9278bd25b5fc9\""
	Sep 16 10:38:51 addons-191972 containerd[858]: time="2024-09-16T10:38:51.039010439Z" level=info msg="Container to stop \"576d6c948301581ae3690232578354f8473469b1d1154b30f446708f2ba3e7fe\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 16 10:38:51 addons-191972 containerd[858]: time="2024-09-16T10:38:51.064502887Z" level=info msg="shim disconnected" id=debbe4f662687a79c092424e2ea577d8c1c14be643658a6679e9278bd25b5fc9 namespace=k8s.io
	Sep 16 10:38:51 addons-191972 containerd[858]: time="2024-09-16T10:38:51.064704869Z" level=warning msg="cleaning up after shim disconnected" id=debbe4f662687a79c092424e2ea577d8c1c14be643658a6679e9278bd25b5fc9 namespace=k8s.io
	Sep 16 10:38:51 addons-191972 containerd[858]: time="2024-09-16T10:38:51.064727001Z" level=info msg="cleaning up dead shim" namespace=k8s.io
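
==> sketch: graceful container stop (Go) <==
The containerd entries above record the standard stop sequence for the metrics-server container: "StopContainer ... with timeout 30", a SIGTERM ("signal terminated"), the shim disconnecting once the process exits, then the pod sandbox teardown. A generic Go sketch of that SIGTERM-then-SIGKILL escalation follows; it illustrates the grace-period pattern under stated assumptions and is not containerd's implementation.

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopGracefully sends SIGTERM, waits up to timeout, then escalates to
// SIGKILL -- the same grace-period model the CRI StopContainer call uses.
func stopGracefully(cmd *exec.Cmd, timeout time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period (metrics-server took ~1.1s above)
	case <-time.After(timeout):
		_ = cmd.Process.Kill() // grace period expired: SIGKILL
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopGracefully(cmd, 30*time.Second)) // 30s, matching the log's timeout
}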
	
	
	==> coredns [3c2ba113f3a928b6de94c4ca0bf607534ff798f3d85ffd2a7685ed6dacc00744] <==
	[INFO] 10.244.0.3:34722 - 16813 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126799s
	[INFO] 10.244.0.3:47807 - 19593 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078163s
	[INFO] 10.244.0.3:47807 - 48005 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012131s
	[INFO] 10.244.0.3:52137 - 389 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004304691s
	[INFO] 10.244.0.3:52137 - 40577 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004777432s
	[INFO] 10.244.0.3:37044 - 23366 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003875752s
	[INFO] 10.244.0.3:37044 - 14153 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004520489s
	[INFO] 10.244.0.3:37775 - 29429 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003806717s
	[INFO] 10.244.0.3:37775 - 41674 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003872738s
	[INFO] 10.244.0.3:58704 - 7476 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090446s
	[INFO] 10.244.0.3:58704 - 1849 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000134094s
	[INFO] 10.244.0.25:38825 - 37363 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216144s
	[INFO] 10.244.0.25:38931 - 39307 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000245831s
	[INFO] 10.244.0.25:50024 - 16483 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164924s
	[INFO] 10.244.0.25:42236 - 32299 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000196632s
	[INFO] 10.244.0.25:49331 - 38072 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114124s
	[INFO] 10.244.0.25:36861 - 61813 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164666s
	[INFO] 10.244.0.25:33081 - 5019 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.00927584s
	[INFO] 10.244.0.25:32825 - 10257 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.009718235s
	[INFO] 10.244.0.25:50215 - 44243 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007980557s
	[INFO] 10.244.0.25:46089 - 36172 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008374403s
	[INFO] 10.244.0.25:60708 - 60516 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00523636s
	[INFO] 10.244.0.25:53932 - 3930 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005436837s
	[INFO] 10.244.0.25:33968 - 30856 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002295196s
	[INFO] 10.244.0.25:51453 - 49493 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002387298s
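
==> sketch: ndots search-path expansion (Go) <==
The NXDOMAIN cascade above is ordinary Kubernetes DNS behaviour: with the default ndots:5, a name like storage.googleapis.com (two dots) is first tried against every resolv.conf search suffix, and only the final bare query returns NOERROR. A tiny Go sketch of the candidate list follows; the search suffixes are read off the queries in the log, and the pod's actual resolv.conf contents are an assumption.

package main

import "fmt"

func main() {
	name := "storage.googleapis.com" // fewer than 5 dots, so the search list applies first
	search := []string{
		"gcp-auth.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"europe-west2-a.c.k8s-minikube.internal",
		"c.k8s-minikube.internal",
		"google.internal",
	}
	for _, suffix := range search {
		fmt.Println(name + "." + suffix) // each candidate NXDOMAINs in the log above
	}
	fmt.Println(name) // the bare name finally resolves NOERROR
}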
	
	
	==> describe nodes <==
	Name:               addons-191972
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-191972
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-191972
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-191972
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-191972"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-191972
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:38:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:37:58 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:37:58 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:37:58 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:37:58 +0000   Mon, 16 Sep 2024 10:23:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-191972
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 0263fbb37d3545b09ff38a7b68907e4c
	  System UUID:                45c87f39-d597-4b0c-a097-439ebdb945ff
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (20 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-89d5ffd79-6r2td                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-lpb7q    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-9rccl                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpathplugin-qdnbn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-addons-191972                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-rxp8k                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-191972                250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-191972       200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-fnr7f                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-191972                100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 snapshot-controller-56fcc65765-4g9w6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-56fcc65765-htkmc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-w6mf9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  volcano-system              volcano-admission-77d7d48b68-rcfsk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  volcano-system              volcano-controllers-56675bb4d5-hdpdb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  volcano-system              volcano-scheduler-576bc46687-jtz7f          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 14m   kube-proxy       
	  Normal   Starting                 15m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  15m   kubelet          Node addons-191972 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m   kubelet          Node addons-191972 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m   kubelet          Node addons-191972 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m   node-controller  Node addons-191972 event: Registered Node addons-191972 in Controller
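
==> check: allocated CPU arithmetic (Go) <==
The "Allocated resources" summary above follows directly from the pod table: the non-zero CPU requests sum to 950m, which on this 8-CPU node is the 11% shown (the percentage is truncated, not rounded). A quick verification:

package main

import "fmt"

func main() {
	// Milli-CPU requests copied from the non-terminated pod table above.
	requests := map[string]int{
		"ingress-nginx-controller": 100,
		"coredns":                  100,
		"etcd":                     100,
		"kindnet":                  100,
		"kube-apiserver":           250,
		"kube-controller-manager":  200,
		"kube-scheduler":           100,
	}
	total := 0
	for _, m := range requests {
		total += m
	}
	// 950m of 8000m -> 11% by integer truncation, matching the summary.
	fmt.Printf("cpu requests: %dm of 8000m (%d%%)\n", total, total*100/8000)
}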
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [92c65a04535ddef6879f2eb4260843c6961d1fb2395f595b3a5665263c562002] <==
	{"level":"warn","ts":"2024-09-16T10:24:16.060589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.178284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:24:16.060680Z","caller":"traceutil/trace.go:171","msg":"trace[2127996318] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:986; }","duration":"125.313412ms","start":"2024-09-16T10:24:15.935346Z","end":"2024-09-16T10:24:16.060659Z","steps":["trace[2127996318] 'range keys from in-memory index tree'  (duration: 125.097316ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:07.796336Z","caller":"traceutil/trace.go:171","msg":"trace[28147226] transaction","detail":"{read_only:false; response_revision:1242; number_of_response:1; }","duration":"128.826483ms","start":"2024-09-16T10:25:07.667485Z","end":"2024-09-16T10:25:07.796311Z","steps":["trace[28147226] 'process raft request'  (duration: 41.106171ms)","trace[28147226] 'compare'  (duration: 87.53434ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.424319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.488522ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031931970271159 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" mod_revision:812 > success:<request_put:<key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" value_size:4029 >> failure:<request_range:<key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T10:25:21.424401Z","caller":"traceutil/trace.go:171","msg":"trace[1168470588] linearizableReadLoop","detail":"{readStateIndex:1335; appliedIndex:1334; }","duration":"177.395065ms","start":"2024-09-16T10:25:21.246995Z","end":"2024-09-16T10:25:21.424390Z","steps":["trace[1168470588] 'read index received'  (duration: 48.427907ms)","trace[1168470588] 'applied index is now lower than readState.Index'  (duration: 128.965162ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.424444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.446761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.424466Z","caller":"traceutil/trace.go:171","msg":"trace[1171179904] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1302; }","duration":"177.469291ms","start":"2024-09-16T10:25:21.246991Z","end":"2024-09-16T10:25:21.424460Z","steps":["trace[1171179904] 'agreement among raft nodes before linearized reading'  (duration: 177.429463ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:21.424486Z","caller":"traceutil/trace.go:171","msg":"trace[1930200040] transaction","detail":"{read_only:false; response_revision:1302; number_of_response:1; }","duration":"247.357795ms","start":"2024-09-16T10:25:21.177107Z","end":"2024-09-16T10:25:21.424464Z","steps":["trace[1930200040] 'process raft request'  (duration: 118.297085ms)","trace[1930200040] 'compare'  (duration: 128.26971ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.652910Z","caller":"traceutil/trace.go:171","msg":"trace[1856019889] linearizableReadLoop","detail":"{readStateIndex:1338; appliedIndex:1335; }","duration":"218.326846ms","start":"2024-09-16T10:25:21.434567Z","end":"2024-09-16T10:25:21.652894Z","steps":["trace[1856019889] 'read index received'  (duration: 55.93458ms)","trace[1856019889] 'applied index is now lower than readState.Index'  (duration: 162.391571ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.652969Z","caller":"traceutil/trace.go:171","msg":"trace[1279722024] transaction","detail":"{read_only:false; response_revision:1305; number_of_response:1; }","duration":"224.683287ms","start":"2024-09-16T10:25:21.428268Z","end":"2024-09-16T10:25:21.652951Z","steps":["trace[1279722024] 'process raft request'  (duration: 224.540452ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:21.653003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.415614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.653027Z","caller":"traceutil/trace.go:171","msg":"trace[1008371896] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1305; }","duration":"218.457307ms","start":"2024-09-16T10:25:21.434563Z","end":"2024-09-16T10:25:21.653020Z","steps":["trace[1008371896] 'agreement among raft nodes before linearized reading'  (duration: 218.392253ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:21.652921Z","caller":"traceutil/trace.go:171","msg":"trace[1132385399] transaction","detail":"{read_only:false; response_revision:1304; number_of_response:1; }","duration":"225.049342ms","start":"2024-09-16T10:25:21.427850Z","end":"2024-09-16T10:25:21.652899Z","steps":["trace[1132385399] 'process raft request'  (duration: 131.625555ms)","trace[1132385399] 'compare'  (duration: 93.227933ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.868227Z","caller":"traceutil/trace.go:171","msg":"trace[1246984751] linearizableReadLoop","detail":"{readStateIndex:1339; appliedIndex:1338; }","duration":"139.924393ms","start":"2024-09-16T10:25:21.728284Z","end":"2024-09-16T10:25:21.868208Z","steps":["trace[1246984751] 'read index received'  (duration: 63.202511ms)","trace[1246984751] 'applied index is now lower than readState.Index'  (duration: 76.72121ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.868259Z","caller":"traceutil/trace.go:171","msg":"trace[501466804] transaction","detail":"{read_only:false; response_revision:1306; number_of_response:1; }","duration":"210.400699ms","start":"2024-09-16T10:25:21.657832Z","end":"2024-09-16T10:25:21.868233Z","steps":["trace[501466804] 'process raft request'  (duration: 133.673421ms)","trace[501466804] 'compare'  (duration: 76.618072ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.868373Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.878283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.868410Z","caller":"traceutil/trace.go:171","msg":"trace[1169815467] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1306; }","duration":"121.931335ms","start":"2024-09-16T10:25:21.746471Z","end":"2024-09-16T10:25:21.868402Z","steps":["trace[1169815467] 'agreement among raft nodes before linearized reading'  (duration: 121.861476ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:21.868538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.236255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-16T10:25:21.868579Z","caller":"traceutil/trace.go:171","msg":"trace[344111638] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1306; }","duration":"140.292497ms","start":"2024-09-16T10:25:21.728276Z","end":"2024-09-16T10:25:21.868569Z","steps":["trace[344111638] 'agreement among raft nodes before linearized reading'  (duration: 140.016451ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:33:47.645977Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1761}
	{"level":"info","ts":"2024-09-16T10:33:47.672836Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1761,"took":"26.323299ms","hash":3150463749,"current-db-size-bytes":9527296,"current-db-size":"9.5 MB","current-db-size-in-use-bytes":5414912,"current-db-size-in-use":"5.4 MB"}
	{"level":"info","ts":"2024-09-16T10:33:47.672899Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3150463749,"revision":1761,"compact-revision":-1}
	{"level":"info","ts":"2024-09-16T10:38:47.650538Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2338}
	{"level":"info","ts":"2024-09-16T10:38:47.668882Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":2338,"took":"17.740947ms","hash":2930144945,"current-db-size-bytes":9527296,"current-db-size":"9.5 MB","current-db-size-in-use-bytes":3915776,"current-db-size-in-use":"3.9 MB"}
	{"level":"info","ts":"2024-09-16T10:38:47.668932Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2930144945,"revision":2338,"compact-revision":1761}
	
	
	==> gcp-auth [cfade64badb92dacf9d0c56d24c0fb7e95088f5abf7a814ef4801971e4b26216] <==
	2024/09/16 10:27:29 GCP Auth Webhook started!
	2024/09/16 10:32:45 Ready to marshal response ...
	2024/09/16 10:32:45 Ready to write response ...
	2024/09/16 10:32:45 Ready to marshal response ...
	2024/09/16 10:32:45 Ready to write response ...
	2024/09/16 10:32:45 Ready to marshal response ...
	2024/09/16 10:32:45 Ready to write response ...
	
	
	==> kernel <==
	 10:38:51 up 21 min,  0 users,  load average: 0.72, 0.44, 0.37
	Linux addons-191972 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [4c4482bfa98cf1024c4b123130c5a320a891204919b9a1459b6f3269e1e7d29d] <==
	I0916 10:36:49.442901       1 main.go:299] handling current node
	I0916 10:36:59.441892       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:36:59.441924       1 main.go:299] handling current node
	I0916 10:37:09.442842       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:37:09.442902       1 main.go:299] handling current node
	I0916 10:37:19.441102       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:37:19.441140       1 main.go:299] handling current node
	I0916 10:37:29.444620       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:37:29.444680       1 main.go:299] handling current node
	I0916 10:37:39.441366       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:37:39.441408       1 main.go:299] handling current node
	I0916 10:37:49.448997       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:37:49.449030       1 main.go:299] handling current node
	I0916 10:37:59.441595       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:37:59.441635       1 main.go:299] handling current node
	I0916 10:38:09.443872       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:38:09.443917       1 main.go:299] handling current node
	I0916 10:38:19.448505       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:38:19.448544       1 main.go:299] handling current node
	I0916 10:38:29.441704       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:38:29.441735       1 main.go:299] handling current node
	I0916 10:38:39.448772       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:38:39.448805       1 main.go:299] handling current node
	I0916 10:38:49.449580       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:38:49.449611       1 main.go:299] handling current node
	
	
	==> kube-apiserver [c76b948fbd083e0e5229c3ac96548e67224afd5a037343a2b118da9b9ae5ad3a] <==
	W0916 10:26:15.413935       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:16.459096       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:17.509475       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:18.532761       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:19.545400       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:20.553347       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:21.640741       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:22.735942       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:24.007851       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:25.084707       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:26.137166       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:27.215912       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:28.269709       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:29.285978       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:30.385745       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:31.389520       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:53.671732       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:26:53.671804       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	W0916 10:27:11.712823       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:27:11.712858       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	W0916 10:27:11.785537       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:27:11.785576       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	I0916 10:32:45.560480       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.245.36"}
	I0916 10:33:06.754025       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:33:07.773034       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
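
==> sketch: webhook failurePolicy (Go) <==
The two webhook failures above behave differently for a reason: volcano's mutatequeue.volcano.sh webhook "fails closed" (requests are rejected while it is unreachable), while gcp-auth-mutate.k8s.io "fails open" (requests are admitted anyway). That distinction maps to the failurePolicy field on the webhook configuration. The field and constants below are the stock k8s.io/api/admissionregistration/v1 types; pairing the names to the policies is inferred from the log, not from the cluster's actual manifests.

package main

import (
	"fmt"

	admissionv1 "k8s.io/api/admissionregistration/v1"
)

func main() {
	failClosed := admissionv1.Fail // unreachable webhook => request rejected
	failOpen := admissionv1.Ignore // unreachable webhook => request admitted

	webhooks := []admissionv1.MutatingWebhook{
		{Name: "mutatequeue.volcano.sh", FailurePolicy: &failClosed},
		{Name: "gcp-auth-mutate.k8s.io", FailurePolicy: &failOpen},
	}
	for _, w := range webhooks {
		fmt.Printf("%s: failurePolicy=%s\n", w.Name, *w.FailurePolicy)
	}
}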
	
	
	==> kube-controller-manager [6e4dbd39a8ef56c5a753071ab0489111fcbcaac9f7cbe3b4fdf88030aa41c77b] <==
	I0916 10:33:26.294136       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0916 10:33:26.294178       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:26.604903       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0916 10:33:26.604950       1 shared_informer.go:320] Caches are synced for garbage collector
	W0916 10:33:28.016022       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:28.016059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:33:43.495209       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:43.495252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:34:13.882297       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="6.965µs"
	W0916 10:34:32.902333       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:34:32.902376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:35:11.270373       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:35:11.270415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:35:54.708226       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:35:54.708272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:36:25.735577       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:36:25.735622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:36:56.645729       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:36:56.645783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:37:39.901634       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:37:39.901675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:37:58.599537       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-191972"
	W0916 10:38:33.684006       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:38:33.684058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:38:49.860701       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="21.172µs"
	
	
	==> kube-proxy [d9d335328779062c055353442bb9ca0c1e2fef63bc1c598650e6ea25604013a5] <==
	I0916 10:23:59.129562       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:59.824945       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:23:59.825067       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:24:00.037013       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:24:00.040602       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:24:00.135054       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:24:00.135450       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:24:00.135471       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:24:00.237323       1 config.go:199] "Starting service config controller"
	I0916 10:24:00.237372       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:24:00.237410       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:24:00.237416       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:24:00.237471       1 config.go:328] "Starting node config controller"
	I0916 10:24:00.237491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:24:00.337642       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:24:00.337724       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:24:00.337829       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0539bdd901d4af068b2160b27df45018e72113a7a75c6a082ae7e2f64f3f908b] <==
	W0916 10:23:49.138663       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0916 10:23:49.138662       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:49.138689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:49.138696       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0916 10:23:49.138760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0916 10:23:49.138769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:23:49.138774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:49.139877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:49.139916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.064082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:50.064133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.118512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:50.118558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.132045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:50.132096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.175403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:50.175438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.199805       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:50.199848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.241540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:50.241599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:23:50.633994       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976546    1565 reconciler_common.go:288] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-run\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.978118    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b2176c-9dcb-4741-bd18-81ab2a2303f2-kube-api-access-5jwxh" (OuterVolumeSpecName: "kube-api-access-5jwxh") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "kube-api-access-5jwxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.076713    1565 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5jwxh\" (UniqueName: \"kubernetes.io/projected/62b2176c-9dcb-4741-bd18-81ab2a2303f2-kube-api-access-5jwxh\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.076783    1565 reconciler_common.go:288] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-debugfs\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.076797    1565 reconciler_common.go:288] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-host\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.398404    1565 scope.go:117] "RemoveContainer" containerID="85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09"
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.474491    1565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62b2176c-9dcb-4741-bd18-81ab2a2303f2" path="/var/lib/kubelet/pods/62b2176c-9dcb-4741-bd18-81ab2a2303f2/volumes"
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.233694    1565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fvwn\" (UniqueName: \"kubernetes.io/projected/dfe534c4-9e29-4907-b8cc-1dd12fc52f45-kube-api-access-4fvwn\") pod \"dfe534c4-9e29-4907-b8cc-1dd12fc52f45\" (UID: \"dfe534c4-9e29-4907-b8cc-1dd12fc52f45\") "
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.236128    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe534c4-9e29-4907-b8cc-1dd12fc52f45-kube-api-access-4fvwn" (OuterVolumeSpecName: "kube-api-access-4fvwn") pod "dfe534c4-9e29-4907-b8cc-1dd12fc52f45" (UID: "dfe534c4-9e29-4907-b8cc-1dd12fc52f45"). InnerVolumeSpecName "kube-api-access-4fvwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.334770    1565 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4fvwn\" (UniqueName: \"kubernetes.io/projected/dfe534c4-9e29-4907-b8cc-1dd12fc52f45-kube-api-access-4fvwn\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.553835    1565 scope.go:117] "RemoveContainer" containerID="89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f"
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.564810    1565 scope.go:117] "RemoveContainer" containerID="89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f"
	Sep 16 10:34:14 addons-191972 kubelet[1565]: E0916 10:34:14.565324    1565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\": not found" containerID="89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f"
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.565368    1565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f"} err="failed to get container status \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\": rpc error: code = NotFound desc = an error occurred when try to find container \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\": not found"
	Sep 16 10:34:15 addons-191972 kubelet[1565]: I0916 10:34:15.475017    1565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe534c4-9e29-4907-b8cc-1dd12fc52f45" path="/var/lib/kubelet/pods/dfe534c4-9e29-4907-b8cc-1dd12fc52f45/volumes"
	Sep 16 10:38:51 addons-191972 kubelet[1565]: I0916 10:38:51.186218    1565 scope.go:117] "RemoveContainer" containerID="576d6c948301581ae3690232578354f8473469b1d1154b30f446708f2ba3e7fe"
	Sep 16 10:38:51 addons-191972 kubelet[1565]: I0916 10:38:51.194282    1565 scope.go:117] "RemoveContainer" containerID="576d6c948301581ae3690232578354f8473469b1d1154b30f446708f2ba3e7fe"
	Sep 16 10:38:51 addons-191972 kubelet[1565]: E0916 10:38:51.194718    1565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"576d6c948301581ae3690232578354f8473469b1d1154b30f446708f2ba3e7fe\": not found" containerID="576d6c948301581ae3690232578354f8473469b1d1154b30f446708f2ba3e7fe"
	Sep 16 10:38:51 addons-191972 kubelet[1565]: I0916 10:38:51.194758    1565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"576d6c948301581ae3690232578354f8473469b1d1154b30f446708f2ba3e7fe"} err="failed to get container status \"576d6c948301581ae3690232578354f8473469b1d1154b30f446708f2ba3e7fe\": rpc error: code = NotFound desc = an error occurred when try to find container \"576d6c948301581ae3690232578354f8473469b1d1154b30f446708f2ba3e7fe\": not found"
	Sep 16 10:38:51 addons-191972 kubelet[1565]: I0916 10:38:51.327277    1565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxhjs\" (UniqueName: \"kubernetes.io/projected/14e280ea-8ba8-4805-844c-aeff8fb18ce0-kube-api-access-sxhjs\") pod \"14e280ea-8ba8-4805-844c-aeff8fb18ce0\" (UID: \"14e280ea-8ba8-4805-844c-aeff8fb18ce0\") "
	Sep 16 10:38:51 addons-191972 kubelet[1565]: I0916 10:38:51.327378    1565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/14e280ea-8ba8-4805-844c-aeff8fb18ce0-tmp-dir\") pod \"14e280ea-8ba8-4805-844c-aeff8fb18ce0\" (UID: \"14e280ea-8ba8-4805-844c-aeff8fb18ce0\") "
	Sep 16 10:38:51 addons-191972 kubelet[1565]: I0916 10:38:51.327695    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14e280ea-8ba8-4805-844c-aeff8fb18ce0-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "14e280ea-8ba8-4805-844c-aeff8fb18ce0" (UID: "14e280ea-8ba8-4805-844c-aeff8fb18ce0"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 16 10:38:51 addons-191972 kubelet[1565]: I0916 10:38:51.329277    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e280ea-8ba8-4805-844c-aeff8fb18ce0-kube-api-access-sxhjs" (OuterVolumeSpecName: "kube-api-access-sxhjs") pod "14e280ea-8ba8-4805-844c-aeff8fb18ce0" (UID: "14e280ea-8ba8-4805-844c-aeff8fb18ce0"). InnerVolumeSpecName "kube-api-access-sxhjs". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:38:51 addons-191972 kubelet[1565]: I0916 10:38:51.428186    1565 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/14e280ea-8ba8-4805-844c-aeff8fb18ce0-tmp-dir\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:38:51 addons-191972 kubelet[1565]: I0916 10:38:51.428217    1565 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sxhjs\" (UniqueName: \"kubernetes.io/projected/14e280ea-8ba8-4805-844c-aeff8fb18ce0-kube-api-access-sxhjs\") on node \"addons-191972\" DevicePath \"\""
	
	
	==> storage-provisioner [62a4b8c25074dcef9656a9b6e749de86b5f7c97f45a25cd328153d14be1d5a78] <==
	I0916 10:24:03.139108       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:24:03.230289       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:24:03.230361       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:24:03.238016       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:24:03.238457       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff346362-6d54-491c-b142-6d85e8abf2d5", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-191972_e8089787-9f1d-4116-8123-a579d9482714 became leader
	I0916 10:24:03.238505       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-191972_e8089787-9f1d-4116-8123-a579d9482714!
	I0916 10:24:03.339118       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-191972_e8089787-9f1d-4116-8123-a579d9482714!
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-191972 -n addons-191972
helpers_test.go:261: (dbg) Run:  kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (429.097µs)
helpers_test.go:263: kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/MetricsServer (367.04s)

TestAddons/parallel/HelmTiller (89s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.948871ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-ddkxz" [dfe534c4-9e29-4907-b8cc-1dd12fc52f45] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.002977734s
addons_test.go:475: (dbg) Run:  kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (431.906µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (459.081µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (412.051µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (360.134µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (20.776038ms)
addons_test.go:475: (dbg) Run:  kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (484.295µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (395.988µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (433.226µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (386.727µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (402.917µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (402.615µs)
addons_test.go:475: (dbg) Run:  kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Non-zero exit: kubectl --context addons-191972 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: fork/exec /usr/local/bin/kubectl: exec format error (401.978µs)
addons_test.go:489: failed checking helm tiller: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-191972 addons disable helm-tiller --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/HelmTiller]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-191972
helpers_test.go:235: (dbg) docker inspect addons-191972:
-- stdout --
	[
	    {
	        "Id": "49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd",
	        "Created": "2024-09-16T10:23:37.048894749Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 13416,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:23:37.183215602Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/hosts",
	        "LogPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd-json.log",
	        "Name": "/addons-191972",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-191972:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-191972",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-191972",
	                "Source": "/var/lib/docker/volumes/addons-191972/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-191972",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-191972",
	                "name.minikube.sigs.k8s.io": "addons-191972",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b247e3d2e57f223fa64fb9fece255c3b6a0f61eb064ba71e6e8c51f7e6b8590a",
	            "SandboxKey": "/var/run/docker/netns/b247e3d2e57f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-191972": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "aac8db9a46c7b7c219b85113240d1d4a2ee20d1c156fb7315fdf6aa5e797f6a8",
	                    "EndpointID": "ab683490c93590fb0411cd607b8ad8f3100f7ae01f11dd3e855f6321d940faae",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-191972",
	                        "49285aed0ac6"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-191972 -n addons-191972
helpers_test.go:244: <<< TestAddons/parallel/HelmTiller FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/HelmTiller]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-191972 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-191972 logs -n 25: (1.182969869s)
helpers_test.go:252: TestAddons/parallel/HelmTiller logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-297488              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p download-only-297488              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-024449              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-024449              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-297488              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-024449              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | download-docker-065822 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | download-docker-065822               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-065822            | download-docker-065822 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | binary-mirror-727123   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | binary-mirror-727123                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34779               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-727123              | binary-mirror-727123   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | enable dashboard -p                  | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| start   | -p addons-191972 --wait=true         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:27 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | -p addons-191972                     |                        |         |         |                     |                     |
	| ip      | addons-191972 ip                     | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | -p addons-191972                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:33 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:15.015457   12653 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:15.015610   12653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:15.015623   12653 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:15.015629   12653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:15.015835   12653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:23:15.016423   12653 out.go:352] Setting JSON to false
	I0916 10:23:15.017221   12653 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":339,"bootTime":1726481856,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:15.017316   12653 start.go:139] virtualization: kvm guest
	I0916 10:23:15.019468   12653 out.go:177] * [addons-191972] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:23:15.020856   12653 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:15.020860   12653 notify.go:220] Checking for updates...
	I0916 10:23:15.023158   12653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:15.024282   12653 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:23:15.025336   12653 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:23:15.026362   12653 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:15.027468   12653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:15.028714   12653 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:15.049632   12653 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:23:15.049710   12653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:15.095467   12653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:15.085826834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
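
	The `docker system info --format "{{json .}}"` probe above (repeated during driver validation just below) is how minikube inspects host capabilities before committing to the docker driver. A minimal sketch of the same probe in Go; the `DockerInfo` struct and the handful of fields it picks out are illustrative choices for this sketch, not minikube's actual types:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// DockerInfo decodes a few of the fields visible in the log line above;
	// the JSON keys match `docker system info --format "{{json .}}"` output.
	type DockerInfo struct {
		ServerVersion   string `json:"ServerVersion"`
		CgroupDriver    string `json:"CgroupDriver"`
		MemTotal        int64  `json:"MemTotal"`
		NCPU            int    `json:"NCPU"`
		OperatingSystem string `json:"OperatingSystem"`
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			fmt.Println("docker not reachable:", err)
			return
		}
		var info DockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			fmt.Println("decode:", err)
			return
		}
		fmt.Printf("docker %s on %s: %d CPUs, %d MiB RAM, cgroup driver %s\n",
			info.ServerVersion, info.OperatingSystem, info.NCPU,
			info.MemTotal/1024/1024, info.CgroupDriver)
	}
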
	I0916 10:23:15.095614   12653 docker.go:318] overlay module found
	I0916 10:23:15.097552   12653 out.go:177] * Using the docker driver based on user configuration
	I0916 10:23:15.098917   12653 start.go:297] selected driver: docker
	I0916 10:23:15.098932   12653 start.go:901] validating driver "docker" against <nil>
	I0916 10:23:15.098957   12653 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:15.099817   12653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:15.144749   12653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:15.136589077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:15.144922   12653 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:15.145171   12653 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:15.147081   12653 out.go:177] * Using Docker driver with root privileges
	I0916 10:23:15.148504   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:15.148563   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:15.148575   12653 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:15.148632   12653 start.go:340] cluster config:
	{Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:15.149981   12653 out.go:177] * Starting "addons-191972" primary control-plane node in "addons-191972" cluster
	I0916 10:23:15.151239   12653 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:23:15.152375   12653 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:23:15.153439   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:15.153479   12653 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:23:15.153492   12653 cache.go:56] Caching tarball of preloaded images
	I0916 10:23:15.153495   12653 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:23:15.153601   12653 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:23:15.153613   12653 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:23:15.153950   12653 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json ...
	I0916 10:23:15.153974   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json: {Name:mk77e04db13eac753d69895eba14a3f7223b28d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
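
	The `lock.go:35` lines that recur throughout this log are minikube serializing writes to shared files under the .minikube directory, so concurrent test processes cannot interleave. A rough equivalent using an advisory flock(2) lock; the helper name and the Linux-only `syscall.Flock` call are assumptions of this sketch, not minikube's actual locking implementation:

	package main

	import (
		"os"
		"syscall"
	)

	// writeFileLocked takes an exclusive advisory lock on the target file
	// before writing, so two processes never interleave partial writes.
	// Linux/BSD only: syscall.Flock does not exist on Windows.
	func writeFileLocked(path string, data []byte, perm os.FileMode) error {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, perm)
		if err != nil {
			return err
		}
		defer f.Close()
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
			return err
		}
		defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
		if err := f.Truncate(0); err != nil {
			return err
		}
		_, err = f.Write(data)
		return err
	}

	func main() {
		_ = writeFileLocked("/tmp/config.json", []byte(`{"Name":"addons-191972"}`+"\n"), 0o644)
	}
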
	I0916 10:23:15.169560   12653 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:23:15.169666   12653 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:23:15.169681   12653 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:23:15.169685   12653 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:23:15.169694   12653 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:23:15.169701   12653 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:23:27.861517   12653 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:23:27.861553   12653 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:23:27.861589   12653 start.go:360] acquireMachinesLock for addons-191972: {Name:mk1204ee6335c794af5ff39cd93a214e3c1d654b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:23:27.861691   12653 start.go:364] duration metric: took 80.959µs to acquireMachinesLock for "addons-191972"
	I0916 10:23:27.861720   12653 start.go:93] Provisioning new machine with config: &{Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:23:27.861797   12653 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:23:27.864363   12653 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0916 10:23:27.864609   12653 start.go:159] libmachine.API.Create for "addons-191972" (driver="docker")
	I0916 10:23:27.864644   12653 client.go:168] LocalClient.Create starting
	I0916 10:23:27.864787   12653 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:23:28.100386   12653 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:23:28.472961   12653 cli_runner.go:164] Run: docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:23:28.488573   12653 cli_runner.go:211] docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:23:28.488653   12653 network_create.go:284] running [docker network inspect addons-191972] to gather additional debugging logs...
	I0916 10:23:28.488675   12653 cli_runner.go:164] Run: docker network inspect addons-191972
	W0916 10:23:28.503724   12653 cli_runner.go:211] docker network inspect addons-191972 returned with exit code 1
	I0916 10:23:28.503773   12653 network_create.go:287] error running [docker network inspect addons-191972]: docker network inspect addons-191972: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-191972 not found
	I0916 10:23:28.503790   12653 network_create.go:289] output of [docker network inspect addons-191972]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-191972 not found
	
	** /stderr **
	I0916 10:23:28.503874   12653 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:28.520445   12653 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ac6790}
	I0916 10:23:28.520486   12653 network_create.go:124] attempt to create docker network addons-191972 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 10:23:28.520531   12653 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-191972 addons-191972
	I0916 10:23:28.578324   12653 network_create.go:108] docker network addons-191972 192.168.49.0/24 created
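
	The `docker network create` invocation above reserves the first free /24 (192.168.49.0/24) and labels the network so minikube can find and garbage-collect it later. A stripped-down replay of that call from Go; the flag set mirrors the logged command exactly, and error handling is shortened for the sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// A bridge network with a fixed subnet/gateway, MTU 1500, and
		// minikube's ownership labels, as in the log line above.
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.49.0/24",
			"--gateway=192.168.49.1",
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=addons-191972",
			"addons-191972")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("network create failed:", err)
		}
	}
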
	I0916 10:23:28.578353   12653 kic.go:121] calculated static IP "192.168.49.2" for the "addons-191972" container
	I0916 10:23:28.578405   12653 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:23:28.593459   12653 cli_runner.go:164] Run: docker volume create addons-191972 --label name.minikube.sigs.k8s.io=addons-191972 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:23:28.611104   12653 oci.go:103] Successfully created a docker volume addons-191972
	I0916 10:23:28.611189   12653 cli_runner.go:164] Run: docker run --rm --name addons-191972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --entrypoint /usr/bin/test -v addons-191972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:23:32.566442   12653 cli_runner.go:217] Completed: docker run --rm --name addons-191972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --entrypoint /usr/bin/test -v addons-191972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (3.955205965s)
	I0916 10:23:32.566475   12653 oci.go:107] Successfully prepared a docker volume addons-191972
	I0916 10:23:32.566499   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:32.566524   12653 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:23:32.566588   12653 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-191972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:23:36.989473   12653 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-191972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.422844639s)
	I0916 10:23:36.989499   12653 kic.go:203] duration metric: took 4.422974303s to extract preloaded images to volume ...
	W0916 10:23:36.989616   12653 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:23:36.989704   12653 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:23:37.034645   12653 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-191972 --name addons-191972 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-191972 --network addons-191972 --ip 192.168.49.2 --volume addons-191972:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:23:37.351088   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Running}}
	I0916 10:23:37.369798   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.389505   12653 cli_runner.go:164] Run: docker exec addons-191972 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:23:37.432507   12653 oci.go:144] the created container "addons-191972" has a running status.
	I0916 10:23:37.432542   12653 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa...
	I0916 10:23:37.512853   12653 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:23:37.532177   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.549342   12653 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:23:37.549361   12653 kic_runner.go:114] Args: [docker exec --privileged addons-191972 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:23:37.594990   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.611429   12653 machine.go:93] provisionDockerMachine start ...
	I0916 10:23:37.611513   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:37.628951   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:37.629230   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:37.629249   12653 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:23:37.630101   12653 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54456->127.0.0.1:32768: read: connection reset by peer
	I0916 10:23:40.759062   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-191972
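
	The dial error at 10:23:37 followed by a clean `hostname` result at 10:23:40 is the expected pattern: sshd inside the freshly started container is not up yet, so libmachine retries until the forwarded port accepts connections. A minimal wait-for-SSH loop with only the standard library; the address and timeout values are illustrative:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForPort polls a TCP endpoint until it accepts a connection or the
	// deadline passes -- the same shape of loop used before running SSH
	// commands against a container that just booted.
	func waitForPort(addr string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			conn, err := net.DialTimeout("tcp", addr, time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not reachable after %s", addr, deadline)
	}

	func main() {
		if err := waitForPort("127.0.0.1:32768", 30*time.Second); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh port is up")
	}
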
	
	I0916 10:23:40.759087   12653 ubuntu.go:169] provisioning hostname "addons-191972"
	I0916 10:23:40.759139   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:40.776123   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:40.776294   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:40.776306   12653 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-191972 && echo "addons-191972" | sudo tee /etc/hostname
	I0916 10:23:40.917999   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-191972
	
	I0916 10:23:40.918073   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:40.934369   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:40.934536   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:40.934552   12653 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-191972' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-191972/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-191972' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:23:41.063670   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:23:41.063696   12653 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:23:41.063755   12653 ubuntu.go:177] setting up certificates
	I0916 10:23:41.063769   12653 provision.go:84] configureAuth start
	I0916 10:23:41.063821   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.080185   12653 provision.go:143] copyHostCerts
	I0916 10:23:41.080289   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:23:41.080452   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:23:41.080539   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:23:41.080607   12653 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.addons-191972 san=[127.0.0.1 192.168.49.2 addons-191972 localhost minikube]
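
	The server certificate generated above is CA-signed, with SANs covering every name the Docker API endpoint might be reached by (127.0.0.1, the container IP, the hostname). A self-contained sketch of issuing such a cert with crypto/x509; the CA is generated inline here for brevity (minikube loads its existing ca.pem/ca-key.pem), and errors are elided:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA for the sketch; minikube would load ca.pem/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs listed in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-191972"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:     []string{"addons-191972", "localhost", "minikube"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
		fmt.Println("issued server cert signed by", caCert.Subject.CommonName)
	}
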
	I0916 10:23:41.189624   12653 provision.go:177] copyRemoteCerts
	I0916 10:23:41.189685   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:23:41.189718   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.206072   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.299940   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:23:41.321259   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:23:41.342100   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:23:41.362764   12653 provision.go:87] duration metric: took 298.977855ms to configureAuth
	I0916 10:23:41.362793   12653 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:23:41.362955   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:41.362966   12653 machine.go:96] duration metric: took 3.751519266s to provisionDockerMachine
	I0916 10:23:41.362991   12653 client.go:171] duration metric: took 13.498318264s to LocalClient.Create
	I0916 10:23:41.363014   12653 start.go:167] duration metric: took 13.498406844s to libmachine.API.Create "addons-191972"
	I0916 10:23:41.363024   12653 start.go:293] postStartSetup for "addons-191972" (driver="docker")
	I0916 10:23:41.363035   12653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:41.363112   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:41.363159   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.379631   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.472315   12653 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:23:41.475416   12653 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:41.475455   12653 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:41.475469   12653 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:41.475477   12653 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:23:41.475490   12653 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:23:41.475562   12653 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:23:41.475593   12653 start.go:296] duration metric: took 112.560003ms for postStartSetup
	I0916 10:23:41.475953   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.491831   12653 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json ...
	I0916 10:23:41.492098   12653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:23:41.492159   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.508709   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.604422   12653 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:23:41.608355   12653 start.go:128] duration metric: took 13.746544864s to createHost
	I0916 10:23:41.608378   12653 start.go:83] releasing machines lock for "addons-191972", held for 13.74667303s
	I0916 10:23:41.608449   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.624552   12653 ssh_runner.go:195] Run: cat /version.json
	I0916 10:23:41.624594   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.624666   12653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:23:41.624742   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.640830   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.641558   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.811513   12653 ssh_runner.go:195] Run: systemctl --version
	I0916 10:23:41.816090   12653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:41.820031   12653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:23:41.841966   12653 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
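
	The find/sed one-liner above backfills two fields that some base-image loopback configs ship without: a "name" key and a cniVersion the runtime will accept. After patching, the loopback config looks roughly like the following (illustrative; exact contents vary by image):

	{
	  "cniVersion": "1.0.0",
	  "name": "loopback",
	  "type": "loopback"
	}
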
	I0916 10:23:41.842040   12653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:41.867614   12653 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 10:23:41.867637   12653 start.go:495] detecting cgroup driver to use...
	I0916 10:23:41.867665   12653 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:41.867707   12653 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:23:41.878761   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:23:41.889209   12653 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:23:41.889272   12653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:23:41.901658   12653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:23:41.914376   12653 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:23:41.989625   12653 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:23:42.064036   12653 docker.go:233] disabling docker service ...
	I0916 10:23:42.064087   12653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:23:42.082378   12653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:23:42.092694   12653 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:23:42.163431   12653 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:23:42.235566   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:23:42.245920   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:42.260071   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:23:42.268844   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:23:42.277914   12653 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:23:42.277973   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:23:42.287090   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:42.295426   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:23:42.303716   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:42.312468   12653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:42.320449   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:23:42.328970   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:23:42.337386   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
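
	Taken together, the sed edits above leave /etc/containerd/config.toml with the CRI plugin pinned to cgroupfs, the matching pause image, and unprivileged ports allowed. The net effect is roughly this fragment (illustrative; the real file carries many more keys):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  restrict_oom_score_adj = false
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
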
	I0916 10:23:42.345791   12653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:42.352855   12653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:23:42.359971   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:42.438798   12653 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:23:42.548862   12653 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:23:42.548940   12653 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:23:42.552403   12653 start.go:563] Will wait 60s for crictl version
	I0916 10:23:42.552460   12653 ssh_runner.go:195] Run: which crictl
	I0916 10:23:42.555471   12653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:23:42.586679   12653 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:23:42.586752   12653 ssh_runner.go:195] Run: containerd --version
	I0916 10:23:42.608454   12653 ssh_runner.go:195] Run: containerd --version
	I0916 10:23:42.632432   12653 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:23:42.633762   12653 cli_runner.go:164] Run: docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:42.650400   12653 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:42.653892   12653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:23:42.664053   12653 kubeadm.go:883] updating cluster {Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:42.664154   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:42.664195   12653 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:42.695688   12653 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:23:42.695710   12653 containerd.go:534] Images already preloaded, skipping extraction
	I0916 10:23:42.695778   12653 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:42.727148   12653 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:23:42.727166   12653 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:23:42.727174   12653 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0916 10:23:42.727255   12653 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-191972 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:23:42.727302   12653 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:23:42.757474   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:42.757493   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:42.757502   12653 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:42.757520   12653 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-191972 NodeName:addons-191972 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:42.757633   12653 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-191972"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
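
	The generated kubeadm config above chains four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. One way to sanity-check a config like this without touching the node is kubeadm's dry-run mode (shown for illustration only; minikube itself proceeds straight to `kubeadm init` below):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
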
	
	I0916 10:23:42.757684   12653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:42.765604   12653 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:23:42.765672   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:42.773363   12653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:23:42.789280   12653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:42.805100   12653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0916 10:23:42.820420   12653 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:23:42.823264   12653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:23:42.832700   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:42.907069   12653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:42.919246   12653 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972 for IP: 192.168.49.2
	I0916 10:23:42.919266   12653 certs.go:194] generating shared ca certs ...
	I0916 10:23:42.919279   12653 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:42.919399   12653 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:23:43.054784   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt ...
	I0916 10:23:43.054815   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt: {Name:mkf05eaa3032985e939bd1a93aa36a6d50242974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.055008   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key ...
	I0916 10:23:43.055031   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key: {Name:mk4cf19316dad04ab708c5c17e172ec92fc35230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.055134   12653 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:23:43.268289   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt ...
	I0916 10:23:43.268318   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt: {Name:mk68da284b9ad8d396a1f11e7cfb94cc6f208c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.268510   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key ...
	I0916 10:23:43.268532   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key: {Name:mkdf8c5da2a6d70c9ece2277843ebe69f9105c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.268626   12653 certs.go:256] generating profile certs ...
	I0916 10:23:43.268694   12653 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key
	I0916 10:23:43.268720   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt with IP's: []
	I0916 10:23:43.341520   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt ...
	I0916 10:23:43.341551   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: {Name:mke3c2895145f9c692cb1e6451d9766499ccc877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.341738   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key ...
	I0916 10:23:43.341755   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key: {Name:mkd6237ae8ebf429452ae0c60cea457b1f9cff1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.341855   12653 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369
	I0916 10:23:43.341882   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 10:23:43.403750   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 ...
	I0916 10:23:43.403775   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369: {Name:mk72db26b8519849abdf811ed93be5caeac2267d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.403951   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369 ...
	I0916 10:23:43.403973   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369: {Name:mk4b11dab0a085e395344dc35616a0c16f298191 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.404065   12653 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt
	I0916 10:23:43.404155   12653 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key
	I0916 10:23:43.404230   12653 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key
	I0916 10:23:43.404250   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt with IP's: []
	I0916 10:23:43.488130   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt ...
	I0916 10:23:43.488160   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt: {Name:mk11d8f9c437e5586897185f4551df7594041471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.488342   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key ...
	I0916 10:23:43.488360   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key: {Name:mk18734ee357c50ce0ff509ffb1c7e42743fa1b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.488577   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:23:43.488617   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:23:43.488652   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:43.488682   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:23:43.489279   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:43.511557   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:23:43.532934   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:43.553377   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:43.575078   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:23:43.595868   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:43.616905   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:43.637839   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:43.658915   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:43.680485   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:23:43.696295   12653 ssh_runner.go:195] Run: openssl version
	I0916 10:23:43.701282   12653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:43.709681   12653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.712715   12653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.712762   12653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.718832   12653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
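The three commands above install minikube's CA into the node's shared trust store: the PEM is linked under /usr/share/ca-certificates, its OpenSSL subject hash is computed with `openssl x509 -hash -noout`, and a `<hash>.0` symlink (here `b5213941.0`) is created in /etc/ssl/certs so OpenSSL-based clients can locate it. A minimal Go sketch of the same sequence, assuming direct local filesystem access instead of the ssh_runner used here (`installCA` is a hypothetical helper; paths are taken from the log):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA mirrors the trust-store steps in the log: link the PEM into
// the shared cert directory, compute its OpenSSL subject hash, and create
// the <hash>.0 symlink that OpenSSL uses for CA lookup. Unlike `ln -fs`,
// os.Symlink does not replace an existing link, so we tolerate IsExist.
func installCA(caPath string) error {
	target := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	if err := os.Symlink(caPath, target); err != nil && !os.IsExist(err) {
		return err
	}
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", target).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	if err := os.Symlink(target, link); err != nil && !os.IsExist(err) {
		return err
	}
	return nil
}

func main() {
	if err := installCA("/var/lib/minikube/certs/ca.crt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```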
	I0916 10:23:43.727190   12653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:43.730247   12653 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:23:43.730290   12653 kubeadm.go:392] StartCluster: {Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:43.730356   12653 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:23:43.730405   12653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:23:43.761830   12653 cri.go:89] found id: ""
	I0916 10:23:43.761893   12653 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:43.770086   12653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:43.778465   12653 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:23:43.778522   12653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:43.786355   12653 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:43.786373   12653 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:43.786419   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:43.794471   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:43.794519   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:43.802487   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:43.810401   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:43.810451   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:43.817541   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:43.824799   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:43.824842   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:43.832032   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:43.839239   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:43.839298   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:23:43.847649   12653 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:23:43.880192   12653 kubeadm.go:310] W0916 10:23:43.879583    1109 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:43.880773   12653 kubeadm.go:310] W0916 10:23:43.880291    1109 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:43.896580   12653 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:23:43.944226   12653 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:23:52.227261   12653 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:52.227338   12653 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:52.227418   12653 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:23:52.227466   12653 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:23:52.227501   12653 kubeadm.go:310] OS: Linux
	I0916 10:23:52.227541   12653 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:23:52.227584   12653 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:23:52.227625   12653 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:23:52.227670   12653 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:23:52.227711   12653 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:23:52.227786   12653 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:23:52.227872   12653 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:23:52.227947   12653 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:23:52.227994   12653 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:23:52.228098   12653 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:52.228218   12653 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:52.228360   12653 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:23:52.228491   12653 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:52.230143   12653 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:52.230239   12653 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:52.230328   12653 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:52.230422   12653 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:52.230504   12653 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:52.230596   12653 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:52.230685   12653 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:52.230768   12653 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:52.230910   12653 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-191972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:52.230984   12653 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:52.231130   12653 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-191972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:52.231228   12653 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:52.231331   12653 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:52.231395   12653 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:52.231471   12653 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:52.231543   12653 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:52.231622   12653 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:52.231683   12653 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:52.231759   12653 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:52.231871   12653 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:52.231979   12653 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:52.232069   12653 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:52.233407   12653 out.go:235]   - Booting up control plane ...
	I0916 10:23:52.233500   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:52.233589   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:52.233654   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:52.233747   12653 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:52.233846   12653 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:52.233895   12653 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:52.234011   12653 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:52.234102   12653 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:52.234155   12653 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.63037ms
	I0916 10:23:52.234224   12653 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:52.234282   12653 kubeadm.go:310] [api-check] The API server is healthy after 4.501222011s
	I0916 10:23:52.234402   12653 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:52.234544   12653 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:52.234625   12653 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:52.234780   12653 kubeadm.go:310] [mark-control-plane] Marking the node addons-191972 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:52.234830   12653 kubeadm.go:310] [bootstrap-token] Using token: fe3fo6.40ynbll2pbwpp3it
	I0916 10:23:52.236918   12653 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:52.237043   12653 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:52.237118   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:52.237261   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:52.237418   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:52.237547   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:52.237659   12653 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:52.237791   12653 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:52.237856   12653 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:52.237898   12653 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:52.237904   12653 kubeadm.go:310] 
	I0916 10:23:52.237963   12653 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:52.237971   12653 kubeadm.go:310] 
	I0916 10:23:52.238040   12653 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:52.238046   12653 kubeadm.go:310] 
	I0916 10:23:52.238070   12653 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:52.238123   12653 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:52.238167   12653 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:52.238173   12653 kubeadm.go:310] 
	I0916 10:23:52.238218   12653 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:52.238223   12653 kubeadm.go:310] 
	I0916 10:23:52.238268   12653 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:52.238274   12653 kubeadm.go:310] 
	I0916 10:23:52.238329   12653 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:52.238418   12653 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:52.238507   12653 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:52.238515   12653 kubeadm.go:310] 
	I0916 10:23:52.238598   12653 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:52.238681   12653 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:52.238690   12653 kubeadm.go:310] 
	I0916 10:23:52.238801   12653 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fe3fo6.40ynbll2pbwpp3it \
	I0916 10:23:52.238908   12653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 10:23:52.238933   12653 kubeadm.go:310] 	--control-plane 
	I0916 10:23:52.238939   12653 kubeadm.go:310] 
	I0916 10:23:52.239012   12653 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:52.239020   12653 kubeadm.go:310] 
	I0916 10:23:52.239095   12653 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fe3fo6.40ynbll2pbwpp3it \
	I0916 10:23:52.239199   12653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
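The `--discovery-token-ca-cert-hash` printed in the join commands above is not a hash of the CA file's bytes: kubeadm defines it as the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A self-contained sketch that reproduces the `sha256:…` value from the cluster CA (path taken from the certs phase earlier in this log):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// CA written by the certs phase earlier in this log.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```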
	I0916 10:23:52.239210   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:52.239215   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:52.240733   12653 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:23:52.241980   12653 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:23:52.245609   12653 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:23:52.245625   12653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:23:52.261912   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
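At cni.go:143 above, minikube pairs the docker driver with the containerd runtime and recommends kindnet: with a kic driver and a non-Docker runtime it cannot rely on the runtime's built-in bridge, so it applies a kindnet manifest through kubectl. A deliberately simplified sketch of that decision (`chooseCNI` is hypothetical; the real cni.go handles many more cases, such as a user-selected CNI or multinode clusters):

```go
package main

import "fmt"

// chooseCNI is a simplified sketch of the decision logged by cni.go:
// kic drivers (docker/podman) paired with a non-Docker runtime get
// kindnet; everything else here falls back to the runtime's default.
func chooseCNI(driver, runtime string) string {
	kic := driver == "docker" || driver == "podman"
	if kic && runtime != "docker" {
		return "kindnet"
	}
	return "default"
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // "kindnet", as in the log
}
```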
	I0916 10:23:52.447057   12653 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:52.447144   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.447165   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-191972 minikube.k8s.io/updated_at=2024_09_16T10_23_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-191972 minikube.k8s.io/primary=true
	I0916 10:23:52.543497   12653 ops.go:34] apiserver oom_adj: -16
	I0916 10:23:52.543643   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:53.044491   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:53.543770   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:54.044061   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:54.544691   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:55.044249   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:55.543918   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:56.043679   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:56.543717   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:57.044619   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:57.107839   12653 kubeadm.go:1113] duration metric: took 4.660750668s to wait for elevateKubeSystemPrivileges
	I0916 10:23:57.107871   12653 kubeadm.go:394] duration metric: took 13.37758355s to StartCluster
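The ten `kubectl get sa default` invocations above, spaced roughly 500ms apart, are minikube waiting for the default service account to exist so the minikube-rbac cluster-admin binding can take effect; the duration metric then reports the loop took 4.66s. A minimal sketch of that poll-until-success loop, assuming kubectl is on PATH (`waitForDefaultSA` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or
// the deadline passes, mirroring the ~500ms cadence visible in the log.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not ready after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```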
	I0916 10:23:57.107890   12653 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:57.107998   12653 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:23:57.108383   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:57.108581   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:57.108610   12653 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:23:57.108666   12653 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:23:57.108789   12653 addons.go:69] Setting yakd=true in profile "addons-191972"
	I0916 10:23:57.108813   12653 addons.go:234] Setting addon yakd=true in "addons-191972"
	I0916 10:23:57.108830   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:57.108844   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.108885   12653 addons.go:69] Setting inspektor-gadget=true in profile "addons-191972"
	I0916 10:23:57.108900   12653 addons.go:234] Setting addon inspektor-gadget=true in "addons-191972"
	I0916 10:23:57.108928   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109000   12653 addons.go:69] Setting gcp-auth=true in profile "addons-191972"
	I0916 10:23:57.109025   12653 mustload.go:65] Loading cluster: addons-191972
	I0916 10:23:57.109143   12653 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-191972"
	I0916 10:23:57.109187   12653 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-191972"
	I0916 10:23:57.109185   12653 addons.go:69] Setting default-storageclass=true in profile "addons-191972"
	I0916 10:23:57.109211   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109225   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:57.109232   12653 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-191972"
	I0916 10:23:57.109216   12653 addons.go:69] Setting cloud-spanner=true in profile "addons-191972"
	I0916 10:23:57.109259   12653 addons.go:69] Setting storage-provisioner=true in profile "addons-191972"
	I0916 10:23:57.109265   12653 addons.go:234] Setting addon cloud-spanner=true in "addons-191972"
	I0916 10:23:57.109274   12653 addons.go:234] Setting addon storage-provisioner=true in "addons-191972"
	I0916 10:23:57.109308   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109323   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109407   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109485   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109507   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109547   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109684   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109757   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109825   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110167   12653 addons.go:69] Setting ingress-dns=true in profile "addons-191972"
	I0916 10:23:57.110372   12653 addons.go:234] Setting addon ingress-dns=true in "addons-191972"
	I0916 10:23:57.110546   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111202   12653 addons.go:69] Setting helm-tiller=true in profile "addons-191972"
	I0916 10:23:57.111255   12653 addons.go:234] Setting addon helm-tiller=true in "addons-191972"
	I0916 10:23:57.111282   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111445   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.111484   12653 addons.go:69] Setting ingress=true in profile "addons-191972"
	I0916 10:23:57.111498   12653 addons.go:234] Setting addon ingress=true in "addons-191972"
	I0916 10:23:57.111527   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111731   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110913   12653 addons.go:69] Setting metrics-server=true in profile "addons-191972"
	I0916 10:23:57.111983   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.111987   12653 addons.go:234] Setting addon metrics-server=true in "addons-191972"
	I0916 10:23:57.112171   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110926   12653 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-191972"
	I0916 10:23:57.113223   12653 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-191972"
	I0916 10:23:57.113258   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.113700   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.115817   12653 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:57.116675   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110938   12653 addons.go:69] Setting registry=true in profile "addons-191972"
	I0916 10:23:57.116963   12653 addons.go:234] Setting addon registry=true in "addons-191972"
	I0916 10:23:57.117093   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110938   12653 addons.go:69] Setting volcano=true in profile "addons-191972"
	I0916 10:23:57.117245   12653 addons.go:234] Setting addon volcano=true in "addons-191972"
	I0916 10:23:57.117313   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110949   12653 addons.go:69] Setting volumesnapshots=true in profile "addons-191972"
	I0916 10:23:57.117350   12653 addons.go:234] Setting addon volumesnapshots=true in "addons-191972"
	I0916 10:23:57.117397   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.117799   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.117919   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.118954   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:57.110924   12653 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-191972"
	I0916 10:23:57.120855   12653 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-191972"
	I0916 10:23:57.121186   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.148826   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
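Each addon goroutine above re-verifies that the machine is running before touching it, which is why `docker container inspect addons-191972 --format={{.State.Status}}` appears once per addon. A short sketch of that check, shelling out the same way the cli_runner lines do (`containerStatus` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerStatus runs the same inspect command as the cli_runner lines
// above, returning e.g. "running" or "exited" for the named container.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	status, err := containerStatus("addons-191972")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("machine state:", status)
}
```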
	I0916 10:23:57.156121   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:57.158094   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:23:57.160078   12653 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:57.160230   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:57.163394   12653 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:57.163405   12653 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:57.163428   12653 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:57.163491   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.163933   12653 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:57.163952   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:23:57.163999   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.166339   12653 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:57.166352   12653 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:57.166505   12653 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:57.166525   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:57.166591   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176509   12653 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:57.176539   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:57.176597   12653 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:57.176613   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:57.176614   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176667   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176871   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.184510   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:57.184923   12653 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:23:57.187620   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:57.187908   12653 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:57.187925   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:23:57.188005   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.190192   12653 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:57.190888   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:57.191984   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:57.192004   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:57.192062   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.192462   12653 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-191972"
	I0916 10:23:57.192519   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.193186   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.195485   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:57.196395   12653 addons.go:234] Setting addon default-storageclass=true in "addons-191972"
	I0916 10:23:57.196441   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.197033   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.200024   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:57.200756   12653 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:57.202388   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:57.202409   12653 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:57.202572   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.204739   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:57.206967   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:57.217725   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:57.217900   12653 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:57.219581   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:57.219714   12653 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:57.219798   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.219620   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:57.220511   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:57.221727   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.235796   12653 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:57.237579   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:57.239326   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:57.239350   12653 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:57.239411   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.239514   12653 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:57.241480   12653 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:57.241502   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:57.241555   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.243883   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.255850   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.256610   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.261965   12653 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 10:23:57.263559   12653 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 10:23:57.265255   12653 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 10:23:57.266412   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.267838   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.268005   12653 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:57.268022   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 10:23:57.268074   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.269050   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.276483   12653 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:57.276507   12653 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:57.276573   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.283025   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.284257   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:23:57.288880   12653 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:57.290776   12653 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:57.292419   12653 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:57.292444   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:57.292510   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.295145   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.295780   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.297628   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.298120   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.300416   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.306147   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.311231   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.314549   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	W0916 10:23:57.324739   12653 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 10:23:57.324769   12653 retry.go:31] will retry after 374.435778ms: ssh: handshake failed: EOF
	W0916 10:23:57.325602   12653 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 10:23:57.325619   12653 retry.go:31] will retry after 150.651165ms: ssh: handshake failed: EOF
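With a dozen SSH sessions dialed almost simultaneously against the same mapped port, a couple of handshakes fail with EOF; retry.go waits a randomized interval and redials, as the two warnings above show. A sketch of that retry-with-jitter pattern, assuming a generic fallible dial function (`retryDial` is hypothetical):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryDial redials after a randomized backoff, the pattern behind the
// "will retry after 374.435778ms" lines above; the jitter spreads out
// competing handshakes against the same endpoint.
func retryDial(dial func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = dial(); err == nil {
			return nil
		}
		backoff := time.Duration(100+rand.Intn(400)) * time.Millisecond
		fmt.Printf("dial failure (will retry after %s): %v\n", backoff, err)
		time.Sleep(backoff)
	}
	return err
}

func main() {
	calls := 0
	err := retryDial(func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	}, 5)
	fmt.Println("result:", err)
}
```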
	I0916 10:23:57.330682   12653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:57.629690   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:57.729822   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:57.730227   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:57.742355   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:57.824974   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:57.842831   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:57.842917   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:57.843332   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:57.921972   12653 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:57.922058   12653 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:57.922011   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:57.922034   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:57.922195   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:57.929874   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:57.929901   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:57.941141   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:57.941166   12653 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:23:58.138273   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:58.138369   12653 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:58.222261   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:58.222352   12653 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:58.229572   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:58.229660   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:58.232627   12653 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:58.232698   12653 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:58.322393   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:58.322420   12653 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:58.339998   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:58.435282   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:58.435313   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:58.435591   12653 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.15128486s)
	I0916 10:23:58.435618   12653 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
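The sed pipeline that just completed rewrites the coredns ConfigMap so that pods can resolve host.minikube.internal to the host gateway (192.168.49.1): a hosts block with a fallthrough is inserted ahead of the forward directive. A sketch of the same transformation applied to a Corefile string in Go rather than sed (`injectHostRecord` and the sample Corefile are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts block before CoreDNS's forward
// directive, the same edit the sed pipeline above applies to the
// coredns ConfigMap; fallthrough lets unmatched names continue on.
func injectHostRecord(corefile, ip string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", ip)
	idx := strings.Index(corefile, "        forward .")
	if idx < 0 {
		return corefile // no forward directive; leave untouched
	}
	return corefile[:idx] + hosts + corefile[idx:]
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}
```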
	I0916 10:23:58.436958   12653 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.1062474s)
	I0916 10:23:58.437947   12653 node_ready.go:35] waiting up to 6m0s for node "addons-191972" to be "Ready" ...
	I0916 10:23:58.441471   12653 node_ready.go:49] node "addons-191972" has status "Ready":"True"
	I0916 10:23:58.441502   12653 node_ready.go:38] duration metric: took 3.529013ms for node "addons-191972" to be "Ready" ...
	I0916 10:23:58.441514   12653 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
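The node_ready lines above wait for the node's Ready condition to turn True before extending the wait to the system-critical pods. A minimal client-go sketch of that readiness check, assuming the kubeconfig path and node name from this log (`nodeReady` is a hypothetical helper; the real node_ready.go also handles watch-based waiting and timeouts):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True, the
// check behind the node_ready lines above.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		ok, err := nodeReady(cs, "addons-191972")
		if err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(3 * time.Second)
	}
}
```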
	I0916 10:23:58.442873   12653 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:58.442897   12653 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:58.534045   12653 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:58.540468   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:58.540496   12653 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:58.642810   12653 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:58.642885   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:58.728521   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:58.728554   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:58.840472   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:58.921026   12653 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:58.921059   12653 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:58.936525   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:58.936552   12653 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:58.939212   12653 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-191972" context rescaled to 1 replicas
	I0916 10:23:59.131614   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:59.224079   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:59.224104   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:59.230203   12653 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:59.230238   12653 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:59.423686   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:59.430144   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:59.430176   12653 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:59.433784   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:59.433810   12653 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:59.542608   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:59.542635   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:59.630644   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:59.630734   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:59.840282   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:59.927613   12653 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:59.927705   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:24:00.030859   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:24:00.030936   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:24:00.034479   12653 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:24:00.034549   12653 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:24:00.038488   12653 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-2l862" not found
	I0916 10:24:00.038522   12653 pod_ready.go:82] duration metric: took 1.504385632s for pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace to be "Ready" ...
	E0916 10:24:00.038535   12653 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-2l862" not found
	I0916 10:24:00.038552   12653 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:00.333635   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:24:00.339910   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:24:00.339994   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:24:00.627234   12653 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:24:00.627262   12653 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:24:00.929780   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:24:00.929809   12653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:24:01.128973   12653 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:24:01.129062   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:24:01.334031   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:24:01.334116   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:24:01.525220   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:24:02.022039   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:24:02.022114   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:24:02.136463   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:02.532736   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:24:02.532829   12653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:24:02.738986   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
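(Editor's note: the lines above show the addon staging pattern: each manifest is first scp'd to /etc/kubernetes/addons on the node, then a whole group is applied in a single kubectl invocation with repeated -f flags under the cluster's kubeconfig. Below is a minimal sketch of that apply step, assuming local execution rather than minikube's SSH runner; the applyAddonGroup helper is illustrative, not minikube's actual API.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyAddonGroup applies a set of staged manifests in one kubectl call,
	// mirroring the "kubectl apply -f a.yaml -f b.yaml ..." lines above.
	func applyAddonGroup(kubectl, kubeconfig string, manifests []string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command(kubectl, args...)
		cmd.Env = append(cmd.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply failed: %v\n%s", err, out)
		}
		return nil
	}

	func main() {
		_ = applyAddonGroup(
			"/var/lib/minikube/binaries/v1.31.1/kubectl",
			"/var/lib/minikube/kubeconfig",
			[]string{
				"/etc/kubernetes/addons/registry-rc.yaml",
				"/etc/kubernetes/addons/registry-svc.yaml",
				"/etc/kubernetes/addons/registry-proxy.yaml",
			},
		)
	}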
	I0916 10:24:04.426813   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:24:04.426903   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:24:04.456284   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:24:04.624938   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:04.638370   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.008571899s)
	I0916 10:24:04.638414   12653 addons.go:475] Verifying addon ingress=true in "addons-191972"
	I0916 10:24:04.638488   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.908226437s)
	I0916 10:24:04.638570   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.908717103s)
	I0916 10:24:04.638623   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.896188028s)
	I0916 10:24:04.638699   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.81369606s)
	I0916 10:24:04.638718   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.795359026s)
	I0916 10:24:04.638742   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.716592394s)
	I0916 10:24:04.641681   12653 out.go:177] * Verifying ingress addon...
	I0916 10:24:04.644857   12653 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0916 10:24:04.722084   12653 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0916 10:24:04.723574   12653 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:24:04.723598   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
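(Editor's note: the kapi.go lines that dominate the rest of this log poll pods by label selector until every match leaves Pending; the state stays "Pending: [<nil>]" until containers come up. A hedged client-go sketch of that polling pattern follows; waitForLabel is an illustrative name, and it checks only the pod phase, whereas minikube's kapi also tracks readiness conditions.)

	package waiter

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForLabel polls pods matching selector in ns until all are Running,
	// mirroring the repeated kapi.go "waiting for pod ..." lines in this log.
	func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}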
	I0916 10:24:04.841083   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:24:04.932849   12653 addons.go:234] Setting addon gcp-auth=true in "addons-191972"
	I0916 10:24:04.932903   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:24:04.933372   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:24:04.957393   12653 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:24:04.957464   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:24:04.975728   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
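(Editor's note: the gcp-auth step above shows how minikube reaches the node: it resolves the container's forwarded SSH port with a docker inspect Go template, then opens an SSH session as the docker user on 127.0.0.1. A small sketch of the port lookup, shelling out to docker with the same template string seen in the cli_runner lines; the hostSSHPort helper is illustrative.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort returns the host port mapped to the container's 22/tcp,
	// using the same Go template as the cli_runner.go lines above.
	func hostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("addons-191972")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh port:", port) // e.g. 32768 in the run above
	}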
	I0916 10:24:05.150342   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.650366   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.149809   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.649391   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.834167   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.494119031s)
	I0916 10:24:06.834259   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.993750099s)
	I0916 10:24:06.834355   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.702687859s)
	I0916 10:24:06.834379   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.410662864s)
	I0916 10:24:06.834381   12653 addons.go:475] Verifying addon metrics-server=true in "addons-191972"
	I0916 10:24:06.834394   12653 addons.go:475] Verifying addon registry=true in "addons-191972"
	I0916 10:24:06.834447   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.994082306s)
	I0916 10:24:06.834595   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.500877662s)
	W0916 10:24:06.834635   12653 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:24:06.834660   12653 retry.go:31] will retry after 180.492463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
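(Editor's note: the "no matches for kind \"VolumeSnapshotClass\"" failure above is the classic CRD ordering race: the VolumeSnapshotClass object is submitted in the same apply batch as the CRD that defines it, before the API server has registered the new type. As retry.go logs, minikube simply re-runs the apply after a short backoff, and at 10:24:07 below it retries with kubectl apply --force and succeeds. A minimal sketch of that retry-with-backoff pattern, assuming an arbitrary apply function; retryApply is an illustrative name, not minikube's retry package API.)

	package main

	import (
		"fmt"
		"time"
	)

	// retryApply re-runs apply with a growing delay, matching the
	// "will retry after 180.492463ms" behaviour logged above.
	func retryApply(apply func() error, attempts int, initial time.Duration) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
		return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
	}

	func main() {
		n := 0
		err := retryApply(func() error {
			n++
			if n < 3 { // fails until the CRDs have been registered
				return fmt.Errorf("no matches for kind \"VolumeSnapshotClass\"")
			}
			return nil
		}, 5, 180*time.Millisecond)
		fmt.Println("result:", err)
	}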
	I0916 10:24:06.834694   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.309367322s)
	I0916 10:24:06.836029   12653 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-191972 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:24:06.836032   12653 out.go:177] * Verifying registry addon...
	I0916 10:24:06.838577   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:24:06.842659   12653 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:24:06.842681   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.016329   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:24:07.122253   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:07.229433   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.346049   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.428384   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.689342475s)
	I0916 10:24:07.428423   12653 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-191972"
	I0916 10:24:07.428557   12653 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.471115449s)
	I0916 10:24:07.430137   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:24:07.430140   12653 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:24:07.432142   12653 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:24:07.433350   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:24:07.433452   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:24:07.433472   12653 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:24:07.446890   12653 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:24:07.446929   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.523198   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:24:07.523247   12653 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:24:07.543809   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:07.543877   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:24:07.627288   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:07.649744   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.842799   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.943700   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.149515   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.343117   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.438263   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.651360   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.739263   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.722876496s)
	I0916 10:24:08.739377   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.111993041s)
	I0916 10:24:08.740565   12653 addons.go:475] Verifying addon gcp-auth=true in "addons-191972"
	I0916 10:24:08.742658   12653 out.go:177] * Verifying gcp-auth addon...
	I0916 10:24:08.744959   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:24:08.752275   12653 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:08.842486   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.937942   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.148485   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.342745   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.444884   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.544117   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:09.649057   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.850158   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.951607   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.149384   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.342403   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.437953   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.648926   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.842555   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.938628   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.149265   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.341824   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.438269   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.544664   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:11.649663   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.842706   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.938382   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.149747   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.341485   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.438115   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.649444   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.842417   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.937808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.149247   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.342184   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.443397   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.544742   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:13.649342   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.842433   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.938156   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.148884   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.342230   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.437378   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.648929   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.841404   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.938373   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.148947   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.342062   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:15.437442   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.544833   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:15.649729   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.875330   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.063181   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.148410   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.342704   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.437759   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.649599   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.842196   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.937322   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.148973   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.342240   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.438331   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.649287   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.842346   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.937786   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.044459   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:18.148462   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.342098   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.438245   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.650618   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.842115   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.937393   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.148210   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.342331   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.437753   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.649206   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.841659   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.937929   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.149095   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.341559   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.437389   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.543697   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:20.649389   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.841724   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.939911   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.148803   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.341867   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.437743   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.649220   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.841636   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.937733   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.148853   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.341623   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.438291   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.544155   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:22.648789   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.842117   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.937569   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.148605   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.342228   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.437946   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.648725   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.848611   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.937702   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.148830   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.341472   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.437746   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.648857   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.841524   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.937579   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.043875   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:25.148986   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.341729   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.438614   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.648859   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.842571   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.937660   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.148067   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.342525   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.442495   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.649368   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.841986   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.937210   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.044290   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:27.148266   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.341925   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.437369   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.648710   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.842271   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.937289   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.149389   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.341712   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.437988   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.649507   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.841935   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.937651   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.148305   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.341758   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.437230   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.544648   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:29.648789   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.842453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.937780   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.149144   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.341971   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.436935   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.648826   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.842241   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.937301   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.148532   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.342364   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.438028   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.649021   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.842529   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.938084   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.044452   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:32.148477   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.342165   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.437629   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.649007   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.841446   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.937583   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.148965   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.341801   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.437144   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.649484   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.842344   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.937348   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.148522   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.342404   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.438126   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.543640   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:34.649404   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.842417   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.937940   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.149191   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.341955   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.437296   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.649499   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.841951   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.937835   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.148878   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.342396   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.437451   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.648935   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.841429   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.937515   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.043652   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:37.148879   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.341650   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.438917   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.648863   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.843665   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.937755   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.148476   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.342129   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.437617   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.648850   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.842096   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.937210   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.044295   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:39.148546   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.342070   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.437434   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.649394   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.850992   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.937068   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.148412   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.342026   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.438818   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.648424   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.842673   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.937959   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.149077   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.341573   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.437823   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.544866   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:41.649385   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.842400   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.942736   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.148726   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.342124   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.438550   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.649404   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.841927   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.937808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.149523   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.341957   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.437318   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.545247   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:43.648618   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.842970   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.938236   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.149170   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.342180   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.437399   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.649533   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.842942   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.937846   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.149581   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.342185   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.437873   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.649109   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.842031   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.937050   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.043865   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:46.149131   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.342272   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.437555   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.649645   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.850195   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.951731   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.044952   12653 pod_ready.go:93] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.044977   12653 pod_ready.go:82] duration metric: took 47.006412913s for pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.044991   12653 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.048830   12653 pod_ready.go:93] pod "etcd-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.048847   12653 pod_ready.go:82] duration metric: took 3.848159ms for pod "etcd-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.048861   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.052536   12653 pod_ready.go:93] pod "kube-apiserver-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.052558   12653 pod_ready.go:82] duration metric: took 3.691187ms for pod "kube-apiserver-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.052566   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.056167   12653 pod_ready.go:93] pod "kube-controller-manager-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.056192   12653 pod_ready.go:82] duration metric: took 3.620465ms for pod "kube-controller-manager-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.056201   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fnr7f" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.060021   12653 pod_ready.go:93] pod "kube-proxy-fnr7f" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.060038   12653 pod_ready.go:82] duration metric: took 3.830746ms for pod "kube-proxy-fnr7f" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.060046   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.149672   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.342533   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.437808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.441161   12653 pod_ready.go:93] pod "kube-scheduler-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.441181   12653 pod_ready.go:82] duration metric: took 381.129532ms for pod "kube-scheduler-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.441188   12653 pod_ready.go:39] duration metric: took 48.999654984s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:24:47.441205   12653 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:24:47.441254   12653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:24:47.453909   12653 api_server.go:72] duration metric: took 50.345260117s to wait for apiserver process to appear ...
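A note on the pgrep pattern just above: -f matches against the full command line, -x requires that line to match the pattern exactly, and -n returns only the newest matching process, so the probe succeeds only once a kube-apiserver process launched for this minikube cluster exists. A minimal stand-alone sketch of the same check on the node:

    # prints the newest matching PID and exits 0 once the apiserver process is up
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "kube-apiserver is running"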
	I0916 10:24:47.453935   12653 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:24:47.453960   12653 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:24:47.458673   12653 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:24:47.459648   12653 api_server.go:141] control plane version: v1.31.1
	I0916 10:24:47.459673   12653 api_server.go:131] duration metric: took 5.729621ms to wait for apiserver health ...
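The healthz wait above is a plain HTTPS GET; the endpoint answers 200 with the literal body "ok" once the control plane is healthy. A hedged reproduction from the host (endpoint taken from the log; -k skips certificate verification, since the profile's CA bundle is not assumed here):

    # probe the apiserver health endpoint; a healthy control plane prints "ok"
    curl -k https://192.168.49.2:8443/healthz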
	I0916 10:24:47.459683   12653 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:24:47.648237   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.648583   12653 system_pods.go:59] 19 kube-system pods found
	I0916 10:24:47.648620   12653 system_pods.go:61] "coredns-7c65d6cfc9-9rccl" [f2ffddc5-3995-4d5a-8059-297b3859f9c5] Running
	I0916 10:24:47.648634   12653 system_pods.go:61] "csi-hostpath-attacher-0" [07b91be6-2f2b-441c-80ee-f832e9ee2e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:24:47.648642   12653 system_pods.go:61] "csi-hostpath-resizer-0" [311c306b-accd-4242-8415-81199a4ce054] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:24:47.648653   12653 system_pods.go:61] "csi-hostpathplugin-qdnbn" [8f5408a2-a7eb-4f76-8ce0-b885fb4b47e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:24:47.648667   12653 system_pods.go:61] "etcd-addons-191972" [81af20b7-9b19-4723-9b92-0ded3d775cd3] Running
	I0916 10:24:47.648673   12653 system_pods.go:61] "kindnet-rxp8k" [02b143c0-bbb4-4f94-8448-9c3c4f248a87] Running
	I0916 10:24:47.648678   12653 system_pods.go:61] "kube-apiserver-addons-191972" [1aabf917-f381-4e69-8524-954958c99b7e] Running
	I0916 10:24:47.648684   12653 system_pods.go:61] "kube-controller-manager-addons-191972" [ee796e67-bd06-4d93-9d20-aabcbb395ba2] Running
	I0916 10:24:47.648690   12653 system_pods.go:61] "kube-ingress-dns-minikube" [0f28fa0b-84f1-4215-aa41-4596ab4cef8b] Running
	I0916 10:24:47.648696   12653 system_pods.go:61] "kube-proxy-fnr7f" [a9e53f94-30ad-4178-b9e2-3ba4354a5adf] Running
	I0916 10:24:47.648700   12653 system_pods.go:61] "kube-scheduler-addons-191972" [32bbb72d-e291-4c84-8709-6066cf10b0cf] Running
	I0916 10:24:47.648709   12653 system_pods.go:61] "metrics-server-84c5f94fbc-s7654" [14e280ea-8ba8-4805-844c-aeff8fb18ce0] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:24:47.648719   12653 system_pods.go:61] "nvidia-device-plugin-daemonset-vpb85" [14ca6c72-b73b-4254-910a-0b876ca73f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:24:47.648732   12653 system_pods.go:61] "registry-66c9cd494c-vsbgv" [bd70fcec-e032-4dbd-902c-a139ac179bbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:24:47.648740   12653 system_pods.go:61] "registry-proxy-6vsnj" [05d6014b-9706-4d7a-a816-dbc7f557cd15] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:24:47.648749   12653 system_pods.go:61] "snapshot-controller-56fcc65765-4g9w6" [ad55cf42-58df-4d10-aae8-7626a86e7e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:47.648760   12653 system_pods.go:61] "snapshot-controller-56fcc65765-htkmc" [70a5c810-f514-4b23-b3a3-7a4cc46c35e2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:47.648766   12653 system_pods.go:61] "storage-provisioner" [ca9a25b7-8324-4ea1-a525-24f8c308baea] Running
	I0916 10:24:47.648777   12653 system_pods.go:61] "tiller-deploy-b48cc5f79-ddkxz" [dfe534c4-9e29-4907-b8cc-1dd12fc52f45] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:24:47.648789   12653 system_pods.go:74] duration metric: took 189.097544ms to wait for pod list to return data ...
	I0916 10:24:47.648801   12653 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:24:47.841018   12653 default_sa.go:45] found service account: "default"
	I0916 10:24:47.841043   12653 default_sa.go:55] duration metric: took 192.233696ms for default service account to be created ...
	I0916 10:24:47.841053   12653 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:24:47.841394   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.937402   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.049475   12653 system_pods.go:86] 19 kube-system pods found
	I0916 10:24:48.049509   12653 system_pods.go:89] "coredns-7c65d6cfc9-9rccl" [f2ffddc5-3995-4d5a-8059-297b3859f9c5] Running
	I0916 10:24:48.049523   12653 system_pods.go:89] "csi-hostpath-attacher-0" [07b91be6-2f2b-441c-80ee-f832e9ee2e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:24:48.049533   12653 system_pods.go:89] "csi-hostpath-resizer-0" [311c306b-accd-4242-8415-81199a4ce054] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:24:48.049541   12653 system_pods.go:89] "csi-hostpathplugin-qdnbn" [8f5408a2-a7eb-4f76-8ce0-b885fb4b47e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:24:48.049546   12653 system_pods.go:89] "etcd-addons-191972" [81af20b7-9b19-4723-9b92-0ded3d775cd3] Running
	I0916 10:24:48.049550   12653 system_pods.go:89] "kindnet-rxp8k" [02b143c0-bbb4-4f94-8448-9c3c4f248a87] Running
	I0916 10:24:48.049554   12653 system_pods.go:89] "kube-apiserver-addons-191972" [1aabf917-f381-4e69-8524-954958c99b7e] Running
	I0916 10:24:48.049560   12653 system_pods.go:89] "kube-controller-manager-addons-191972" [ee796e67-bd06-4d93-9d20-aabcbb395ba2] Running
	I0916 10:24:48.049569   12653 system_pods.go:89] "kube-ingress-dns-minikube" [0f28fa0b-84f1-4215-aa41-4596ab4cef8b] Running
	I0916 10:24:48.049572   12653 system_pods.go:89] "kube-proxy-fnr7f" [a9e53f94-30ad-4178-b9e2-3ba4354a5adf] Running
	I0916 10:24:48.049576   12653 system_pods.go:89] "kube-scheduler-addons-191972" [32bbb72d-e291-4c84-8709-6066cf10b0cf] Running
	I0916 10:24:48.049579   12653 system_pods.go:89] "metrics-server-84c5f94fbc-s7654" [14e280ea-8ba8-4805-844c-aeff8fb18ce0] Running
	I0916 10:24:48.049587   12653 system_pods.go:89] "nvidia-device-plugin-daemonset-vpb85" [14ca6c72-b73b-4254-910a-0b876ca73f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:24:48.049595   12653 system_pods.go:89] "registry-66c9cd494c-vsbgv" [bd70fcec-e032-4dbd-902c-a139ac179bbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:24:48.049600   12653 system_pods.go:89] "registry-proxy-6vsnj" [05d6014b-9706-4d7a-a816-dbc7f557cd15] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:24:48.049605   12653 system_pods.go:89] "snapshot-controller-56fcc65765-4g9w6" [ad55cf42-58df-4d10-aae8-7626a86e7e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:48.049613   12653 system_pods.go:89] "snapshot-controller-56fcc65765-htkmc" [70a5c810-f514-4b23-b3a3-7a4cc46c35e2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:48.049618   12653 system_pods.go:89] "storage-provisioner" [ca9a25b7-8324-4ea1-a525-24f8c308baea] Running
	I0916 10:24:48.049625   12653 system_pods.go:89] "tiller-deploy-b48cc5f79-ddkxz" [dfe534c4-9e29-4907-b8cc-1dd12fc52f45] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:24:48.049634   12653 system_pods.go:126] duration metric: took 208.573497ms to wait for k8s-apps to be running ...
	I0916 10:24:48.049644   12653 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:24:48.049682   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:24:48.060846   12653 system_svc.go:56] duration metric: took 11.19263ms WaitForService to wait for kubelet
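The kubelet check just above relies on systemctl's exit status rather than its output: is-active --quiet prints nothing and exits 0 only while the unit is active. A minimal sketch, addressing the unit by its plain name:

    # exit status 0 means the kubelet unit is currently active
    if sudo systemctl is-active --quiet kubelet; then
        echo "kubelet is running"
    fi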
	I0916 10:24:48.060871   12653 kubeadm.go:582] duration metric: took 50.952228588s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:24:48.060890   12653 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:24:48.148219   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.242671   12653 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:24:48.242705   12653 node_conditions.go:123] node cpu capacity is 8
	I0916 10:24:48.242718   12653 node_conditions.go:105] duration metric: took 181.823571ms to run NodePressure ...
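The NodePressure step reads the node's capacity and pressure conditions from the API; outside the test harness the same data is visible with kubectl (node name taken from the log; the jsonpath fields are an assumption about what is of interest):

    # capacity figures matching the log: ephemeral-storage, cpu, memory
    kubectl get node addons-191972 -o jsonpath='{.status.capacity}'
    # MemoryPressure / DiskPressure / PIDPressure conditions
    kubectl describe node addons-191972 | grep -A 6 'Conditions:'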
	I0916 10:24:48.242730   12653 start.go:241] waiting for startup goroutines ...
	[... kapi.go:96 polling repeated roughly every 100-500ms from 10:24:48 to 10:25:00, pods matching app.kubernetes.io/name=ingress-nginx, kubernetes.io/minikube-addons=registry, and kubernetes.io/minikube-addons=csi-hostpath-driver all still Pending ...]
	I0916 10:25:00.342965   12653 kapi.go:107] duration metric: took 53.504381408s to wait for kubernetes.io/minikube-addons=registry ...
	[... kapi.go:96 polling repeated from 10:25:00 to 10:25:27, app.kubernetes.io/name=ingress-nginx and kubernetes.io/minikube-addons=csi-hostpath-driver still Pending ...]
	I0916 10:25:27.648728   12653 kapi.go:107] duration metric: took 1m23.003864669s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:25:27.938153   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.438461   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.939228   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.438060   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.937952   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.438284   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.938383   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.437781   12653 kapi.go:107] duration metric: took 1m24.004430138s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:26:53.748019   12653 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:26:53.748042   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... kapi.go:96 polling repeated every ~500ms from 10:26:54 to 10:27:29, kubernetes.io/minikube-addons=gcp-auth still Pending ...]
	I0916 10:27:29.748597   12653 kapi.go:107] duration metric: took 3m21.003635946s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0916 10:27:29.750701   12653 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-191972 cluster.
	I0916 10:27:29.752412   12653 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:27:29.754028   12653 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:27:29.756074   12653 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner-rancher, volcano, helm-tiller, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0916 10:27:29.757930   12653 addons.go:510] duration metric: took 3m32.649258168s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner-rancher volcano helm-tiller metrics-server inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0916 10:27:29.758012   12653 start.go:246] waiting for cluster config update ...
	I0916 10:27:29.758039   12653 start.go:255] writing updated cluster config ...
	I0916 10:27:29.758383   12653 ssh_runner.go:195] Run: rm -f paused
	I0916 10:27:29.765351   12653 out.go:177] * Done! kubectl is now configured to use "addons-191972" cluster and "default" namespace by default
	E0916 10:27:29.767004   12653 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
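Following up on the gcp-auth hints printed above: the opt-out is a pod label. A sketch of excluding a single pod (label key from the message above; the value is an assumption, since only the key is called out):

    # keep mounted GCP credentials out of this one pod
    kubectl label pod my-pod gcp-auth-skip-secret=true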
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	cfade64badb92       db2fc13d44d50       6 minutes ago       Running             gcp-auth                                 0                   99d0fe27850b3       gcp-auth-89d5ffd79-6r2td
	df81f1fc28725       a876393c9504b       7 minutes ago       Running             admission                                0                   0aa4b1d0acb5a       volcano-admission-77d7d48b68-rcfsk
	9dd4a83ba6d70       6041e92ec449f       7 minutes ago       Running             volcano-scheduler                        1                   9564c1c96bcee       volcano-scheduler-576bc46687-jtz7f
	72101e37ab665       738351fd438f0       8 minutes ago       Running             csi-snapshotter                          0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	da8f6a34306e1       931dbfd16f87c       8 minutes ago       Running             csi-provisioner                          0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	1649420a66573       e899260153aed       8 minutes ago       Running             liveness-probe                           0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	e0e474b6d95e5       e255e073c508c       8 minutes ago       Running             hostpath                                 0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	d5fc898fd874b       a80c8fd6e5229       8 minutes ago       Running             controller                               0                   30db636a12234       ingress-nginx-controller-bc57996ff-lpb7q
	06d43e898075b       88ef14a257f42       8 minutes ago       Running             node-driver-registrar                    0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	39c5183f27011       ce263a8653f9c       8 minutes ago       Exited              patch                                    0                   589d98ccee909       ingress-nginx-admission-patch-8f8nz
	a8bb0086c52b5       6041e92ec449f       8 minutes ago       Exited              volcano-scheduler                        0                   9564c1c96bcee       volcano-scheduler-576bc46687-jtz7f
	ddf31d8b68bc1       a876393c9504b       9 minutes ago       Exited              main                                     0                   b49978f431ab4       volcano-admission-init-57gk4
	06cf11b7a83f9       ce263a8653f9c       9 minutes ago       Exited              create                                   0                   6301c91177942       ingress-nginx-admission-create-5rjsx
	1cd468b4437bd       a1ed5895ba635       9 minutes ago       Running             csi-external-health-monitor-controller   0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	79266075c79ff       59cbb42146a37       9 minutes ago       Running             csi-attacher                             0                   a4c401b363464       csi-hostpath-attacher-0
	c65d9de60c2d0       aa61ee9c70bc4       9 minutes ago       Running             volume-snapshot-controller               0                   dba5883c9dc9b       snapshot-controller-56fcc65765-4g9w6
	0c025c1b7dd4c       19a639eda60f0       9 minutes ago       Running             csi-resizer                              0                   176615116e8de       csi-hostpath-resizer-0
	c7d7b6bb58927       96e410111f023       9 minutes ago       Running             volcano-controllers                      0                   84cb34271a61b       volcano-controllers-56675bb4d5-hdpdb
	6819af68287c4       aa61ee9c70bc4       9 minutes ago       Running             volume-snapshot-controller               0                   bb404cbffba4e       snapshot-controller-56fcc65765-htkmc
	576d6c9483015       48d9cfaaf3904       9 minutes ago       Running             metrics-server                           0                   debbe4f662687       metrics-server-84c5f94fbc-s7654
	3c2ba113f3a92       c69fa2e9cbf5f       9 minutes ago       Running             coredns                                  0                   e557eec597dbb       coredns-7c65d6cfc9-9rccl
	74825d98cba88       e16d1e3a10667       9 minutes ago       Running             local-path-provisioner                   0                   1e611781a41cb       local-path-provisioner-86d989889c-w6mf9
	dfe8c0b03e5c3       30dd67412fdea       10 minutes ago      Running             minikube-ingress-dns                     0                   6682d7fdc0949       kube-ingress-dns-minikube
	62a4b8c25074d       6e38f40d628db       10 minutes ago      Running             storage-provisioner                      0                   54247c11bac23       storage-provisioner
	4c4482bfa98cf       12968670680f4       10 minutes ago      Running             kindnet-cni                              0                   48c4106711b6e       kindnet-rxp8k
	d9d3353287790       60c005f310ff3       10 minutes ago      Running             kube-proxy                               0                   b70e27ed4bc15       kube-proxy-fnr7f
	6e4dbd39a8ef5       175ffd71cce3d       10 minutes ago      Running             kube-controller-manager                  0                   f593f7267aeda       kube-controller-manager-addons-191972
	c76b948fbd083       6bab7719df100       10 minutes ago      Running             kube-apiserver                           0                   a7eb33c199dbc       kube-apiserver-addons-191972
	0539bdd901d4a       9aa1fad941575       10 minutes ago      Running             kube-scheduler                           0                   3aba8d618e3fa       kube-scheduler-addons-191972
	92c65a04535dd       2e96e5913fc06       10 minutes ago      Running             etcd                                     0                   84fc0865b25fe       etcd-addons-191972
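
The table above is CRI-level container state, which is why the finished admission create/patch jobs show as Exited alongside the Running addon containers. Assuming it was gathered with crictl, as minikube debug output typically is, an equivalent manual capture would be:

    # list all containers on the node, including exited ones
    minikube -p addons-191972 ssh -- sudo crictl ps -a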
	
	
	==> containerd <==
	Sep 16 10:33:51 addons-191972 containerd[858]: time="2024-09-16T10:33:51.698393350Z" level=info msg="RemovePodSandbox \"d4feba9de8c251ddcedb6bf5e748a13f7a0bf0cb99f6be81820752487f60aa7e\" returns successfully"
	Sep 16 10:33:51 addons-191972 containerd[858]: time="2024-09-16T10:33:51.698762619Z" level=info msg="StopPodSandbox for \"e900b36241c1f65531303fa71becfdd0cc9f3b3a9824c2167224d1221a60bba1\""
	Sep 16 10:33:51 addons-191972 containerd[858]: time="2024-09-16T10:33:51.706017983Z" level=info msg="TearDown network for sandbox \"e900b36241c1f65531303fa71becfdd0cc9f3b3a9824c2167224d1221a60bba1\" successfully"
	Sep 16 10:33:51 addons-191972 containerd[858]: time="2024-09-16T10:33:51.706044620Z" level=info msg="StopPodSandbox for \"e900b36241c1f65531303fa71becfdd0cc9f3b3a9824c2167224d1221a60bba1\" returns successfully"
	Sep 16 10:33:51 addons-191972 containerd[858]: time="2024-09-16T10:33:51.706392734Z" level=info msg="RemovePodSandbox for \"e900b36241c1f65531303fa71becfdd0cc9f3b3a9824c2167224d1221a60bba1\""
	Sep 16 10:33:51 addons-191972 containerd[858]: time="2024-09-16T10:33:51.706430748Z" level=info msg="Forcibly stopping sandbox \"e900b36241c1f65531303fa71becfdd0cc9f3b3a9824c2167224d1221a60bba1\""
	Sep 16 10:33:51 addons-191972 containerd[858]: time="2024-09-16T10:33:51.713453798Z" level=info msg="TearDown network for sandbox \"e900b36241c1f65531303fa71becfdd0cc9f3b3a9824c2167224d1221a60bba1\" successfully"
	Sep 16 10:33:51 addons-191972 containerd[858]: time="2024-09-16T10:33:51.718641911Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e900b36241c1f65531303fa71becfdd0cc9f3b3a9824c2167224d1221a60bba1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 16 10:33:51 addons-191972 containerd[858]: time="2024-09-16T10:33:51.718739801Z" level=info msg="RemovePodSandbox \"e900b36241c1f65531303fa71becfdd0cc9f3b3a9824c2167224d1221a60bba1\" returns successfully"
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.893088028Z" level=info msg="StopContainer for \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\" with timeout 30 (s)"
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.893678641Z" level=info msg="Stop container \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\" with signal terminated"
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.949783990Z" level=info msg="shim disconnected" id=89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f namespace=k8s.io
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.949848132Z" level=warning msg="cleaning up after shim disconnected" id=89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f namespace=k8s.io
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.949861213Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.966111874Z" level=info msg="StopContainer for \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\" returns successfully"
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.966683146Z" level=info msg="StopPodSandbox for \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\""
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.966753968Z" level=info msg="Container to stop \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.020680304Z" level=info msg="shim disconnected" id=79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec namespace=k8s.io
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.020753568Z" level=warning msg="cleaning up after shim disconnected" id=79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec namespace=k8s.io
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.020766147Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.072500899Z" level=info msg="TearDown network for sandbox \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\" successfully"
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.072542928Z" level=info msg="StopPodSandbox for \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\" returns successfully"
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.555396151Z" level=info msg="RemoveContainer for \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\""
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.564554463Z" level=info msg="RemoveContainer for \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\" returns successfully"
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.565133715Z" level=error msg="ContainerStatus for \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\": not found"
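	
	The tail of this log shows the normal CRI teardown ordering: StopContainer, shim disconnect, StopPodSandbox with network teardown, then RemoveContainer. The final ContainerStatus NotFound error is the kubelet querying a container it has just removed (see the kubelet log below at 10:34:14), a benign ordering race rather than a runtime failure. A minimal way to cross-check runtime state, assuming crictl is available on the node (e.g. via minikube ssh):
	
	    # all containers known to containerd, including exited ones
	    $ sudo crictl ps -a
	    # pod sandboxes still registered with the runtime
	    $ sudo crictl pods
	    # inspecting an already-removed ID returns the same NotFound seen above
	    $ sudo crictl inspect <container-id>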
	
	
	==> coredns [3c2ba113f3a928b6de94c4ca0bf607534ff798f3d85ffd2a7685ed6dacc00744] <==
	[INFO] 10.244.0.3:34722 - 16813 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126799s
	[INFO] 10.244.0.3:47807 - 19593 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078163s
	[INFO] 10.244.0.3:47807 - 48005 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012131s
	[INFO] 10.244.0.3:52137 - 389 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004304691s
	[INFO] 10.244.0.3:52137 - 40577 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004777432s
	[INFO] 10.244.0.3:37044 - 23366 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003875752s
	[INFO] 10.244.0.3:37044 - 14153 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004520489s
	[INFO] 10.244.0.3:37775 - 29429 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003806717s
	[INFO] 10.244.0.3:37775 - 41674 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003872738s
	[INFO] 10.244.0.3:58704 - 7476 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090446s
	[INFO] 10.244.0.3:58704 - 1849 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000134094s
	[INFO] 10.244.0.25:38825 - 37363 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216144s
	[INFO] 10.244.0.25:38931 - 39307 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000245831s
	[INFO] 10.244.0.25:50024 - 16483 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164924s
	[INFO] 10.244.0.25:42236 - 32299 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000196632s
	[INFO] 10.244.0.25:49331 - 38072 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114124s
	[INFO] 10.244.0.25:36861 - 61813 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164666s
	[INFO] 10.244.0.25:33081 - 5019 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.00927584s
	[INFO] 10.244.0.25:32825 - 10257 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.009718235s
	[INFO] 10.244.0.25:50215 - 44243 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007980557s
	[INFO] 10.244.0.25:46089 - 36172 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008374403s
	[INFO] 10.244.0.25:60708 - 60516 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00523636s
	[INFO] 10.244.0.25:53932 - 3930 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005436837s
	[INFO] 10.244.0.25:33968 - 30856 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002295196s
	[INFO] 10.244.0.25:51453 - 49493 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002387298s
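	
	The paired NXDOMAIN/NOERROR entries are the pod resolver walking the cluster search path (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE-supplied domains) before the bare storage.googleapis.com name resolves; with the default ndots:5 this expansion is expected behaviour, not a CoreDNS fault. A sketch to reproduce it from a pod, assuming the image ships nslookup (the pod name is illustrative):
	
	    # the search domains and ndots:5 option that drive the expansion
	    $ kubectl exec <some-pod> -- cat /etc/resolv.conf
	    # replays the same NXDOMAIN chain before the final answer
	    $ kubectl exec <some-pod> -- nslookup storage.googleapis.com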
	
	
	==> describe nodes <==
	Name:               addons-191972
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-191972
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-191972
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-191972
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-191972"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-191972
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:34:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:32:52 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:32:52 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:32:52 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:32:52 +0000   Mon, 16 Sep 2024 10:23:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-191972
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 0263fbb37d3545b09ff38a7b68907e4c
	  System UUID:                45c87f39-d597-4b0c-a097-439ebdb945ff
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-89d5ffd79-6r2td                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m22s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-lpb7q    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-9rccl                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 csi-hostpathplugin-qdnbn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-addons-191972                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-rxp8k                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-addons-191972                250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-191972       200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-fnr7f                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-191972                100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-84c5f94fbc-s7654             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 snapshot-controller-56fcc65765-4g9w6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 snapshot-controller-56fcc65765-htkmc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  local-path-storage          local-path-provisioner-86d989889c-w6mf9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  volcano-system              volcano-admission-77d7d48b68-rcfsk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  volcano-system              volcano-controllers-56675bb4d5-hdpdb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  volcano-system              volcano-scheduler-576bc46687-jtz7f          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 10m   kube-proxy       
	  Normal   Starting                 10m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  10m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m   kubelet          Node addons-191972 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m   kubelet          Node addons-191972 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m   kubelet          Node addons-191972 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m   node-controller  Node addons-191972 event: Registered Node addons-191972 in Controller
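	
	This block is a capture of the node description; the Allocated resources totals are simply the sums of the per-pod requests and limits listed above it, so limits over 100 percent would indicate overcommit rather than an error. To regenerate the same view against a live cluster:
	
	    $ kubectl describe node addons-191972
	    # actual usage rather than requests (requires a working metrics-server)
	    $ kubectl top node addons-191972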
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [92c65a04535ddef6879f2eb4260843c6961d1fb2395f595b3a5665263c562002] <==
	{"level":"info","ts":"2024-09-16T10:23:47.262322Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:23:47.262576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:24:15.873285Z","caller":"traceutil/trace.go:171","msg":"trace[187537689] transaction","detail":"{read_only:false; response_revision:986; number_of_response:1; }","duration":"119.841789ms","start":"2024-09-16T10:24:15.753419Z","end":"2024-09-16T10:24:15.873261Z","steps":["trace[187537689] 'process raft request'  (duration: 119.705144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:16.060589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.178284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:24:16.060680Z","caller":"traceutil/trace.go:171","msg":"trace[2127996318] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:986; }","duration":"125.313412ms","start":"2024-09-16T10:24:15.935346Z","end":"2024-09-16T10:24:16.060659Z","steps":["trace[2127996318] 'range keys from in-memory index tree'  (duration: 125.097316ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:07.796336Z","caller":"traceutil/trace.go:171","msg":"trace[28147226] transaction","detail":"{read_only:false; response_revision:1242; number_of_response:1; }","duration":"128.826483ms","start":"2024-09-16T10:25:07.667485Z","end":"2024-09-16T10:25:07.796311Z","steps":["trace[28147226] 'process raft request'  (duration: 41.106171ms)","trace[28147226] 'compare'  (duration: 87.53434ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.424319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.488522ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031931970271159 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" mod_revision:812 > success:<request_put:<key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" value_size:4029 >> failure:<request_range:<key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T10:25:21.424401Z","caller":"traceutil/trace.go:171","msg":"trace[1168470588] linearizableReadLoop","detail":"{readStateIndex:1335; appliedIndex:1334; }","duration":"177.395065ms","start":"2024-09-16T10:25:21.246995Z","end":"2024-09-16T10:25:21.424390Z","steps":["trace[1168470588] 'read index received'  (duration: 48.427907ms)","trace[1168470588] 'applied index is now lower than readState.Index'  (duration: 128.965162ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.424444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.446761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.424466Z","caller":"traceutil/trace.go:171","msg":"trace[1171179904] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1302; }","duration":"177.469291ms","start":"2024-09-16T10:25:21.246991Z","end":"2024-09-16T10:25:21.424460Z","steps":["trace[1171179904] 'agreement among raft nodes before linearized reading'  (duration: 177.429463ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:21.424486Z","caller":"traceutil/trace.go:171","msg":"trace[1930200040] transaction","detail":"{read_only:false; response_revision:1302; number_of_response:1; }","duration":"247.357795ms","start":"2024-09-16T10:25:21.177107Z","end":"2024-09-16T10:25:21.424464Z","steps":["trace[1930200040] 'process raft request'  (duration: 118.297085ms)","trace[1930200040] 'compare'  (duration: 128.26971ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.652910Z","caller":"traceutil/trace.go:171","msg":"trace[1856019889] linearizableReadLoop","detail":"{readStateIndex:1338; appliedIndex:1335; }","duration":"218.326846ms","start":"2024-09-16T10:25:21.434567Z","end":"2024-09-16T10:25:21.652894Z","steps":["trace[1856019889] 'read index received'  (duration: 55.93458ms)","trace[1856019889] 'applied index is now lower than readState.Index'  (duration: 162.391571ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.652969Z","caller":"traceutil/trace.go:171","msg":"trace[1279722024] transaction","detail":"{read_only:false; response_revision:1305; number_of_response:1; }","duration":"224.683287ms","start":"2024-09-16T10:25:21.428268Z","end":"2024-09-16T10:25:21.652951Z","steps":["trace[1279722024] 'process raft request'  (duration: 224.540452ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:21.653003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.415614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.653027Z","caller":"traceutil/trace.go:171","msg":"trace[1008371896] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1305; }","duration":"218.457307ms","start":"2024-09-16T10:25:21.434563Z","end":"2024-09-16T10:25:21.653020Z","steps":["trace[1008371896] 'agreement among raft nodes before linearized reading'  (duration: 218.392253ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:21.652921Z","caller":"traceutil/trace.go:171","msg":"trace[1132385399] transaction","detail":"{read_only:false; response_revision:1304; number_of_response:1; }","duration":"225.049342ms","start":"2024-09-16T10:25:21.427850Z","end":"2024-09-16T10:25:21.652899Z","steps":["trace[1132385399] 'process raft request'  (duration: 131.625555ms)","trace[1132385399] 'compare'  (duration: 93.227933ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.868227Z","caller":"traceutil/trace.go:171","msg":"trace[1246984751] linearizableReadLoop","detail":"{readStateIndex:1339; appliedIndex:1338; }","duration":"139.924393ms","start":"2024-09-16T10:25:21.728284Z","end":"2024-09-16T10:25:21.868208Z","steps":["trace[1246984751] 'read index received'  (duration: 63.202511ms)","trace[1246984751] 'applied index is now lower than readState.Index'  (duration: 76.72121ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.868259Z","caller":"traceutil/trace.go:171","msg":"trace[501466804] transaction","detail":"{read_only:false; response_revision:1306; number_of_response:1; }","duration":"210.400699ms","start":"2024-09-16T10:25:21.657832Z","end":"2024-09-16T10:25:21.868233Z","steps":["trace[501466804] 'process raft request'  (duration: 133.673421ms)","trace[501466804] 'compare'  (duration: 76.618072ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.868373Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.878283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.868410Z","caller":"traceutil/trace.go:171","msg":"trace[1169815467] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1306; }","duration":"121.931335ms","start":"2024-09-16T10:25:21.746471Z","end":"2024-09-16T10:25:21.868402Z","steps":["trace[1169815467] 'agreement among raft nodes before linearized reading'  (duration: 121.861476ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:21.868538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.236255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-16T10:25:21.868579Z","caller":"traceutil/trace.go:171","msg":"trace[344111638] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1306; }","duration":"140.292497ms","start":"2024-09-16T10:25:21.728276Z","end":"2024-09-16T10:25:21.868569Z","steps":["trace[344111638] 'agreement among raft nodes before linearized reading'  (duration: 140.016451ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:33:47.645977Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1761}
	{"level":"info","ts":"2024-09-16T10:33:47.672836Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1761,"took":"26.323299ms","hash":3150463749,"current-db-size-bytes":9527296,"current-db-size":"9.5 MB","current-db-size-in-use-bytes":5414912,"current-db-size-in-use":"5.4 MB"}
	{"level":"info","ts":"2024-09-16T10:33:47.672899Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3150463749,"revision":1761,"compact-revision":-1}
	
	
	==> gcp-auth [cfade64badb92dacf9d0c56d24c0fb7e95088f5abf7a814ef4801971e4b26216] <==
	2024/09/16 10:27:29 GCP Auth Webhook started!
	2024/09/16 10:32:45 Ready to marshal response ...
	2024/09/16 10:32:45 Ready to write response ...
	2024/09/16 10:32:45 Ready to marshal response ...
	2024/09/16 10:32:45 Ready to write response ...
	2024/09/16 10:32:45 Ready to marshal response ...
	2024/09/16 10:32:45 Ready to write response ...
	
	
	==> kernel <==
	 10:34:15 up 16 min,  0 users,  load average: 0.45, 0.53, 0.40
	Linux addons-191972 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [4c4482bfa98cf1024c4b123130c5a320a891204919b9a1459b6f3269e1e7d29d] <==
	I0916 10:32:09.447865       1 main.go:299] handling current node
	I0916 10:32:19.448134       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:19.448165       1 main.go:299] handling current node
	I0916 10:32:29.443818       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:29.443852       1 main.go:299] handling current node
	I0916 10:32:39.441647       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:39.441692       1 main.go:299] handling current node
	I0916 10:32:49.441742       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:49.441771       1 main.go:299] handling current node
	I0916 10:32:59.441556       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:32:59.441597       1 main.go:299] handling current node
	I0916 10:33:09.442476       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:33:09.442533       1 main.go:299] handling current node
	I0916 10:33:19.447820       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:33:19.447857       1 main.go:299] handling current node
	I0916 10:33:29.441045       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:33:29.441075       1 main.go:299] handling current node
	I0916 10:33:39.448670       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:33:39.448719       1 main.go:299] handling current node
	I0916 10:33:49.443878       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:33:49.443913       1 main.go:299] handling current node
	I0916 10:33:59.441504       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:33:59.441535       1 main.go:299] handling current node
	I0916 10:34:09.444782       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:34:09.444821       1 main.go:299] handling current node
	
	
	==> kube-apiserver [c76b948fbd083e0e5229c3ac96548e67224afd5a037343a2b118da9b9ae5ad3a] <==
	W0916 10:26:15.413935       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:16.459096       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:17.509475       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:18.532761       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:19.545400       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:20.553347       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:21.640741       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:22.735942       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:24.007851       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:25.084707       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:26.137166       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:27.215912       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:28.269709       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:29.285978       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:30.385745       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:31.389520       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:53.671732       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:26:53.671804       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	W0916 10:27:11.712823       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:27:11.712858       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	W0916 10:27:11.785537       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:27:11.785576       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	I0916 10:32:45.560480       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.245.36"}
	I0916 10:33:06.754025       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:33:07.773034       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
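	
	Note the two failure modes above: the volcano queue webhook fails closed (failurePolicy Fail), so queue writes were rejected for the whole window volcano-admission was unreachable, while the gcp-auth webhook fails open (failurePolicy Ignore) and requests proceeded un-mutated. A sketch that makes the split visible:
	
	    $ kubectl get mutatingwebhookconfigurations \
	        -o custom-columns=NAME:.metadata.name,POLICY:'.webhooks[*].failurePolicy'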
	
	
	==> kube-controller-manager [6e4dbd39a8ef56c5a753071ab0489111fcbcaac9f7cbe3b4fdf88030aa41c77b] <==
	I0916 10:32:45.732102       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="86.759µs"
	I0916 10:32:49.188017       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	I0916 10:32:49.362646       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="72.728µs"
	I0916 10:32:49.382396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="5.571798ms"
	I0916 10:32:49.382492       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="50.896µs"
	I0916 10:32:52.645155       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-191972"
	I0916 10:32:56.120354       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-57fb76fcdb" duration="11.207µs"
	I0916 10:33:06.234578       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	E0916 10:33:07.774278       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:33:08.763057       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:08.763095       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:33:10.493315       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:10.493378       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:33:15.787594       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:15.787632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:33:16.875487       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0916 10:33:26.294136       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0916 10:33:26.294178       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:26.604903       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0916 10:33:26.604950       1 shared_informer.go:320] Caches are synced for garbage collector
	W0916 10:33:28.016022       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:28.016059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:33:43.495209       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:43.495252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:34:13.882297       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="6.965µs"
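	
	The recurring PartialObjectMetadata failures line up with the apiserver dropping the gadget.kinvolk.io group at 10:33:06 (see the kube-apiserver log above): the garbage-collector and quota metadata informers keep retrying, with backoff, a CRD that no longer exists, until their discovery cache refreshes. A sketch to confirm the CRD behind the errors is gone:
	
	    $ kubectl get crd traces.gadget.kinvolk.io
	    # Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "traces.gadget.kinvolk.io" not found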
	
	
	==> kube-proxy [d9d335328779062c055353442bb9ca0c1e2fef63bc1c598650e6ea25604013a5] <==
	I0916 10:23:59.129562       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:59.824945       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:23:59.825067       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:24:00.037013       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:24:00.040602       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:24:00.135054       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:24:00.135450       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:24:00.135471       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:24:00.237323       1 config.go:199] "Starting service config controller"
	I0916 10:24:00.237372       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:24:00.237410       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:24:00.237416       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:24:00.237471       1 config.go:328] "Starting node config controller"
	I0916 10:24:00.237491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:24:00.337642       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:24:00.337724       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:24:00.337829       1 shared_informer.go:320] Caches are synced for node config
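	
	The startup warning is advisory: with nodePortAddresses unset, NodePort services accept connections on every local IP. If the narrower behaviour is wanted, the log's own suggestion can be applied through the kubeadm-managed kube-proxy ConfigMap (a sketch; the "primary" shorthand is assumed available per the warning text on this kube-proxy version):
	
	    $ kubectl -n kube-system edit configmap kube-proxy
	    # under the config.conf key, set:
	    #   nodePortAddresses: ["primary"]
	    $ kubectl -n kube-system rollout restart daemonset kube-proxy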
	
	
	==> kube-scheduler [0539bdd901d4af068b2160b27df45018e72113a7a75c6a082ae7e2f64f3f908b] <==
	W0916 10:23:49.138663       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0916 10:23:49.138662       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:49.138689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:49.138696       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0916 10:23:49.138760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0916 10:23:49.138769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:23:49.138774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:49.139877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:49.139916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.064082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:50.064133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.118512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:50.118558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.132045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:50.132096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.175403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:50.175438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.199805       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:50.199848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.241540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:50.241599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:23:50.633994       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
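	
	All of the forbidden list/watch errors fall in the first second after startup (10:23:49-10:23:50), before the apiserver had finished publishing the scheduler's RBAC bindings; the closing "Caches are synced" line shows the informers recovered, so this is normal bootstrap noise. A sketch to confirm the permissions once bootstrap completes:
	
	    $ kubectl auth can-i list pods --as=system:kube-scheduler
	    $ kubectl auth can-i watch csidrivers.storage.k8s.io --as=system:kube-scheduler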
	
	
	==> kubelet <==
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976358    1565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-run\") pod \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\" (UID: \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\") "
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976354    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-bpffs" (OuterVolumeSpecName: "bpffs") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976380    1565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-debugfs\") pod \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\" (UID: \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\") "
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976380    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-cgroup" (OuterVolumeSpecName: "cgroup") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976353    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-modules" (OuterVolumeSpecName: "modules") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976356    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-host" (OuterVolumeSpecName: "host") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976396    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-run" (OuterVolumeSpecName: "run") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976402    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-debugfs" (OuterVolumeSpecName: "debugfs") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976506    1565 reconciler_common.go:288] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-modules\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976522    1565 reconciler_common.go:288] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-bpffs\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976533    1565 reconciler_common.go:288] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-cgroup\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976546    1565 reconciler_common.go:288] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-run\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.978118    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b2176c-9dcb-4741-bd18-81ab2a2303f2-kube-api-access-5jwxh" (OuterVolumeSpecName: "kube-api-access-5jwxh") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "kube-api-access-5jwxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.076713    1565 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5jwxh\" (UniqueName: \"kubernetes.io/projected/62b2176c-9dcb-4741-bd18-81ab2a2303f2-kube-api-access-5jwxh\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.076783    1565 reconciler_common.go:288] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-debugfs\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.076797    1565 reconciler_common.go:288] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-host\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.398404    1565 scope.go:117] "RemoveContainer" containerID="85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09"
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.474491    1565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62b2176c-9dcb-4741-bd18-81ab2a2303f2" path="/var/lib/kubelet/pods/62b2176c-9dcb-4741-bd18-81ab2a2303f2/volumes"
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.233694    1565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fvwn\" (UniqueName: \"kubernetes.io/projected/dfe534c4-9e29-4907-b8cc-1dd12fc52f45-kube-api-access-4fvwn\") pod \"dfe534c4-9e29-4907-b8cc-1dd12fc52f45\" (UID: \"dfe534c4-9e29-4907-b8cc-1dd12fc52f45\") "
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.236128    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe534c4-9e29-4907-b8cc-1dd12fc52f45-kube-api-access-4fvwn" (OuterVolumeSpecName: "kube-api-access-4fvwn") pod "dfe534c4-9e29-4907-b8cc-1dd12fc52f45" (UID: "dfe534c4-9e29-4907-b8cc-1dd12fc52f45"). InnerVolumeSpecName "kube-api-access-4fvwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.334770    1565 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4fvwn\" (UniqueName: \"kubernetes.io/projected/dfe534c4-9e29-4907-b8cc-1dd12fc52f45-kube-api-access-4fvwn\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.553835    1565 scope.go:117] "RemoveContainer" containerID="89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f"
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.564810    1565 scope.go:117] "RemoveContainer" containerID="89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f"
	Sep 16 10:34:14 addons-191972 kubelet[1565]: E0916 10:34:14.565324    1565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\": not found" containerID="89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f"
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.565368    1565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f"} err="failed to get container status \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\": rpc error: code = NotFound desc = an error occurred when try to find container \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\": not found"
	
	
	==> storage-provisioner [62a4b8c25074dcef9656a9b6e749de86b5f7c97f45a25cd328153d14be1d5a78] <==
	I0916 10:24:03.139108       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:24:03.230289       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:24:03.230361       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:24:03.238016       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:24:03.238457       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff346362-6d54-491c-b142-6d85e8abf2d5", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-191972_e8089787-9f1d-4116-8123-a579d9482714 became leader
	I0916 10:24:03.238505       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-191972_e8089787-9f1d-4116-8123-a579d9482714!
	I0916 10:24:03.339118       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-191972_e8089787-9f1d-4116-8123-a579d9482714!
	

-- /stdout --
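The storage-provisioner log above shows a standard Kubernetes leader-election handshake: attempt to acquire a lease, become leader, emit a LeaderElection event, then start the controller. As a rough illustration of that flow, here is a minimal client-go sketch. It follows client-go's documented Lease-lock example, whereas the provisioner in this log uses an older Endpoints-based lock, and every name except the lease name is illustrative.

	package main

	// Sketch of the leader-election flow visible in the storage-provisioner
	// log above (attempt lease -> acquire -> start work). Uses the modern
	// Lease lock from client-go's canonical example; the provisioner here
	// used an Endpoints lock, but the handshake is the same.

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // the provisioner runs in-cluster
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			ReleaseOnCancel: true,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// "successfully acquired lease" in the log: start provisioning here.
				},
				OnStoppedLeading: func() {
					// Lost the lease: stop leader-only work.
				},
			},
		})
	}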
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-191972 -n addons-191972
helpers_test.go:261: (dbg) Run:  kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (432.426µs)
helpers_test.go:263: kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/HelmTiller (89.00s)
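Every kubectl invocation in this run dies in well under a second with "fork/exec /usr/local/bin/kubectl: exec format error": the kernel refuses to even start the binary, which almost always means /usr/local/bin/kubectl was built for a different CPU architecture than the host, so kubectl never reaches the cluster at all. A self-contained Go sketch (not part of the test suite) that checks for such a mismatch by reading the binary's ELF header:

	package main

	// Sketch only: compare a binary's ELF machine type against the host
	// architecture. An "exec format error" from fork/exec usually means
	// these do not match (e.g. an arm64 kubectl on an amd64 host).

	import (
		"debug/elf"
		"fmt"
		"os"
		"runtime"
	)

	func main() {
		const path = "/usr/local/bin/kubectl" // path taken from the failing log lines
		f, err := elf.Open(path)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s is not a readable ELF binary: %v\n", path, err)
			os.Exit(1)
		}
		defer f.Close()
		// f.Machine is e.g. EM_X86_64 or EM_AARCH64; runtime.GOARCH is the host.
		fmt.Printf("binary machine: %v, host GOARCH: %s\n", f.Machine, runtime.GOARCH)
	}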

TestAddons/parallel/CSI (361.86s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.477259ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-191972 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:570: (dbg) Non-zero exit: kubectl --context addons-191972 create -f testdata/csi-hostpath-driver/pvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (409.576µs)
addons_test.go:572: creating sample PVC with kubectl --context addons-191972 create -f testdata/csi-hostpath-driver/pvc.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
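The 6m0s wait below is driven by repeatedly shelling out to kubectl and reading the PVC's .status.phase (the helpers_test.go:394 lines that follow). A minimal sketch of that style of poll loop, with illustrative names and intervals rather than minikube's actual helper:

	package main

	// Sketch of a kubectl-based PVC wait loop in the spirit of the
	// helpers_test.go polling below; names and intervals are illustrative.

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// pvcPhase shells out to kubectl exactly as the log lines below do.
	func pvcPhase(kubeContext, ns, name string) (string, error) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-n", ns, "-o", "jsonpath={.status.phase}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			phase, err := pvcPhase("addons-191972", "default", "hpvc")
			if err == nil && phase == "Bound" {
				fmt.Println("pvc hpvc is Bound")
				return
			}
			// On failure (here: every call fails with "exec format error"
			// before reaching the cluster) the helper logs a WARNING and retries.
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out after 6m0s waiting for pvc hpvc")
	}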
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (275.205µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
[... the helpers_test.go:394 Run / Non-zero exit pair and helpers_test.go:396 WARNING above repeat for the remainder of the 6m0s wait, identical apart from the reported timing; every attempt fails with "fork/exec /usr/local/bin/kubectl: exec format error" ...]
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (411.073µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (478.903µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (498.718µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (439.519µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (424.295µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (451.435µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (451.491µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (450.292µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (568.211µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (441.882µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (502.406µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (433.9µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (495.761µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (425.898µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (500.615µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (466.741µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (533.078µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (424.334µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (529.114µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (474.547µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (435.41µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (445.416µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (502.913µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (409.781µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (450.072µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (404.893µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (444.433µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (455.127µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (424.752µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (433.791µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (452.272µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (471.099µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (409.56µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (436.946µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (423.128µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (421.801µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (415.455µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (436.878µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (425.147µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (452.347µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (435.766µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (421.522µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (449.177µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (412.222µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (440.751µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (473.338µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (487.668µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (433.908µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (428.356µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (456.752µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (508.256µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (423.84µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (414.757µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (513.816µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (470.533µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (425.369µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (449.126µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (463.356µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (470.654µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (447.45µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (486.252µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (444.367µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (458.485µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (537.034µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (455.641µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (524.916µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (474.086µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (483.217µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (441.338µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (507.755µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (397.922µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (488.997µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (509.788µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (515.528µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (436.304µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (454.099µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (450.698µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (460.027µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (486.538µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (481.58µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (19.135159ms)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (481.377µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (501.387µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (469.979µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (482.653µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (470.141µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (501.151µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (462.921µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (540.858µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (490.694µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (448.692µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (507.476µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (459.173µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (574.38µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (529.59µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (452.587µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (531.989µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (497.708µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (446.866µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (455.763µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (463.907µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (552.1µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (531.857µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (615.527µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (424.986µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (432.453µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (470.796µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (484.458µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (473.946µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (466.042µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: fork/exec /usr/local/bin/kubectl: exec format error (478.637µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: fork/exec /usr/local/bin/kubectl: exec format error
[... roughly 130 further identical poll iterations elided: every retry of the kubectl PVC get failed immediately (in ~0.5ms) with the same "fork/exec /usr/local/bin/kubectl: exec format error" until the wait deadline expired ...]
helpers_test.go:394: (dbg) Run:  kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-191972 get pvc hpvc -o jsonpath={.status.phase} -n default: context deadline exceeded (1.073µs)
helpers_test.go:396: TestAddons/parallel/CSI: WARNING: PVC get for "default" "hpvc" returned: context deadline exceeded
addons_test.go:576: failed waiting for PVC hpvc: context deadline exceeded
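The "fork/exec /usr/local/bin/kubectl: exec format error" returned by every poll above is the kernel refusing to execute the kubectl binary itself, which almost always means the binary was built for a different CPU architecture than the host (or is truncated/corrupt). A minimal, stand-alone Go sketch of a diagnostic for this failure mode follows; it is not part of the minikube test suite, the path is copied from the log lines above, and it assumes the file is an ELF binary:

package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
)

func main() {
	// Path copied from the failing log lines above.
	path := "/usr/local/bin/kubectl"

	f, err := elf.Open(path)
	if err != nil {
		// A non-ELF, truncated, or corrupt file would also produce
		// "exec format error" at exec time.
		fmt.Fprintf(os.Stderr, "cannot parse %s as ELF: %v\n", path, err)
		os.Exit(1)
	}
	defer f.Close()

	// On a healthy amd64 host this should print EM_X86_64 / amd64;
	// any mismatch reproduces the kernel's "exec format error".
	fmt.Printf("binary machine: %s\n", f.Machine)
	fmt.Printf("host GOARCH:    %s\n", runtime.GOARCH)
}

Comparing f.Machine against the host architecture (for example EM_X86_64 on amd64, or EM_AARCH64 on arm64) would confirm a wrong-architecture kubectl download, the usual cause of this error on CI hosts.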
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-191972
helpers_test.go:235: (dbg) docker inspect addons-191972:
-- stdout --
	[
	    {
	        "Id": "49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd",
	        "Created": "2024-09-16T10:23:37.048894749Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 13416,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:23:37.183215602Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/hostname",
	        "HostsPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/hosts",
	        "LogPath": "/var/lib/docker/containers/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd/49285aed0ac6b4add7b1c7856bcff882cf5b64bc1fd5779afefda3979360aedd-json.log",
	        "Name": "/addons-191972",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-191972:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-191972",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2254b5220082ec2e338390341321e26cfa70d77e8e8e98f86dc832205812162a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-191972",
	                "Source": "/var/lib/docker/volumes/addons-191972/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-191972",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-191972",
	                "name.minikube.sigs.k8s.io": "addons-191972",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b247e3d2e57f223fa64fb9fece255c3b6a0f61eb064ba71e6e8c51f7e6b8590a",
	            "SandboxKey": "/var/run/docker/netns/b247e3d2e57f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-191972": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "aac8db9a46c7b7c219b85113240d1d4a2ee20d1c156fb7315fdf6aa5e797f6a8",
	                    "EndpointID": "ab683490c93590fb0411cd607b8ad8f3100f7ae01f11dd3e855f6321d940faae",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-191972",
	                        "49285aed0ac6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
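The harness reads individual fields out of this inspect document rather than parsing the full JSON; for example, the host port mapped to the container's 22/tcp (32768 in NetworkSettings.Ports above) is fetched with the same Go template that appears later in this log. A sketch of that lookup, using the container name from this report:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-191972
	# prints 32768, matching NetworkSettings.Ports in the JSON above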
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-191972 -n addons-191972
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-191972 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-191972 logs -n 25: (1.161769398s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-297488              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p download-only-297488              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-024449              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-024449              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-297488              | download-only-297488   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| delete  | -p download-only-024449              | download-only-024449   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | download-docker-065822 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | download-docker-065822               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-065822            | download-docker-065822 | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| start   | --download-only -p                   | binary-mirror-727123   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | binary-mirror-727123                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34779               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-727123              | binary-mirror-727123   | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:23 UTC |
	| addons  | enable dashboard -p                  | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC |                     |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| start   | -p addons-191972 --wait=true         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:23 UTC | 16 Sep 24 10:27 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=helm-tiller                 |                        |         |         |                     |                     |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | -p addons-191972                     |                        |         |         |                     |                     |
	| ip      | addons-191972 ip                     | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p             | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:32 UTC |
	|         | -p addons-191972                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:32 UTC | 16 Sep 24 10:33 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:33 UTC | 16 Sep 24 10:33 UTC |
	|         | addons-191972                        |                        |         |         |                     |                     |
	| addons  | addons-191972 addons disable         | addons-191972          | jenkins | v1.34.0 | 16 Sep 24 10:34 UTC | 16 Sep 24 10:34 UTC |
	|         | helm-tiller --alsologtostderr        |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:23:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:23:15.015457   12653 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:23:15.015610   12653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:15.015623   12653 out.go:358] Setting ErrFile to fd 2...
	I0916 10:23:15.015629   12653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:23:15.015835   12653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:23:15.016423   12653 out.go:352] Setting JSON to false
	I0916 10:23:15.017221   12653 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":339,"bootTime":1726481856,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:23:15.017316   12653 start.go:139] virtualization: kvm guest
	I0916 10:23:15.019468   12653 out.go:177] * [addons-191972] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:23:15.020856   12653 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:23:15.020860   12653 notify.go:220] Checking for updates...
	I0916 10:23:15.023158   12653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:23:15.024282   12653 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:23:15.025336   12653 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:23:15.026362   12653 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:23:15.027468   12653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:23:15.028714   12653 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:23:15.049632   12653 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:23:15.049710   12653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:15.095467   12653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:15.085826834 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:15.095614   12653 docker.go:318] overlay module found
	I0916 10:23:15.097552   12653 out.go:177] * Using the docker driver based on user configuration
	I0916 10:23:15.098917   12653 start.go:297] selected driver: docker
	I0916 10:23:15.098932   12653 start.go:901] validating driver "docker" against <nil>
	I0916 10:23:15.098957   12653 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:23:15.099817   12653 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:23:15.144749   12653 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:23:15.136589077 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:23:15.144922   12653 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:23:15.145171   12653 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:23:15.147081   12653 out.go:177] * Using Docker driver with root privileges
	I0916 10:23:15.148504   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:15.148563   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:15.148575   12653 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:23:15.148632   12653 start.go:340] cluster config:
	{Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:15.149981   12653 out.go:177] * Starting "addons-191972" primary control-plane node in "addons-191972" cluster
	I0916 10:23:15.151239   12653 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:23:15.152375   12653 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:23:15.153439   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:15.153479   12653 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:23:15.153492   12653 cache.go:56] Caching tarball of preloaded images
	I0916 10:23:15.153495   12653 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:23:15.153601   12653 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:23:15.153613   12653 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:23:15.153950   12653 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json ...
	I0916 10:23:15.153974   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json: {Name:mk77e04db13eac753d69895eba14a3f7223b28d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:15.169560   12653 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:23:15.169666   12653 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:23:15.169681   12653 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:23:15.169685   12653 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:23:15.169694   12653 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:23:15.169701   12653 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:23:27.861517   12653 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:23:27.861553   12653 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:23:27.861589   12653 start.go:360] acquireMachinesLock for addons-191972: {Name:mk1204ee6335c794af5ff39cd93a214e3c1d654b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:23:27.861691   12653 start.go:364] duration metric: took 80.959µs to acquireMachinesLock for "addons-191972"
	I0916 10:23:27.861720   12653 start.go:93] Provisioning new machine with config: &{Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:23:27.861797   12653 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:23:27.864363   12653 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0916 10:23:27.864609   12653 start.go:159] libmachine.API.Create for "addons-191972" (driver="docker")
	I0916 10:23:27.864644   12653 client.go:168] LocalClient.Create starting
	I0916 10:23:27.864787   12653 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:23:28.100386   12653 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:23:28.472961   12653 cli_runner.go:164] Run: docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:23:28.488573   12653 cli_runner.go:211] docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:23:28.488653   12653 network_create.go:284] running [docker network inspect addons-191972] to gather additional debugging logs...
	I0916 10:23:28.488675   12653 cli_runner.go:164] Run: docker network inspect addons-191972
	W0916 10:23:28.503724   12653 cli_runner.go:211] docker network inspect addons-191972 returned with exit code 1
	I0916 10:23:28.503773   12653 network_create.go:287] error running [docker network inspect addons-191972]: docker network inspect addons-191972: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-191972 not found
	I0916 10:23:28.503790   12653 network_create.go:289] output of [docker network inspect addons-191972]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-191972 not found
	
	** /stderr **
	I0916 10:23:28.503874   12653 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:28.520445   12653 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ac6790}
	I0916 10:23:28.520486   12653 network_create.go:124] attempt to create docker network addons-191972 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 10:23:28.520531   12653 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-191972 addons-191972
	I0916 10:23:28.578324   12653 network_create.go:108] docker network addons-191972 192.168.49.0/24 created
	I0916 10:23:28.578353   12653 kic.go:121] calculated static IP "192.168.49.2" for the "addons-191972" container
	I0916 10:23:28.578405   12653 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:23:28.593459   12653 cli_runner.go:164] Run: docker volume create addons-191972 --label name.minikube.sigs.k8s.io=addons-191972 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:23:28.611104   12653 oci.go:103] Successfully created a docker volume addons-191972
	I0916 10:23:28.611189   12653 cli_runner.go:164] Run: docker run --rm --name addons-191972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --entrypoint /usr/bin/test -v addons-191972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:23:32.566442   12653 cli_runner.go:217] Completed: docker run --rm --name addons-191972-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --entrypoint /usr/bin/test -v addons-191972:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (3.955205965s)
	I0916 10:23:32.566475   12653 oci.go:107] Successfully prepared a docker volume addons-191972
	I0916 10:23:32.566499   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:32.566524   12653 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:23:32.566588   12653 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-191972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:23:36.989473   12653 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v addons-191972:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.422844639s)
	I0916 10:23:36.989499   12653 kic.go:203] duration metric: took 4.422974303s to extract preloaded images to volume ...
	W0916 10:23:36.989616   12653 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:23:36.989704   12653 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:23:37.034645   12653 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-191972 --name addons-191972 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-191972 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-191972 --network addons-191972 --ip 192.168.49.2 --volume addons-191972:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:23:37.351088   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Running}}
	I0916 10:23:37.369798   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.389505   12653 cli_runner.go:164] Run: docker exec addons-191972 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:23:37.432507   12653 oci.go:144] the created container "addons-191972" has a running status.
	I0916 10:23:37.432542   12653 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa...
	I0916 10:23:37.512853   12653 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:23:37.532177   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.549342   12653 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:23:37.549361   12653 kic_runner.go:114] Args: [docker exec --privileged addons-191972 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:23:37.594990   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:37.611429   12653 machine.go:93] provisionDockerMachine start ...
	I0916 10:23:37.611513   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:37.628951   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:37.629230   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:37.629249   12653 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:23:37.630101   12653 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54456->127.0.0.1:32768: read: connection reset by peer
	I0916 10:23:40.759062   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-191972
	
	I0916 10:23:40.759087   12653 ubuntu.go:169] provisioning hostname "addons-191972"
	I0916 10:23:40.759139   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:40.776123   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:40.776294   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:40.776306   12653 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-191972 && echo "addons-191972" | sudo tee /etc/hostname
	I0916 10:23:40.917999   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-191972
	
	I0916 10:23:40.918073   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:40.934369   12653 main.go:141] libmachine: Using SSH client type: native
	I0916 10:23:40.934536   12653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0916 10:23:40.934552   12653 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-191972' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-191972/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-191972' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:23:41.063670   12653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:23:41.063696   12653 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:23:41.063755   12653 ubuntu.go:177] setting up certificates
	I0916 10:23:41.063769   12653 provision.go:84] configureAuth start
	I0916 10:23:41.063821   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.080185   12653 provision.go:143] copyHostCerts
	I0916 10:23:41.080289   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:23:41.080452   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:23:41.080539   12653 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:23:41.080607   12653 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.addons-191972 san=[127.0.0.1 192.168.49.2 addons-191972 localhost minikube]
	I0916 10:23:41.189624   12653 provision.go:177] copyRemoteCerts
	I0916 10:23:41.189685   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:23:41.189718   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.206072   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.299940   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:23:41.321259   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:23:41.342100   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:23:41.362764   12653 provision.go:87] duration metric: took 298.977855ms to configureAuth
	I0916 10:23:41.362793   12653 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:23:41.362955   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:41.362966   12653 machine.go:96] duration metric: took 3.751519266s to provisionDockerMachine
	I0916 10:23:41.362991   12653 client.go:171] duration metric: took 13.498318264s to LocalClient.Create
	I0916 10:23:41.363014   12653 start.go:167] duration metric: took 13.498406844s to libmachine.API.Create "addons-191972"
	I0916 10:23:41.363024   12653 start.go:293] postStartSetup for "addons-191972" (driver="docker")
	I0916 10:23:41.363035   12653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:23:41.363112   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:23:41.363159   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.379631   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.472315   12653 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:23:41.475416   12653 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:23:41.475455   12653 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:23:41.475469   12653 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:23:41.475477   12653 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:23:41.475490   12653 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:23:41.475562   12653 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:23:41.475593   12653 start.go:296] duration metric: took 112.560003ms for postStartSetup
	I0916 10:23:41.475953   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.491831   12653 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/config.json ...
	I0916 10:23:41.492098   12653 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:23:41.492159   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.508709   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.604422   12653 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:23:41.608355   12653 start.go:128] duration metric: took 13.746544864s to createHost
	I0916 10:23:41.608378   12653 start.go:83] releasing machines lock for "addons-191972", held for 13.74667303s
	I0916 10:23:41.608449   12653 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-191972
	I0916 10:23:41.624552   12653 ssh_runner.go:195] Run: cat /version.json
	I0916 10:23:41.624594   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.624666   12653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:23:41.624742   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:41.640830   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.641558   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:41.811513   12653 ssh_runner.go:195] Run: systemctl --version
	I0916 10:23:41.816090   12653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:23:41.820031   12653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:23:41.841966   12653 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:23:41.842040   12653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:23:41.867614   12653 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 10:23:41.867637   12653 start.go:495] detecting cgroup driver to use...
	I0916 10:23:41.867665   12653 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:23:41.867707   12653 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:23:41.878761   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:23:41.889209   12653 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:23:41.889272   12653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:23:41.901658   12653 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:23:41.914376   12653 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:23:41.989625   12653 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:23:42.064036   12653 docker.go:233] disabling docker service ...
	I0916 10:23:42.064087   12653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:23:42.082378   12653 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:23:42.092694   12653 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:23:42.163431   12653 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:23:42.235566   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:23:42.245920   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:23:42.260071   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:23:42.268844   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:23:42.277914   12653 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:23:42.277973   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:23:42.287090   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:42.295426   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:23:42.303716   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:23:42.312468   12653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:23:42.320449   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:23:42.328970   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:23:42.337386   12653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:23:42.345791   12653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:23:42.352855   12653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:23:42.359971   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:42.438798   12653 ssh_runner.go:195] Run: sudo systemctl restart containerd
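
The sed runs above rewrite /etc/containerd/config.toml in place (sandbox image, runtime handlers, and the cgroup driver) before the daemon-reload/restart. As one example, the SystemdCgroup substitution can be expressed in Go like this; a sketch of that single edit only, equivalent to the logged sed, not minikube's containerd.go:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCgroupfs forces containerd's runc runtime onto the "cgroupfs" driver.
// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
func setCgroupfs(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	return os.WriteFile(path, patched, 0o644)
}

func main() {
	if err := setCgroupfs("/etc/containerd/config.toml"); err != nil {
		fmt.Println(err)
	}
}
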
	I0916 10:23:42.548862   12653 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:23:42.548940   12653 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:23:42.552403   12653 start.go:563] Will wait 60s for crictl version
	I0916 10:23:42.552460   12653 ssh_runner.go:195] Run: which crictl
	I0916 10:23:42.555471   12653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:23:42.586679   12653 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:23:42.586752   12653 ssh_runner.go:195] Run: containerd --version
	I0916 10:23:42.608454   12653 ssh_runner.go:195] Run: containerd --version
	I0916 10:23:42.632432   12653 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:23:42.633762   12653 cli_runner.go:164] Run: docker network inspect addons-191972 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:23:42.650400   12653 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:23:42.653892   12653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:23:42.664053   12653 kubeadm.go:883] updating cluster {Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:23:42.664154   12653 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:23:42.664195   12653 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:42.695688   12653 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:23:42.695710   12653 containerd.go:534] Images already preloaded, skipping extraction
	I0916 10:23:42.695778   12653 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:23:42.727148   12653 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:23:42.727166   12653 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:23:42.727174   12653 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0916 10:23:42.727255   12653 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-191972 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:23:42.727302   12653 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:23:42.757474   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:42.757493   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:42.757502   12653 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:23:42.757520   12653 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-191972 NodeName:addons-191972 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:23:42.757633   12653 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-191972"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:23:42.757684   12653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:23:42.765604   12653 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:23:42.765672   12653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:23:42.773363   12653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:23:42.789280   12653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:23:42.805100   12653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
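
The 2167-byte kubeadm.yaml just copied above is the config printed earlier, rendered from the per-node options at kubeadm.go:181. A simplified, assumed illustration of that rendering for the InitConfiguration stanza only, using text/template (minikube's actual template is larger):

package main

import (
	"os"
	"text/template"
)

// initCfg reproduces the InitConfiguration stanza from the log, with the
// per-node values pulled out as template parameters.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	_ = t.Execute(os.Stdout, struct {
		NodeName, NodeIP string
		APIServerPort    int
	}{"addons-191972", "192.168.49.2", 8443})
}
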
	I0916 10:23:42.820420   12653 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:23:42.823264   12653 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
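
Both hosts edits in this log (host.minikube.internal at 10:23:42.653892 and control-plane.minikube.internal here) use the same idempotent pattern: strip any stale line for the name, then append the fresh mapping. A minimal Go sketch of that pattern (illustration only; writing /etc/hosts requires root):

package main

import (
	"os"
	"strings"
)

// pinHost drops any existing line ending in "\t<name>" (the same match as
// the logged `grep -v $'\t<name>$'`) and appends "<ip>\t<name>".
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = pinHost("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal")
}
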
	I0916 10:23:42.832700   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:42.907069   12653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:42.919246   12653 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972 for IP: 192.168.49.2
	I0916 10:23:42.919266   12653 certs.go:194] generating shared ca certs ...
	I0916 10:23:42.919279   12653 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:42.919399   12653 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:23:43.054784   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt ...
	I0916 10:23:43.054815   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt: {Name:mkf05eaa3032985e939bd1a93aa36a6d50242974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.055008   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key ...
	I0916 10:23:43.055031   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key: {Name:mk4cf19316dad04ab708c5c17e172ec92fc35230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.055134   12653 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:23:43.268289   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt ...
	I0916 10:23:43.268318   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt: {Name:mk68da284b9ad8d396a1f11e7cfb94cc6f208c5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.268510   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key ...
	I0916 10:23:43.268532   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key: {Name:mkdf8c5da2a6d70c9ece2277843ebe69f9105c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.268626   12653 certs.go:256] generating profile certs ...
	I0916 10:23:43.268694   12653 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key
	I0916 10:23:43.268720   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt with IP's: []
	I0916 10:23:43.341520   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt ...
	I0916 10:23:43.341551   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: {Name:mke3c2895145f9c692cb1e6451d9766499ccc877 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.341738   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key ...
	I0916 10:23:43.341755   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.key: {Name:mkd6237ae8ebf429452ae0c60cea457b1f9cff1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.341855   12653 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369
	I0916 10:23:43.341882   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0916 10:23:43.403750   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 ...
	I0916 10:23:43.403775   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369: {Name:mk72db26b8519849abdf811ed93be5caeac2267d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.403951   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369 ...
	I0916 10:23:43.403973   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369: {Name:mk4b11dab0a085e395344dc35616a0c16f298191 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.404065   12653 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt.ac265369 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt
	I0916 10:23:43.404155   12653 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key.ac265369 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key
	I0916 10:23:43.404230   12653 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key
	I0916 10:23:43.404250   12653 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt with IP's: []
	I0916 10:23:43.488130   12653 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt ...
	I0916 10:23:43.488160   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt: {Name:mk11d8f9c437e5586897185f4551df7594041471 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:43.488342   12653 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key ...
	I0916 10:23:43.488360   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key: {Name:mk18734ee357c50ce0ff509ffb1c7e42743fa1b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
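
The crypto.go lines above generate a CA and then profile certs signed by it, with the apiserver cert carrying the IP SANs listed at 10:23:43.341882. A self-contained Go sketch of that two-step flow (same effect, not minikube's crypto.go; it uses ECDSA for brevity where minikube uses RSA, and error handling is elided):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Step 1: self-signed CA, analogous to the "minikubeCA" cert above.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Step 2: a server cert signed by that CA, with the IP SANs from the log.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: caDER})
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
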
	I0916 10:23:43.488577   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:23:43.488617   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:23:43.488652   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:23:43.488682   12653 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:23:43.489279   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:23:43.511557   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:23:43.532934   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:23:43.553377   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:23:43.575078   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 10:23:43.595868   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:23:43.616905   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:23:43.637839   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:23:43.658915   12653 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:23:43.680485   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:23:43.696295   12653 ssh_runner.go:195] Run: openssl version
	I0916 10:23:43.701282   12653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:23:43.709681   12653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.712715   12653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.712762   12653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:23:43.718832   12653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:23:43.727190   12653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:23:43.730247   12653 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:23:43.730290   12653 kubeadm.go:392] StartCluster: {Name:addons-191972 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-191972 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:23:43.730356   12653 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:23:43.730405   12653 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:23:43.761830   12653 cri.go:89] found id: ""
	I0916 10:23:43.761893   12653 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:23:43.770086   12653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:23:43.778465   12653 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:23:43.778522   12653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:23:43.786355   12653 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:23:43.786373   12653 kubeadm.go:157] found existing configuration files:
	
	I0916 10:23:43.786419   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:23:43.794471   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:23:43.794519   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:23:43.802487   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:23:43.810401   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:23:43.810451   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:23:43.817541   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:23:43.824799   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:23:43.824842   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:23:43.832032   12653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:23:43.839239   12653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:23:43.839298   12653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:23:43.847649   12653 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:23:43.880192   12653 kubeadm.go:310] W0916 10:23:43.879583    1109 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:43.880773   12653 kubeadm.go:310] W0916 10:23:43.880291    1109 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:23:43.896580   12653 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:23:43.944226   12653 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:23:52.227261   12653 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:23:52.227338   12653 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:23:52.227418   12653 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:23:52.227466   12653 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:23:52.227501   12653 kubeadm.go:310] OS: Linux
	I0916 10:23:52.227541   12653 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:23:52.227584   12653 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:23:52.227625   12653 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:23:52.227670   12653 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:23:52.227711   12653 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:23:52.227786   12653 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:23:52.227872   12653 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:23:52.227947   12653 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:23:52.227994   12653 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:23:52.228098   12653 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:23:52.228218   12653 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:23:52.228360   12653 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:23:52.228491   12653 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:23:52.230143   12653 out.go:235]   - Generating certificates and keys ...
	I0916 10:23:52.230239   12653 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:23:52.230328   12653 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:23:52.230422   12653 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:23:52.230504   12653 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:23:52.230596   12653 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:23:52.230685   12653 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:23:52.230768   12653 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:23:52.230910   12653 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-191972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:52.230984   12653 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:23:52.231130   12653 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-191972 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:23:52.231228   12653 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:23:52.231331   12653 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:23:52.231395   12653 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:23:52.231471   12653 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:23:52.231543   12653 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:23:52.231622   12653 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:23:52.231683   12653 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:23:52.231759   12653 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:23:52.231871   12653 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:23:52.231979   12653 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:23:52.232069   12653 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:23:52.233407   12653 out.go:235]   - Booting up control plane ...
	I0916 10:23:52.233500   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:23:52.233589   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:23:52.233654   12653 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:23:52.233747   12653 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:23:52.233846   12653 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:23:52.233895   12653 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:23:52.234011   12653 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:23:52.234102   12653 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:23:52.234155   12653 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.63037ms
	I0916 10:23:52.234224   12653 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:23:52.234282   12653 kubeadm.go:310] [api-check] The API server is healthy after 4.501222011s
	I0916 10:23:52.234402   12653 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:23:52.234544   12653 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:23:52.234625   12653 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:23:52.234780   12653 kubeadm.go:310] [mark-control-plane] Marking the node addons-191972 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:23:52.234830   12653 kubeadm.go:310] [bootstrap-token] Using token: fe3fo6.40ynbll2pbwpp3it
	I0916 10:23:52.236918   12653 out.go:235]   - Configuring RBAC rules ...
	I0916 10:23:52.237043   12653 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:23:52.237118   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:23:52.237261   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:23:52.237418   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:23:52.237547   12653 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:23:52.237659   12653 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:23:52.237791   12653 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:23:52.237856   12653 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:23:52.237898   12653 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:23:52.237904   12653 kubeadm.go:310] 
	I0916 10:23:52.237963   12653 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:23:52.237971   12653 kubeadm.go:310] 
	I0916 10:23:52.238040   12653 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:23:52.238046   12653 kubeadm.go:310] 
	I0916 10:23:52.238070   12653 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:23:52.238123   12653 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:23:52.238167   12653 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:23:52.238173   12653 kubeadm.go:310] 
	I0916 10:23:52.238218   12653 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:23:52.238223   12653 kubeadm.go:310] 
	I0916 10:23:52.238268   12653 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:23:52.238274   12653 kubeadm.go:310] 
	I0916 10:23:52.238329   12653 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:23:52.238418   12653 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:23:52.238507   12653 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:23:52.238515   12653 kubeadm.go:310] 
	I0916 10:23:52.238598   12653 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:23:52.238681   12653 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:23:52.238690   12653 kubeadm.go:310] 
	I0916 10:23:52.238801   12653 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fe3fo6.40ynbll2pbwpp3it \
	I0916 10:23:52.238908   12653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 10:23:52.238933   12653 kubeadm.go:310] 	--control-plane 
	I0916 10:23:52.238939   12653 kubeadm.go:310] 
	I0916 10:23:52.239012   12653 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:23:52.239020   12653 kubeadm.go:310] 
	I0916 10:23:52.239095   12653 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fe3fo6.40ynbll2pbwpp3it \
	I0916 10:23:52.239199   12653 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 10:23:52.239210   12653 cni.go:84] Creating CNI manager for ""
	I0916 10:23:52.239215   12653 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:23:52.240733   12653 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:23:52.241980   12653 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:23:52.245609   12653 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:23:52.245625   12653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:23:52.261912   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:23:52.447057   12653 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:23:52.447144   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:52.447165   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-191972 minikube.k8s.io/updated_at=2024_09_16T10_23_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=addons-191972 minikube.k8s.io/primary=true
	I0916 10:23:52.543497   12653 ops.go:34] apiserver oom_adj: -16
	I0916 10:23:52.543643   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:53.044491   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:53.543770   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:54.044061   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:54.544691   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:55.044249   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:55.543918   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:56.043679   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:56.543717   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:57.044619   12653 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:23:57.107839   12653 kubeadm.go:1113] duration metric: took 4.660750668s to wait for elevateKubeSystemPrivileges
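
The repeated "kubectl get sa default" lines above are a ~500ms retry loop: elevateKubeSystemPrivileges cannot bind the cluster-admin role until the default service account exists, so it polls for it. A minimal Go sketch of that loop (illustration only; the binary and kubeconfig paths are the ones in this log, and sudo mirrors the logged invocation):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries "kubectl get sa default" until it succeeds or
// the timeout elapses, matching the cadence seen in the log above.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}
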
	I0916 10:23:57.107871   12653 kubeadm.go:394] duration metric: took 13.37758355s to StartCluster
	I0916 10:23:57.107890   12653 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:57.107998   12653 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:23:57.108383   12653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:23:57.108581   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:23:57.108610   12653 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:23:57.108666   12653 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0916 10:23:57.108789   12653 addons.go:69] Setting yakd=true in profile "addons-191972"
	I0916 10:23:57.108813   12653 addons.go:234] Setting addon yakd=true in "addons-191972"
	I0916 10:23:57.108830   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:57.108844   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.108885   12653 addons.go:69] Setting inspektor-gadget=true in profile "addons-191972"
	I0916 10:23:57.108900   12653 addons.go:234] Setting addon inspektor-gadget=true in "addons-191972"
	I0916 10:23:57.108928   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109000   12653 addons.go:69] Setting gcp-auth=true in profile "addons-191972"
	I0916 10:23:57.109025   12653 mustload.go:65] Loading cluster: addons-191972
	I0916 10:23:57.109143   12653 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-191972"
	I0916 10:23:57.109187   12653 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-191972"
	I0916 10:23:57.109185   12653 addons.go:69] Setting default-storageclass=true in profile "addons-191972"
	I0916 10:23:57.109211   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109225   12653 config.go:182] Loaded profile config "addons-191972": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:23:57.109232   12653 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-191972"
	I0916 10:23:57.109216   12653 addons.go:69] Setting cloud-spanner=true in profile "addons-191972"
	I0916 10:23:57.109259   12653 addons.go:69] Setting storage-provisioner=true in profile "addons-191972"
	I0916 10:23:57.109265   12653 addons.go:234] Setting addon cloud-spanner=true in "addons-191972"
	I0916 10:23:57.109274   12653 addons.go:234] Setting addon storage-provisioner=true in "addons-191972"
	I0916 10:23:57.109308   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109323   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.109407   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109485   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109507   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109547   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109684   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109757   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.109825   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110167   12653 addons.go:69] Setting ingress-dns=true in profile "addons-191972"
	I0916 10:23:57.110372   12653 addons.go:234] Setting addon ingress-dns=true in "addons-191972"
	I0916 10:23:57.110546   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111202   12653 addons.go:69] Setting helm-tiller=true in profile "addons-191972"
	I0916 10:23:57.111255   12653 addons.go:234] Setting addon helm-tiller=true in "addons-191972"
	I0916 10:23:57.111282   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111445   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.111484   12653 addons.go:69] Setting ingress=true in profile "addons-191972"
	I0916 10:23:57.111498   12653 addons.go:234] Setting addon ingress=true in "addons-191972"
	I0916 10:23:57.111527   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.111731   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110913   12653 addons.go:69] Setting metrics-server=true in profile "addons-191972"
	I0916 10:23:57.111983   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.111987   12653 addons.go:234] Setting addon metrics-server=true in "addons-191972"
	I0916 10:23:57.112171   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110926   12653 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-191972"
	I0916 10:23:57.113223   12653 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-191972"
	I0916 10:23:57.113258   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.113700   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.115817   12653 out.go:177] * Verifying Kubernetes components...
	I0916 10:23:57.116675   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.110938   12653 addons.go:69] Setting registry=true in profile "addons-191972"
	I0916 10:23:57.116963   12653 addons.go:234] Setting addon registry=true in "addons-191972"
	I0916 10:23:57.117093   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110938   12653 addons.go:69] Setting volcano=true in profile "addons-191972"
	I0916 10:23:57.117245   12653 addons.go:234] Setting addon volcano=true in "addons-191972"
	I0916 10:23:57.117313   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.110949   12653 addons.go:69] Setting volumesnapshots=true in profile "addons-191972"
	I0916 10:23:57.117350   12653 addons.go:234] Setting addon volumesnapshots=true in "addons-191972"
	I0916 10:23:57.117397   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.117799   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.117919   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.118954   12653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:23:57.110924   12653 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-191972"
	I0916 10:23:57.120855   12653 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-191972"
	I0916 10:23:57.121186   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.148826   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.156121   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:57.158094   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0916 10:23:57.160078   12653 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0916 10:23:57.160230   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:23:57.163394   12653 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:23:57.163405   12653 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0916 10:23:57.163428   12653 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0916 10:23:57.163491   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.163933   12653 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:57.163952   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0916 10:23:57.163999   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
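The repeated cli_runner invocations of `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` above are minikube resolving which host port Docker published for the guest's SSH port 22, so the sshutil clients further down can dial 127.0.0.1:32768; the "scp memory -->" lines alongside them stream addon manifests embedded in the minikube binary to the node over that SSH session. A minimal stand-alone sketch of the same port lookup, shelling out to docker as the log does (hostSSHPort is a hypothetical helper, not minikube's API):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostSSHPort asks Docker which host port is published for the
    // container's 22/tcp, using the same Go template the log shows
    // (minus the extra single quotes minikube adds and later strips).
    func hostSSHPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostSSHPort("addons-191972")
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh port:", port) // this run reports 32768
    }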
	I0916 10:23:57.166339   12653 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0916 10:23:57.166352   12653 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0916 10:23:57.166505   12653 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:57.166525   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:23:57.166591   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176509   12653 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:57.176539   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0916 10:23:57.176597   12653 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:57.176613   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0916 10:23:57.176614   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176667   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.176871   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.184510   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0916 10:23:57.184923   12653 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0916 10:23:57.187620   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0916 10:23:57.187908   12653 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:57.187925   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0916 10:23:57.188005   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.190192   12653 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0916 10:23:57.190888   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0916 10:23:57.191984   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0916 10:23:57.192004   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0916 10:23:57.192062   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.192462   12653 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-191972"
	I0916 10:23:57.192519   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.193186   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.195485   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0916 10:23:57.196395   12653 addons.go:234] Setting addon default-storageclass=true in "addons-191972"
	I0916 10:23:57.196441   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:23:57.197033   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:23:57.200024   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0916 10:23:57.200756   12653 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0916 10:23:57.202388   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 10:23:57.202409   12653 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 10:23:57.202572   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.204739   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0916 10:23:57.206967   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0916 10:23:57.217725   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0916 10:23:57.217900   12653 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0916 10:23:57.219581   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0916 10:23:57.219714   12653 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0916 10:23:57.219798   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.219620   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0916 10:23:57.220511   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0916 10:23:57.221727   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.235796   12653 out.go:177]   - Using image docker.io/registry:2.8.3
	I0916 10:23:57.237579   12653 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0916 10:23:57.239326   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0916 10:23:57.239350   12653 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0916 10:23:57.239411   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.239514   12653 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0916 10:23:57.241480   12653 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0916 10:23:57.241502   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0916 10:23:57.241555   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.243883   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.255850   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.256610   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.261965   12653 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0916 10:23:57.263559   12653 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0916 10:23:57.265255   12653 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0916 10:23:57.266412   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.267838   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.268005   12653 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:57.268022   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0916 10:23:57.268074   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.269050   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.276483   12653 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:57.276507   12653 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:23:57.276573   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.283025   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.284257   12653 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
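The bash pipeline above rewrites CoreDNS's Corefile in flight: it reads the coredns ConfigMap, uses sed to splice a `hosts` block (mapping 192.168.49.1, the Docker network gateway, to host.minikube.internal) in front of the `forward` plugin and a `log` directive in front of `errors`, then feeds the result back through `kubectl replace`. Assuming the stock kubeadm Corefile, the edited config ends up roughly like this (a sketch; unchanged plugin bodies elided):

    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa { ... }
        prometheus :9153
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf { ... }
        cache 30
        loop
        reload
        loadbalance
    }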
	I0916 10:23:57.288880   12653 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0916 10:23:57.290776   12653 out.go:177]   - Using image docker.io/busybox:stable
	I0916 10:23:57.292419   12653 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:57.292444   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0916 10:23:57.292510   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:23:57.295145   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.295780   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.297628   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.298120   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.300416   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.306147   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.311231   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:23:57.314549   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	W0916 10:23:57.324739   12653 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 10:23:57.324769   12653 retry.go:31] will retry after 374.435778ms: ssh: handshake failed: EOF
	W0916 10:23:57.325602   12653 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0916 10:23:57.325619   12653 retry.go:31] will retry after 150.651165ms: ssh: handshake failed: EOF
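The two warnings above are benign startup noise: sshd inside the freshly created container is not yet accepting connections, so the first handshakes fail with EOF and retry.go schedules another attempt after a randomized delay (374ms and 150ms here), which keeps the parallel addon installers from retrying in lockstep. A minimal sketch of that retry-with-jittered-backoff shape (a generic helper under stated assumptions, not minikube's retry.go):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn until it succeeds or attempts are exhausted, sleeping
    // a randomized duration between tries so concurrent callers spread out.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base/2 + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        tries := 0
        err := retry(5, 300*time.Millisecond, func() error {
            if tries++; tries < 3 {
                return errors.New("ssh: handshake failed: EOF")
            }
            return nil
        })
        fmt.Println("dial result:", err)
    }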
	I0916 10:23:57.330682   12653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:23:57.629690   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0916 10:23:57.729822   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:23:57.730227   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0916 10:23:57.742355   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:23:57.824974   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0916 10:23:57.842831   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 10:23:57.842917   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0916 10:23:57.843332   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0916 10:23:57.921972   12653 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0916 10:23:57.922058   12653 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0916 10:23:57.922011   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0916 10:23:57.922034   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0916 10:23:57.922195   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0916 10:23:57.929874   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0916 10:23:57.929901   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0916 10:23:57.941141   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0916 10:23:57.941166   12653 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0916 10:23:58.138273   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0916 10:23:58.138369   12653 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0916 10:23:58.222261   12653 addons.go:431] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:58.222352   12653 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0916 10:23:58.229572   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0916 10:23:58.229660   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0916 10:23:58.232627   12653 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0916 10:23:58.232698   12653 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0916 10:23:58.322393   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 10:23:58.322420   12653 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 10:23:58.339998   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0916 10:23:58.435282   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0916 10:23:58.435313   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0916 10:23:58.435591   12653 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.15128486s)
	I0916 10:23:58.435618   12653 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0916 10:23:58.436958   12653 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.1062474s)
	I0916 10:23:58.437947   12653 node_ready.go:35] waiting up to 6m0s for node "addons-191972" to be "Ready" ...
	I0916 10:23:58.441471   12653 node_ready.go:49] node "addons-191972" has status "Ready":"True"
	I0916 10:23:58.441502   12653 node_ready.go:38] duration metric: took 3.529013ms for node "addons-191972" to be "Ready" ...
	I0916 10:23:58.441514   12653 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:23:58.442873   12653 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0916 10:23:58.442897   12653 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0916 10:23:58.534045   12653 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace to be "Ready" ...
	I0916 10:23:58.540468   12653 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:58.540496   12653 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 10:23:58.642810   12653 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:58.642885   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0916 10:23:58.728521   12653 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0916 10:23:58.728554   12653 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0916 10:23:58.840472   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0916 10:23:58.921026   12653 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0916 10:23:58.921059   12653 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0916 10:23:58.936525   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0916 10:23:58.936552   12653 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0916 10:23:58.939212   12653 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-191972" context rescaled to 1 replicas
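kubeadm's default coredns Deployment asks for two replicas; on a single-node cluster minikube trims it to one, which is what the kapi.go "rescaled to 1 replicas" line records (the pod deletion it triggers is what the pod_ready "not found (skipping!)" lines just below report). A one-line sketch of the equivalent rescale, done here with kubectl rather than minikube's client-go path (an assumption, not the code behind the log line):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "-n", "kube-system",
            "scale", "deployment/coredns", "--replicas=1").CombinedOutput()
        if err != nil {
            log.Fatalf("scale failed: %v: %s", err, out)
        }
        log.Printf("%s", out) // "deployment.apps/coredns scaled"
    }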
	I0916 10:23:59.131614   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 10:23:59.224079   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0916 10:23:59.224104   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0916 10:23:59.230203   12653 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0916 10:23:59.230238   12653 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0916 10:23:59.423686   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0916 10:23:59.430144   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0916 10:23:59.430176   12653 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0916 10:23:59.433784   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0916 10:23:59.433810   12653 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0916 10:23:59.542608   12653 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:59.542635   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0916 10:23:59.630644   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0916 10:23:59.630734   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0916 10:23:59.840282   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0916 10:23:59.927613   12653 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:23:59.927705   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0916 10:24:00.030859   12653 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0916 10:24:00.030936   12653 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0916 10:24:00.034479   12653 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0916 10:24:00.034549   12653 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0916 10:24:00.038488   12653 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-2l862" not found
	I0916 10:24:00.038522   12653 pod_ready.go:82] duration metric: took 1.504385632s for pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace to be "Ready" ...
	E0916 10:24:00.038535   12653 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-2l862" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-2l862" not found
	I0916 10:24:00.038552   12653 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace to be "Ready" ...
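pod_ready tolerates exactly this sequence: the coredns ReplicaSet was scaled down while the wait was in flight (the kapi rescale above), the pod being watched vanished, so the waiter records "skipping!" and moves on to the surviving replica instead of failing the whole bring-up. A hedged sketch of that poll-with-NotFound-skip logic (kubectl via os/exec and a jsonpath query; minikube itself does this through client-go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady reports whether the pod's Ready condition is "True";
    // found=false means the pod no longer exists and should be skipped.
    func podReady(ns, name string) (ready, found bool) {
        jp := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
        out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name, "-o", jp).CombinedOutput()
        if err != nil {
            if strings.Contains(string(out), "not found") {
                return false, false
            }
            return false, true // transient API error; keep polling
        }
        return strings.TrimSpace(string(out)) == "True", true
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // the log waits "up to 6m0s"
        for time.Now().Before(deadline) {
            ready, found := podReady("kube-system", "coredns-7c65d6cfc9-9rccl")
            switch {
            case !found:
                fmt.Println("pod deleted; skipping")
                return
            case ready:
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod")
    }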
	I0916 10:24:00.333635   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:24:00.339910   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0916 10:24:00.339994   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0916 10:24:00.627234   12653 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0916 10:24:00.627262   12653 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0916 10:24:00.929780   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0916 10:24:00.929809   12653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0916 10:24:01.128973   12653 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:24:01.129062   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0916 10:24:01.334031   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0916 10:24:01.334116   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0916 10:24:01.525220   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0916 10:24:02.022039   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0916 10:24:02.022114   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0916 10:24:02.136463   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:02.532736   12653 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:24:02.532829   12653 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0916 10:24:02.738986   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0916 10:24:04.426813   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0916 10:24:04.426903   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:24:04.456284   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:24:04.624938   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:04.638370   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.008571899s)
	I0916 10:24:04.638414   12653 addons.go:475] Verifying addon ingress=true in "addons-191972"
	I0916 10:24:04.638488   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (6.908226437s)
	I0916 10:24:04.638570   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.908717103s)
	I0916 10:24:04.638623   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.896188028s)
	I0916 10:24:04.638699   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.81369606s)
	I0916 10:24:04.638718   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.795359026s)
	I0916 10:24:04.638742   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.716592394s)
	I0916 10:24:04.641681   12653 out.go:177] * Verifying ingress addon...
	I0916 10:24:04.644857   12653 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0916 10:24:04.722084   12653 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
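The warning above is the API server's optimistic-concurrency check firing: default-storageclass tried to strip the default annotation from local-path using an object copy whose resourceVersion had just been superseded (the storage-provisioner-rancher addon was writing the same StorageClass moments earlier), so the stale update was rejected. It only means two addons raced; a hedged sketch of the manual recovery, re-applying the is-default-class annotations with kubectl patch, which merges server-side and therefore cannot race on a stale resourceVersion:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // setDefault flips the standard is-default-class annotation on a
    // StorageClass via a strategic-merge patch.
    func setDefault(sc string, isDefault bool) error {
        patch := fmt.Sprintf(
            `{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"%t"}}}`,
            isDefault)
        return exec.Command("kubectl", "patch", "storageclass", sc, "-p", patch).Run()
    }

    func main() {
        // Demote the rancher class, promote minikube's "standard" class.
        _ = setDefault("local-path", false)
        _ = setDefault("standard", true)
    }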
	I0916 10:24:04.723574   12653 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0916 10:24:04.723598   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:04.841083   12653 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0916 10:24:04.932849   12653 addons.go:234] Setting addon gcp-auth=true in "addons-191972"
	I0916 10:24:04.932903   12653 host.go:66] Checking if "addons-191972" exists ...
	I0916 10:24:04.933372   12653 cli_runner.go:164] Run: docker container inspect addons-191972 --format={{.State.Status}}
	I0916 10:24:04.957393   12653 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0916 10:24:04.957464   12653 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-191972
	I0916 10:24:04.975728   12653 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/addons-191972/id_rsa Username:docker}
	I0916 10:24:05.150342   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:05.650366   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.149809   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.649391   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:06.834167   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.494119031s)
	I0916 10:24:06.834259   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.993750099s)
	I0916 10:24:06.834355   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.702687859s)
	I0916 10:24:06.834379   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.410662864s)
	I0916 10:24:06.834381   12653 addons.go:475] Verifying addon metrics-server=true in "addons-191972"
	I0916 10:24:06.834394   12653 addons.go:475] Verifying addon registry=true in "addons-191972"
	I0916 10:24:06.834447   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.994082306s)
	I0916 10:24:06.834595   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.500877662s)
	W0916 10:24:06.834635   12653 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0916 10:24:06.834660   12653 retry.go:31] will retry after 180.492463ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
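This failure is a CRD-establishment race, not a broken manifest: the same apply both creates the VolumeSnapshot CRDs and instantiates a VolumeSnapshotClass, and the API server has not finished registering the new kinds when the class is submitted, hence "no matches for kind ... ensure CRDs are installed first". minikube simply retries (the `kubectl apply --force` re-run a few lines below completes), but a hedged sketch of the more deterministic ordering is to wait for each CRD's Established condition before applying resources that use it (paths and names taken from the log; the two-phase split is an assumption, not what minikube does):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func kubectl(args ...string) error {
        cmd := exec.Command("kubectl", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        dir := "/etc/kubernetes/addons/"
        // Phase 1: apply the CRDs themselves.
        for _, f := range []string{
            "snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
            "snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
            "snapshot.storage.k8s.io_volumesnapshots.yaml",
        } {
            if err := kubectl("apply", "-f", dir+f); err != nil {
                log.Fatal(err)
            }
        }
        // Wait until the API server actually serves the new kinds.
        for _, crd := range []string{
            "volumesnapshotclasses.snapshot.storage.k8s.io",
            "volumesnapshotcontents.snapshot.storage.k8s.io",
            "volumesnapshots.snapshot.storage.k8s.io",
        } {
            if err := kubectl("wait", "--for=condition=established",
                "--timeout=60s", "crd/"+crd); err != nil {
                log.Fatal(err)
            }
        }
        // Phase 2: resources that instantiate those kinds.
        if err := kubectl("apply", "-f", dir+"csi-hostpath-snapshotclass.yaml"); err != nil {
            log.Fatal(err)
        }
    }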
	I0916 10:24:06.834694   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (5.309367322s)
	I0916 10:24:06.836029   12653 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-191972 service yakd-dashboard -n yakd-dashboard
	
	I0916 10:24:06.836032   12653 out.go:177] * Verifying registry addon...
	I0916 10:24:06.838577   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0916 10:24:06.842659   12653 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0916 10:24:06.842681   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.016329   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0916 10:24:07.122253   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:07.229433   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.346049   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.428384   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.689342475s)
	I0916 10:24:07.428423   12653 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-191972"
	I0916 10:24:07.428557   12653 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.471115449s)
	I0916 10:24:07.430137   12653 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0916 10:24:07.430140   12653 out.go:177] * Verifying csi-hostpath-driver addon...
	I0916 10:24:07.432142   12653 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0916 10:24:07.433350   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0916 10:24:07.433452   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0916 10:24:07.433472   12653 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0916 10:24:07.446890   12653 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0916 10:24:07.446929   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:07.523198   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0916 10:24:07.523247   12653 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0916 10:24:07.543809   12653 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:07.543877   12653 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0916 10:24:07.627288   12653 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0916 10:24:07.649744   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:07.842799   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:07.943700   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.149515   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.343117   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.438263   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:08.651360   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:08.739263   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.722876496s)
	I0916 10:24:08.739377   12653 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.111993041s)
	I0916 10:24:08.740565   12653 addons.go:475] Verifying addon gcp-auth=true in "addons-191972"
	I0916 10:24:08.742658   12653 out.go:177] * Verifying gcp-auth addon...
	I0916 10:24:08.744959   12653 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0916 10:24:08.752275   12653 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:24:08.842486   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:08.937942   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.148485   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.342745   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.444884   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:09.544117   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:09.649057   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:09.850158   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:09.951607   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.149384   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.342403   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.437953   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:10.648926   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:10.842555   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:10.938628   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.149265   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.341824   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.438269   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:11.544664   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:11.649663   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:11.842706   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:11.938382   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.149747   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.341485   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.438115   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:12.649444   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:12.842417   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:12.937808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.149247   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.342184   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.443397   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:13.544742   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:13.649342   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:13.842433   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:13.938156   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.148884   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.342230   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.437378   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:14.648929   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:14.841404   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:14.938373   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.148947   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.342062   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:15.437442   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:15.544833   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:15.649729   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:15.875330   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.063181   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.148410   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.342704   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.437759   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:16.649599   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:16.842196   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:16.937322   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.148973   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.342240   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.438331   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:17.649287   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:17.842346   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:17.937786   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.044459   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:18.148462   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.342098   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.438245   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:18.650618   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:18.842115   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:18.937393   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.148210   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.342331   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.437753   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:19.649206   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:19.841659   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:19.937929   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.149095   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.341559   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.437389   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:20.543697   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:20.649389   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:20.841724   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:20.939911   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.148803   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.341867   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.437743   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:21.649220   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:21.841636   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:21.937733   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.148853   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.341623   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.438291   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:22.544155   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:22.648789   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:22.842117   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:22.937569   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.148605   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.342228   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.437946   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:23.648725   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:23.848611   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:23.937702   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.148830   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.341472   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.437746   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:24.648857   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:24.841524   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:24.937579   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.043875   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:25.148986   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.341729   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.438614   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:25.648859   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:25.842571   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:25.937660   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.148067   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.342525   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.442495   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:26.649368   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:26.841986   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:26.937210   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.044290   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:27.148266   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.341925   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.437369   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:27.648710   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:27.842271   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:27.937289   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.149389   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.341712   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.437988   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:28.649507   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:28.841935   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:28.937651   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.148305   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.341758   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.437230   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:29.544648   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:29.648789   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:29.842453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:29.937780   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.149144   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.341971   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.436935   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:30.648826   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:30.842241   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:30.937301   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.148532   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.342364   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.438028   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:31.649021   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:31.842529   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:31.938084   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.044452   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:32.148477   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.342165   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.437629   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:32.649007   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:32.841446   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:32.937583   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.148965   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.341801   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.437144   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:33.649484   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:33.842344   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:33.937348   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.148522   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.342404   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.438126   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:34.543640   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:34.649404   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:34.842417   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:34.937940   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.149191   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.341955   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.437296   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:35.649499   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:35.841951   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:35.937835   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.148878   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.342396   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.437451   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:36.648935   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:36.841429   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:36.937515   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.043652   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:37.148879   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.341650   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.438917   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:37.648863   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:37.843665   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:37.937755   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.148476   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.342129   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.437617   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:38.648850   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:38.842096   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:38.937210   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.044295   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:39.148546   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.342070   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.437434   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:39.649394   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:39.850992   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:39.937068   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.148412   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.342026   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.438818   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:40.648424   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:40.842673   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:40.937959   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.149077   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.341573   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.437823   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:41.544866   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:41.649385   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:41.842400   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:41.942736   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.148726   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.342124   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.438550   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:42.649404   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:42.841927   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:42.937808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.149523   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.341957   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.437318   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:43.545247   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:43.648618   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:43.842970   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:43.938236   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.149170   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.342180   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.437399   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:44.649533   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:44.842942   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:44.937846   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.149581   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.342185   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.437873   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:45.649109   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:45.842031   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:45.937050   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.043865   12653 pod_ready.go:103] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"False"
	I0916 10:24:46.149131   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.342272   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.437555   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:46.649645   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:46.850195   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:46.951731   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.044952   12653 pod_ready.go:93] pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.044977   12653 pod_ready.go:82] duration metric: took 47.006412913s for pod "coredns-7c65d6cfc9-9rccl" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.044991   12653 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.048830   12653 pod_ready.go:93] pod "etcd-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.048847   12653 pod_ready.go:82] duration metric: took 3.848159ms for pod "etcd-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.048861   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.052536   12653 pod_ready.go:93] pod "kube-apiserver-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.052558   12653 pod_ready.go:82] duration metric: took 3.691187ms for pod "kube-apiserver-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.052566   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.056167   12653 pod_ready.go:93] pod "kube-controller-manager-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.056192   12653 pod_ready.go:82] duration metric: took 3.620465ms for pod "kube-controller-manager-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.056201   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fnr7f" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.060021   12653 pod_ready.go:93] pod "kube-proxy-fnr7f" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.060038   12653 pod_ready.go:82] duration metric: took 3.830746ms for pod "kube-proxy-fnr7f" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.060046   12653 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.149672   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.342533   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.437808   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:47.441161   12653 pod_ready.go:93] pod "kube-scheduler-addons-191972" in "kube-system" namespace has status "Ready":"True"
	I0916 10:24:47.441181   12653 pod_ready.go:82] duration metric: took 381.129532ms for pod "kube-scheduler-addons-191972" in "kube-system" namespace to be "Ready" ...
	I0916 10:24:47.441188   12653 pod_ready.go:39] duration metric: took 48.999654984s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
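The pod_ready lines above poll each system pod's Ready condition until it flips to "True". A minimal client-go sketch of that style of check follows; it is an editorial illustration, not minikube's own pod_ready.go, and it assumes a kubeconfig at the default home location. The pod name is taken from the log above.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True, the same
    // status the log prints as: has status "Ready":"True" / "False".
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Assumption: kubeconfig in the default location (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(
                context.TODO(), "coredns-7c65d6cfc9-9rccl", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // roughly the cadence seen in the log
        }
    }
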
	I0916 10:24:47.441205   12653 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:24:47.441254   12653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:24:47.453909   12653 api_server.go:72] duration metric: took 50.345260117s to wait for apiserver process to appear ...
	I0916 10:24:47.453935   12653 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:24:47.453960   12653 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:24:47.458673   12653 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:24:47.459648   12653 api_server.go:141] control plane version: v1.31.1
	I0916 10:24:47.459673   12653 api_server.go:131] duration metric: took 5.729621ms to wait for apiserver health ...
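The healthz wait above is a plain HTTPS GET that succeeds once /healthz returns 200 with body "ok". A standalone sketch of the same probe, using the endpoint shown in the log; TLS verification is skipped here for brevity, whereas minikube itself authenticates with the cluster's CA certificate.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the log above; expect 200 with body "ok".
        const healthz = "https://192.168.49.2:8443/healthz"

        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch-only shortcut: skip certificate verification.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(healthz)
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s returned %d: %s\n", healthz, resp.StatusCode, body)
    }
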
	I0916 10:24:47.459683   12653 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:24:47.648237   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:47.648583   12653 system_pods.go:59] 19 kube-system pods found
	I0916 10:24:47.648620   12653 system_pods.go:61] "coredns-7c65d6cfc9-9rccl" [f2ffddc5-3995-4d5a-8059-297b3859f9c5] Running
	I0916 10:24:47.648634   12653 system_pods.go:61] "csi-hostpath-attacher-0" [07b91be6-2f2b-441c-80ee-f832e9ee2e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:24:47.648642   12653 system_pods.go:61] "csi-hostpath-resizer-0" [311c306b-accd-4242-8415-81199a4ce054] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:24:47.648653   12653 system_pods.go:61] "csi-hostpathplugin-qdnbn" [8f5408a2-a7eb-4f76-8ce0-b885fb4b47e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:24:47.648667   12653 system_pods.go:61] "etcd-addons-191972" [81af20b7-9b19-4723-9b92-0ded3d775cd3] Running
	I0916 10:24:47.648673   12653 system_pods.go:61] "kindnet-rxp8k" [02b143c0-bbb4-4f94-8448-9c3c4f248a87] Running
	I0916 10:24:47.648678   12653 system_pods.go:61] "kube-apiserver-addons-191972" [1aabf917-f381-4e69-8524-954958c99b7e] Running
	I0916 10:24:47.648684   12653 system_pods.go:61] "kube-controller-manager-addons-191972" [ee796e67-bd06-4d93-9d20-aabcbb395ba2] Running
	I0916 10:24:47.648690   12653 system_pods.go:61] "kube-ingress-dns-minikube" [0f28fa0b-84f1-4215-aa41-4596ab4cef8b] Running
	I0916 10:24:47.648696   12653 system_pods.go:61] "kube-proxy-fnr7f" [a9e53f94-30ad-4178-b9e2-3ba4354a5adf] Running
	I0916 10:24:47.648700   12653 system_pods.go:61] "kube-scheduler-addons-191972" [32bbb72d-e291-4c84-8709-6066cf10b0cf] Running
	I0916 10:24:47.648709   12653 system_pods.go:61] "metrics-server-84c5f94fbc-s7654" [14e280ea-8ba8-4805-844c-aeff8fb18ce0] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 10:24:47.648719   12653 system_pods.go:61] "nvidia-device-plugin-daemonset-vpb85" [14ca6c72-b73b-4254-910a-0b876ca73f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:24:47.648732   12653 system_pods.go:61] "registry-66c9cd494c-vsbgv" [bd70fcec-e032-4dbd-902c-a139ac179bbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:24:47.648740   12653 system_pods.go:61] "registry-proxy-6vsnj" [05d6014b-9706-4d7a-a816-dbc7f557cd15] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:24:47.648749   12653 system_pods.go:61] "snapshot-controller-56fcc65765-4g9w6" [ad55cf42-58df-4d10-aae8-7626a86e7e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:47.648760   12653 system_pods.go:61] "snapshot-controller-56fcc65765-htkmc" [70a5c810-f514-4b23-b3a3-7a4cc46c35e2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:47.648766   12653 system_pods.go:61] "storage-provisioner" [ca9a25b7-8324-4ea1-a525-24f8c308baea] Running
	I0916 10:24:47.648777   12653 system_pods.go:61] "tiller-deploy-b48cc5f79-ddkxz" [dfe534c4-9e29-4907-b8cc-1dd12fc52f45] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:24:47.648789   12653 system_pods.go:74] duration metric: took 189.097544ms to wait for pod list to return data ...
	I0916 10:24:47.648801   12653 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:24:47.841018   12653 default_sa.go:45] found service account: "default"
	I0916 10:24:47.841043   12653 default_sa.go:55] duration metric: took 192.233696ms for default service account to be created ...
	I0916 10:24:47.841053   12653 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:24:47.841394   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:47.937402   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.049475   12653 system_pods.go:86] 19 kube-system pods found
	I0916 10:24:48.049509   12653 system_pods.go:89] "coredns-7c65d6cfc9-9rccl" [f2ffddc5-3995-4d5a-8059-297b3859f9c5] Running
	I0916 10:24:48.049523   12653 system_pods.go:89] "csi-hostpath-attacher-0" [07b91be6-2f2b-441c-80ee-f832e9ee2e18] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0916 10:24:48.049533   12653 system_pods.go:89] "csi-hostpath-resizer-0" [311c306b-accd-4242-8415-81199a4ce054] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0916 10:24:48.049541   12653 system_pods.go:89] "csi-hostpathplugin-qdnbn" [8f5408a2-a7eb-4f76-8ce0-b885fb4b47e5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0916 10:24:48.049546   12653 system_pods.go:89] "etcd-addons-191972" [81af20b7-9b19-4723-9b92-0ded3d775cd3] Running
	I0916 10:24:48.049550   12653 system_pods.go:89] "kindnet-rxp8k" [02b143c0-bbb4-4f94-8448-9c3c4f248a87] Running
	I0916 10:24:48.049554   12653 system_pods.go:89] "kube-apiserver-addons-191972" [1aabf917-f381-4e69-8524-954958c99b7e] Running
	I0916 10:24:48.049560   12653 system_pods.go:89] "kube-controller-manager-addons-191972" [ee796e67-bd06-4d93-9d20-aabcbb395ba2] Running
	I0916 10:24:48.049569   12653 system_pods.go:89] "kube-ingress-dns-minikube" [0f28fa0b-84f1-4215-aa41-4596ab4cef8b] Running
	I0916 10:24:48.049572   12653 system_pods.go:89] "kube-proxy-fnr7f" [a9e53f94-30ad-4178-b9e2-3ba4354a5adf] Running
	I0916 10:24:48.049576   12653 system_pods.go:89] "kube-scheduler-addons-191972" [32bbb72d-e291-4c84-8709-6066cf10b0cf] Running
	I0916 10:24:48.049579   12653 system_pods.go:89] "metrics-server-84c5f94fbc-s7654" [14e280ea-8ba8-4805-844c-aeff8fb18ce0] Running
	I0916 10:24:48.049587   12653 system_pods.go:89] "nvidia-device-plugin-daemonset-vpb85" [14ca6c72-b73b-4254-910a-0b876ca73f90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0916 10:24:48.049595   12653 system_pods.go:89] "registry-66c9cd494c-vsbgv" [bd70fcec-e032-4dbd-902c-a139ac179bbf] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0916 10:24:48.049600   12653 system_pods.go:89] "registry-proxy-6vsnj" [05d6014b-9706-4d7a-a816-dbc7f557cd15] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0916 10:24:48.049605   12653 system_pods.go:89] "snapshot-controller-56fcc65765-4g9w6" [ad55cf42-58df-4d10-aae8-7626a86e7e0d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:48.049613   12653 system_pods.go:89] "snapshot-controller-56fcc65765-htkmc" [70a5c810-f514-4b23-b3a3-7a4cc46c35e2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0916 10:24:48.049618   12653 system_pods.go:89] "storage-provisioner" [ca9a25b7-8324-4ea1-a525-24f8c308baea] Running
	I0916 10:24:48.049625   12653 system_pods.go:89] "tiller-deploy-b48cc5f79-ddkxz" [dfe534c4-9e29-4907-b8cc-1dd12fc52f45] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0916 10:24:48.049634   12653 system_pods.go:126] duration metric: took 208.573497ms to wait for k8s-apps to be running ...
	I0916 10:24:48.049644   12653 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:24:48.049682   12653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:24:48.060846   12653 system_svc.go:56] duration metric: took 11.19263ms WaitForService to wait for kubelet
	I0916 10:24:48.060871   12653 kubeadm.go:582] duration metric: took 50.952228588s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
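The kubelet check above shells out to systemd; in the test run it goes through ssh_runner.go into the node, but locally the same probe reduces to an exit-code test. A sketch using os/exec, with the command copied verbatim from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Command as it appears in the log. `is-active` exits 0 when the
        // unit is active and non-zero otherwise, so with --quiet the exit
        // code is the only signal we need.
        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet service is not active:", err)
            return
        }
        fmt.Println("kubelet service is active")
    }
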
	I0916 10:24:48.060890   12653 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:24:48.148219   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.242671   12653 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:24:48.242705   12653 node_conditions.go:123] node cpu capacity is 8
	I0916 10:24:48.242718   12653 node_conditions.go:105] duration metric: took 181.823571ms to run NodePressure ...
	I0916 10:24:48.242730   12653 start.go:241] waiting for startup goroutines ...
	I0916 10:24:48.342074   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.437253   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:48.650425   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:48.850814   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:48.937328   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.149694   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.342609   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.438289   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:49.649584   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:49.842847   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:49.936933   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.149348   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.342164   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.438163   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:50.649197   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:50.853453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:50.938034   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.148940   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.341925   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.437207   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:51.649501   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:51.841516   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:51.937843   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.148973   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.341463   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.437548   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:52.649904   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:52.842395   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:52.938876   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.150346   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.342226   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.437852   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:53.650214   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:53.841999   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:53.938041   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.149543   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.342470   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.438196   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:54.649301   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:54.842219   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:54.937405   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.148757   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.342352   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.437453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:55.649467   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:55.842884   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:55.938335   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.149527   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.342461   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.438207   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:56.649107   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:56.841744   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:56.938316   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.150214   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.342941   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.438321   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:57.650060   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:57.841776   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:57.937801   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.148724   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.342609   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.437714   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:58.648506   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:58.842214   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:58.937202   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.149022   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.341924   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.437205   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:24:59.649919   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:24:59.842721   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0916 10:24:59.943895   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.148461   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.342965   12653 kapi.go:107] duration metric: took 53.504381408s to wait for kubernetes.io/minikube-addons=registry ...
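The kapi.go:96 lines that dominate this log are a label-selector wait: list the pods matching a selector, log the current phase, retry until every match is Running, then emit the kapi.go:107 duration line seen just above. An illustrative client-go loop for the registry selector follows; this is an editorial sketch (the "kube-system" namespace and 500ms poll interval are assumptions), not minikube's kapi.go itself.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Selector taken from the log above.
        const selector = "kubernetes.io/minikube-addons=registry"

        for {
            pods, err := client.CoreV1().Pods("kube-system").List(
                context.TODO(), metav1.ListOptions{LabelSelector: selector})
            if err == nil {
                running := 0
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        running++
                    }
                }
                fmt.Printf("%d/%d pods matching %q are Running\n",
                    running, len(pods.Items), selector)
                if len(pods.Items) > 0 && running == len(pods.Items) {
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
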
	I0916 10:25:00.438324   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:00.649093   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:00.937839   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.148871   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.436988   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:01.649359   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:01.937842   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.149127   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.439235   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:02.648644   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:02.937625   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.148437   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.438471   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:03.649883   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:03.936881   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.149787   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.438325   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:04.649405   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:04.937307   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.148501   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.437162   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:05.649408   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:05.937329   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.148922   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.437615   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:06.648794   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:06.937817   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.149424   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.437622   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:07.648805   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:07.975373   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.148579   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.438130   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:08.649051   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:08.938155   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.241812   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.438112   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:09.649051   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:09.937597   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.148065   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.438452   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:10.649615   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:10.937657   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.150286   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.438138   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:11.648515   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:11.938254   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.148855   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.437045   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:12.648984   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:12.937480   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.149222   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.437879   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:13.648073   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:13.937714   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.148744   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.437856   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:14.648905   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:14.937125   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.149947   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.438534   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:15.649415   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:15.938563   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.148929   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.437971   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:16.649574   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:16.938374   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.149584   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.437332   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:17.649230   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:17.939095   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.148655   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.437781   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:18.648991   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:18.937887   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.149216   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.437411   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:19.649222   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:19.937654   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.149853   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.438168   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:20.648811   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:20.948409   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.172608   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.655855   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:21.656415   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:21.973917   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.149178   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.438576   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:22.649097   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:22.939034   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.149425   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.438124   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:23.650285   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:23.938421   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.148909   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.441944   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:24.649383   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:24.938850   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.149722   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.437832   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:25.649648   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:25.938500   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.149259   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.437884   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:26.649790   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:26.937641   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.149739   12653 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0916 10:25:27.438223   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:27.648728   12653 kapi.go:107] duration metric: took 1m23.003864669s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0916 10:25:27.938153   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.438461   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:28.939228   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.438060   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:29.937952   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.438284   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:30.938383   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0916 10:25:31.437781   12653 kapi.go:107] duration metric: took 1m24.004430138s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0916 10:26:53.748019   12653 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0916 10:26:53.748042   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:54.248033   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:54.748085   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:55.248231   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:55.748800   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:56.251601   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:56.748202   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:57.248415   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:57.748866   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:58.248439   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:58.748615   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:59.248797   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:26:59.748674   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:00.248751   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:00.748977   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:01.247802   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:01.749050   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:02.247827   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:02.751439   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:03.248607   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:03.748774   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:04.248993   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:04.748179   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:05.248453   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:05.748269   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:06.248843   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:06.749191   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:07.248224   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:07.748003   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:08.248208   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:08.748339   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:09.248558   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:09.748890   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:10.247853   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:10.748462   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:11.248698   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:11.748605   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:12.249209   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:12.747956   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:13.247977   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:13.748012   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:14.248098   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:14.748444   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:15.248890   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:15.748752   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:16.248803   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:16.749124   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:17.248063   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:17.747865   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:18.247931   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:18.748279   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:19.248473   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:19.748289   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:20.248375   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:20.748484   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:21.248848   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:21.748816   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:22.247827   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:22.748462   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:23.248760   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:23.749167   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:24.248424   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:24.748963   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:25.248350   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:25.748222   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:26.248413   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:26.748789   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:27.247908   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:27.747837   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:28.248226   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:28.748371   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:29.249618   12653 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0916 10:27:29.748597   12653 kapi.go:107] duration metric: took 3m21.003635946s to wait for kubernetes.io/minikube-addons=gcp-auth ...
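The interleaved kapi.go:96 lines above are minikube polling each addon's pods by label selector roughly every 500ms until they leave Pending, then emitting the kapi.go:107 duration metric. A minimal client-go sketch of that pattern (waitForLabeledPods is an illustrative helper, not minikube's actual code) might look like:

    package poll

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForLabeledPods polls every 500ms until every pod matching selector
    // is Running, printing the same kind of "waiting for pod" line seen in
    // the log above. Sketch only; minikube's kapi.go differs in detail.
    func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
    	start := time.Now()
    	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil || len(pods.Items) == 0 {
    				return false, nil // transient error or no pods yet: keep polling
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    	if err == nil {
    		fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
    	}
    	return err
    }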
	I0916 10:27:29.750701   12653 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-191972 cluster.
	I0916 10:27:29.752412   12653 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0916 10:27:29.754028   12653 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0916 10:27:29.756074   12653 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, storage-provisioner-rancher, volcano, helm-tiller, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0916 10:27:29.757930   12653 addons.go:510] duration metric: took 3m32.649258168s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner storage-provisioner-rancher volcano helm-tiller metrics-server inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0916 10:27:29.758012   12653 start.go:246] waiting for cluster config update ...
	I0916 10:27:29.758039   12653 start.go:255] writing updated cluster config ...
	I0916 10:27:29.758383   12653 ssh_runner.go:195] Run: rm -f paused
	I0916 10:27:29.765351   12653 out.go:177] * Done! kubectl is now configured to use "addons-191972" cluster and "default" namespace by default
	E0916 10:27:29.767004   12653 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
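Per the gcp-auth notes above, the webhook mounts credentials into every new pod unless the pod carries the gcp-auth-skip-secret label. A sketch of an opted-out pod spec follows; the pod name, image, and the label value "true" are assumptions, since the log only names the label key:

    package podspec

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // SkipGCPAuthPod returns a pod that opts out of gcp-auth credential
    // mounting via the label mentioned in the log above. Name, image, and
    // the label value are placeholders.
    func SkipGCPAuthPod() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:   "no-creds",
    			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
    		},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{Name: "app", Image: "busybox"}},
    		},
    	}
    }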
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	cfade64badb92       db2fc13d44d50       11 minutes ago      Running             gcp-auth                                 0                   99d0fe27850b3       gcp-auth-89d5ffd79-6r2td
	df81f1fc28725       a876393c9504b       12 minutes ago      Running             admission                                0                   0aa4b1d0acb5a       volcano-admission-77d7d48b68-rcfsk
	9dd4a83ba6d70       6041e92ec449f       12 minutes ago      Running             volcano-scheduler                        1                   9564c1c96bcee       volcano-scheduler-576bc46687-jtz7f
	72101e37ab665       738351fd438f0       13 minutes ago      Running             csi-snapshotter                          0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	da8f6a34306e1       931dbfd16f87c       13 minutes ago      Running             csi-provisioner                          0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	1649420a66573       e899260153aed       13 minutes ago      Running             liveness-probe                           0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	e0e474b6d95e5       e255e073c508c       13 minutes ago      Running             hostpath                                 0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	d5fc898fd874b       a80c8fd6e5229       13 minutes ago      Running             controller                               0                   30db636a12234       ingress-nginx-controller-bc57996ff-lpb7q
	06d43e898075b       88ef14a257f42       13 minutes ago      Running             node-driver-registrar                    0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	39c5183f27011       ce263a8653f9c       13 minutes ago      Exited              patch                                    0                   589d98ccee909       ingress-nginx-admission-patch-8f8nz
	a8bb0086c52b5       6041e92ec449f       13 minutes ago      Exited              volcano-scheduler                        0                   9564c1c96bcee       volcano-scheduler-576bc46687-jtz7f
	ddf31d8b68bc1       a876393c9504b       13 minutes ago      Exited              main                                     0                   b49978f431ab4       volcano-admission-init-57gk4
	06cf11b7a83f9       ce263a8653f9c       13 minutes ago      Exited              create                                   0                   6301c91177942       ingress-nginx-admission-create-5rjsx
	1cd468b4437bd       a1ed5895ba635       13 minutes ago      Running             csi-external-health-monitor-controller   0                   73e3689e18fe9       csi-hostpathplugin-qdnbn
	79266075c79ff       59cbb42146a37       13 minutes ago      Running             csi-attacher                             0                   a4c401b363464       csi-hostpath-attacher-0
	c65d9de60c2d0       aa61ee9c70bc4       13 minutes ago      Running             volume-snapshot-controller               0                   dba5883c9dc9b       snapshot-controller-56fcc65765-4g9w6
	0c025c1b7dd4c       19a639eda60f0       13 minutes ago      Running             csi-resizer                              0                   176615116e8de       csi-hostpath-resizer-0
	c7d7b6bb58927       96e410111f023       13 minutes ago      Running             volcano-controllers                      0                   84cb34271a61b       volcano-controllers-56675bb4d5-hdpdb
	6819af68287c4       aa61ee9c70bc4       13 minutes ago      Running             volume-snapshot-controller               0                   bb404cbffba4e       snapshot-controller-56fcc65765-htkmc
	576d6c9483015       48d9cfaaf3904       13 minutes ago      Running             metrics-server                           0                   debbe4f662687       metrics-server-84c5f94fbc-s7654
	3c2ba113f3a92       c69fa2e9cbf5f       13 minutes ago      Running             coredns                                  0                   e557eec597dbb       coredns-7c65d6cfc9-9rccl
	74825d98cba88       e16d1e3a10667       13 minutes ago      Running             local-path-provisioner                   0                   1e611781a41cb       local-path-provisioner-86d989889c-w6mf9
	dfe8c0b03e5c3       30dd67412fdea       14 minutes ago      Running             minikube-ingress-dns                     0                   6682d7fdc0949       kube-ingress-dns-minikube
	62a4b8c25074d       6e38f40d628db       14 minutes ago      Running             storage-provisioner                      0                   54247c11bac23       storage-provisioner
	4c4482bfa98cf       12968670680f4       14 minutes ago      Running             kindnet-cni                              0                   48c4106711b6e       kindnet-rxp8k
	d9d3353287790       60c005f310ff3       14 minutes ago      Running             kube-proxy                               0                   b70e27ed4bc15       kube-proxy-fnr7f
	6e4dbd39a8ef5       175ffd71cce3d       14 minutes ago      Running             kube-controller-manager                  0                   f593f7267aeda       kube-controller-manager-addons-191972
	c76b948fbd083       6bab7719df100       14 minutes ago      Running             kube-apiserver                           0                   a7eb33c199dbc       kube-apiserver-addons-191972
	0539bdd901d4a       9aa1fad941575       14 minutes ago      Running             kube-scheduler                           0                   3aba8d618e3fa       kube-scheduler-addons-191972
	92c65a04535dd       2e96e5913fc06       14 minutes ago      Running             etcd                                     0                   84fc0865b25fe       etcd-addons-191972
	
	
	==> containerd <==
	Sep 16 10:33:51 addons-191972 containerd[858]: time="2024-09-16T10:33:51.718739801Z" level=info msg="RemovePodSandbox \"e900b36241c1f65531303fa71becfdd0cc9f3b3a9824c2167224d1221a60bba1\" returns successfully"
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.893088028Z" level=info msg="StopContainer for \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\" with timeout 30 (s)"
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.893678641Z" level=info msg="Stop container \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\" with signal terminated"
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.949783990Z" level=info msg="shim disconnected" id=89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f namespace=k8s.io
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.949848132Z" level=warning msg="cleaning up after shim disconnected" id=89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f namespace=k8s.io
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.949861213Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.966111874Z" level=info msg="StopContainer for \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\" returns successfully"
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.966683146Z" level=info msg="StopPodSandbox for \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\""
	Sep 16 10:34:13 addons-191972 containerd[858]: time="2024-09-16T10:34:13.966753968Z" level=info msg="Container to stop \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.020680304Z" level=info msg="shim disconnected" id=79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec namespace=k8s.io
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.020753568Z" level=warning msg="cleaning up after shim disconnected" id=79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec namespace=k8s.io
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.020766147Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.072500899Z" level=info msg="TearDown network for sandbox \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\" successfully"
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.072542928Z" level=info msg="StopPodSandbox for \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\" returns successfully"
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.555396151Z" level=info msg="RemoveContainer for \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\""
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.564554463Z" level=info msg="RemoveContainer for \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\" returns successfully"
	Sep 16 10:34:14 addons-191972 containerd[858]: time="2024-09-16T10:34:14.565133715Z" level=error msg="ContainerStatus for \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\": not found"
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.722950975Z" level=info msg="StopPodSandbox for \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\""
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.735667444Z" level=info msg="TearDown network for sandbox \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\" successfully"
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.735697631Z" level=info msg="StopPodSandbox for \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\" returns successfully"
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.736003533Z" level=info msg="RemovePodSandbox for \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\""
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.736041465Z" level=info msg="Forcibly stopping sandbox \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\""
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.743227381Z" level=info msg="TearDown network for sandbox \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\" successfully"
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.747713672Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 16 10:34:51 addons-191972 containerd[858]: time="2024-09-16T10:34:51.747853738Z" level=info msg="RemovePodSandbox \"79bab02e559b8717ec0b0e5e6dd4571fe23f373c4108f24ed2348682765448ec\" returns successfully"
	
	
	==> coredns [3c2ba113f3a928b6de94c4ca0bf607534ff798f3d85ffd2a7685ed6dacc00744] <==
	[INFO] 10.244.0.3:34722 - 16813 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126799s
	[INFO] 10.244.0.3:47807 - 19593 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000078163s
	[INFO] 10.244.0.3:47807 - 48005 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012131s
	[INFO] 10.244.0.3:52137 - 389 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004304691s
	[INFO] 10.244.0.3:52137 - 40577 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004777432s
	[INFO] 10.244.0.3:37044 - 23366 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.003875752s
	[INFO] 10.244.0.3:37044 - 14153 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004520489s
	[INFO] 10.244.0.3:37775 - 29429 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003806717s
	[INFO] 10.244.0.3:37775 - 41674 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.003872738s
	[INFO] 10.244.0.3:58704 - 7476 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000090446s
	[INFO] 10.244.0.3:58704 - 1849 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000134094s
	[INFO] 10.244.0.25:38825 - 37363 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216144s
	[INFO] 10.244.0.25:38931 - 39307 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000245831s
	[INFO] 10.244.0.25:50024 - 16483 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164924s
	[INFO] 10.244.0.25:42236 - 32299 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000196632s
	[INFO] 10.244.0.25:49331 - 38072 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000114124s
	[INFO] 10.244.0.25:36861 - 61813 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000164666s
	[INFO] 10.244.0.25:33081 - 5019 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.00927584s
	[INFO] 10.244.0.25:32825 - 10257 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.009718235s
	[INFO] 10.244.0.25:50215 - 44243 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007980557s
	[INFO] 10.244.0.25:46089 - 36172 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008374403s
	[INFO] 10.244.0.25:60708 - 60516 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.00523636s
	[INFO] 10.244.0.25:53932 - 3930 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005436837s
	[INFO] 10.244.0.25:33968 - 30856 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002295196s
	[INFO] 10.244.0.25:51453 - 49493 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002387298s
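The NXDOMAIN bursts above are the cluster DNS search path at work: each search suffix is appended to the queried name in turn, and only the final bare lookup returns NOERROR. A rough sketch of that expansion, with the suffix list copied from the queries above (a real resolver reads the list from resolv.conf and only expands names whose dot count is below its ndots option):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // expand mimics a resolver's search-list expansion, which produces the
    // candidate names seen in the coredns log above.
    func expand(name string, search []string) []string {
    	if strings.HasSuffix(name, ".") {
    		return []string{name} // fully qualified: tried as-is only
    	}
    	out := make([]string, 0, len(search)+1)
    	for _, s := range search {
    		out = append(out, name+"."+s)
    	}
    	return append(out, name) // bare name is tried last
    }

    func main() {
    	// Suffixes inferred from the queries logged above.
    	search := []string{
    		"svc.cluster.local",
    		"cluster.local",
    		"europe-west2-a.c.k8s-minikube.internal",
    		"c.k8s-minikube.internal",
    		"google.internal",
    	}
    	for _, q := range expand("registry.kube-system.svc.cluster.local", search) {
    		fmt.Println(q)
    	}
    }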
	
	
	==> describe nodes <==
	Name:               addons-191972
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-191972
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=addons-191972
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_23_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-191972
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-191972"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:23:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-191972
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:38:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:37:58 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:37:58 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:37:58 +0000   Mon, 16 Sep 2024 10:23:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:37:58 +0000   Mon, 16 Sep 2024 10:23:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-191972
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 0263fbb37d3545b09ff38a7b68907e4c
	  System UUID:                45c87f39-d597-4b0c-a097-439ebdb945ff
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  gcp-auth                    gcp-auth-89d5ffd79-6r2td                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-lpb7q    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         14m
	  kube-system                 coredns-7c65d6cfc9-9rccl                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     14m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 csi-hostpathplugin-qdnbn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-addons-191972                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         14m
	  kube-system                 kindnet-rxp8k                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-addons-191972                250m (3%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-addons-191972       200m (2%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-fnr7f                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-addons-191972                100m (1%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-84c5f94fbc-s7654             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         14m
	  kube-system                 snapshot-controller-56fcc65765-4g9w6        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 snapshot-controller-56fcc65765-htkmc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  local-path-storage          local-path-provisioner-86d989889c-w6mf9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  volcano-system              volcano-admission-77d7d48b68-rcfsk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  volcano-system              volcano-controllers-56675bb4d5-hdpdb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  volcano-system              volcano-scheduler-576bc46687-jtz7f          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 14m   kube-proxy       
	  Normal   Starting                 14m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m   kubelet          Node addons-191972 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m   kubelet          Node addons-191972 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m   kubelet          Node addons-191972 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m   node-controller  Node addons-191972 event: Registered Node addons-191972 in Controller
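For reference, the percentages in the Allocated resources table above are taken against the node's allocatable capacity: the per-pod CPU Requests column sums to 1050m, and 1050m of 8 allocatable cores (8000m) is about 13%; 510Mi of memory requests against 32859316Ki allocatable is roughly 1.6%, which the table shows as 1%.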
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [92c65a04535ddef6879f2eb4260843c6961d1fb2395f595b3a5665263c562002] <==
	{"level":"info","ts":"2024-09-16T10:23:47.262322Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:23:47.262576Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:24:15.873285Z","caller":"traceutil/trace.go:171","msg":"trace[187537689] transaction","detail":"{read_only:false; response_revision:986; number_of_response:1; }","duration":"119.841789ms","start":"2024-09-16T10:24:15.753419Z","end":"2024-09-16T10:24:15.873261Z","steps":["trace[187537689] 'process raft request'  (duration: 119.705144ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:24:16.060589Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.178284ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:24:16.060680Z","caller":"traceutil/trace.go:171","msg":"trace[2127996318] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:986; }","duration":"125.313412ms","start":"2024-09-16T10:24:15.935346Z","end":"2024-09-16T10:24:16.060659Z","steps":["trace[2127996318] 'range keys from in-memory index tree'  (duration: 125.097316ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:07.796336Z","caller":"traceutil/trace.go:171","msg":"trace[28147226] transaction","detail":"{read_only:false; response_revision:1242; number_of_response:1; }","duration":"128.826483ms","start":"2024-09-16T10:25:07.667485Z","end":"2024-09-16T10:25:07.796311Z","steps":["trace[28147226] 'process raft request'  (duration: 41.106171ms)","trace[28147226] 'compare'  (duration: 87.53434ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.424319Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.488522ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031931970271159 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" mod_revision:812 > success:<request_put:<key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" value_size:4029 >> failure:<request_range:<key:\"/registry/pods/volcano-system/volcano-scheduler-576bc46687-jtz7f\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T10:25:21.424401Z","caller":"traceutil/trace.go:171","msg":"trace[1168470588] linearizableReadLoop","detail":"{readStateIndex:1335; appliedIndex:1334; }","duration":"177.395065ms","start":"2024-09-16T10:25:21.246995Z","end":"2024-09-16T10:25:21.424390Z","steps":["trace[1168470588] 'read index received'  (duration: 48.427907ms)","trace[1168470588] 'applied index is now lower than readState.Index'  (duration: 128.965162ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.424444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"177.446761ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.424466Z","caller":"traceutil/trace.go:171","msg":"trace[1171179904] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1302; }","duration":"177.469291ms","start":"2024-09-16T10:25:21.246991Z","end":"2024-09-16T10:25:21.424460Z","steps":["trace[1171179904] 'agreement among raft nodes before linearized reading'  (duration: 177.429463ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:21.424486Z","caller":"traceutil/trace.go:171","msg":"trace[1930200040] transaction","detail":"{read_only:false; response_revision:1302; number_of_response:1; }","duration":"247.357795ms","start":"2024-09-16T10:25:21.177107Z","end":"2024-09-16T10:25:21.424464Z","steps":["trace[1930200040] 'process raft request'  (duration: 118.297085ms)","trace[1930200040] 'compare'  (duration: 128.26971ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.652910Z","caller":"traceutil/trace.go:171","msg":"trace[1856019889] linearizableReadLoop","detail":"{readStateIndex:1338; appliedIndex:1335; }","duration":"218.326846ms","start":"2024-09-16T10:25:21.434567Z","end":"2024-09-16T10:25:21.652894Z","steps":["trace[1856019889] 'read index received'  (duration: 55.93458ms)","trace[1856019889] 'applied index is now lower than readState.Index'  (duration: 162.391571ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.652969Z","caller":"traceutil/trace.go:171","msg":"trace[1279722024] transaction","detail":"{read_only:false; response_revision:1305; number_of_response:1; }","duration":"224.683287ms","start":"2024-09-16T10:25:21.428268Z","end":"2024-09-16T10:25:21.652951Z","steps":["trace[1279722024] 'process raft request'  (duration: 224.540452ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:21.653003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"218.415614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.653027Z","caller":"traceutil/trace.go:171","msg":"trace[1008371896] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1305; }","duration":"218.457307ms","start":"2024-09-16T10:25:21.434563Z","end":"2024-09-16T10:25:21.653020Z","steps":["trace[1008371896] 'agreement among raft nodes before linearized reading'  (duration: 218.392253ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:25:21.652921Z","caller":"traceutil/trace.go:171","msg":"trace[1132385399] transaction","detail":"{read_only:false; response_revision:1304; number_of_response:1; }","duration":"225.049342ms","start":"2024-09-16T10:25:21.427850Z","end":"2024-09-16T10:25:21.652899Z","steps":["trace[1132385399] 'process raft request'  (duration: 131.625555ms)","trace[1132385399] 'compare'  (duration: 93.227933ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.868227Z","caller":"traceutil/trace.go:171","msg":"trace[1246984751] linearizableReadLoop","detail":"{readStateIndex:1339; appliedIndex:1338; }","duration":"139.924393ms","start":"2024-09-16T10:25:21.728284Z","end":"2024-09-16T10:25:21.868208Z","steps":["trace[1246984751] 'read index received'  (duration: 63.202511ms)","trace[1246984751] 'applied index is now lower than readState.Index'  (duration: 76.72121ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T10:25:21.868259Z","caller":"traceutil/trace.go:171","msg":"trace[501466804] transaction","detail":"{read_only:false; response_revision:1306; number_of_response:1; }","duration":"210.400699ms","start":"2024-09-16T10:25:21.657832Z","end":"2024-09-16T10:25:21.868233Z","steps":["trace[501466804] 'process raft request'  (duration: 133.673421ms)","trace[501466804] 'compare'  (duration: 76.618072ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T10:25:21.868373Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"121.878283ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T10:25:21.868410Z","caller":"traceutil/trace.go:171","msg":"trace[1169815467] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1306; }","duration":"121.931335ms","start":"2024-09-16T10:25:21.746471Z","end":"2024-09-16T10:25:21.868402Z","steps":["trace[1169815467] 'agreement among raft nodes before linearized reading'  (duration: 121.861476ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:25:21.868538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.236255ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-09-16T10:25:21.868579Z","caller":"traceutil/trace.go:171","msg":"trace[344111638] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1306; }","duration":"140.292497ms","start":"2024-09-16T10:25:21.728276Z","end":"2024-09-16T10:25:21.868569Z","steps":["trace[344111638] 'agreement among raft nodes before linearized reading'  (duration: 140.016451ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T10:33:47.645977Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1761}
	{"level":"info","ts":"2024-09-16T10:33:47.672836Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1761,"took":"26.323299ms","hash":3150463749,"current-db-size-bytes":9527296,"current-db-size":"9.5 MB","current-db-size-in-use-bytes":5414912,"current-db-size-in-use":"5.4 MB"}
	{"level":"info","ts":"2024-09-16T10:33:47.672899Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3150463749,"revision":1761,"compact-revision":-1}
	
	
	==> gcp-auth [cfade64badb92dacf9d0c56d24c0fb7e95088f5abf7a814ef4801971e4b26216] <==
	2024/09/16 10:27:29 GCP Auth Webhook started!
	2024/09/16 10:32:45 Ready to marshal response ...
	2024/09/16 10:32:45 Ready to write response ...
	2024/09/16 10:32:45 Ready to marshal response ...
	2024/09/16 10:32:45 Ready to write response ...
	2024/09/16 10:32:45 Ready to marshal response ...
	2024/09/16 10:32:45 Ready to write response ...
	
	
	==> kernel <==
	 10:38:33 up 20 min,  0 users,  load average: 0.82, 0.44, 0.37
	Linux addons-191972 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [4c4482bfa98cf1024c4b123130c5a320a891204919b9a1459b6f3269e1e7d29d] <==
	I0916 10:36:29.444520       1 main.go:299] handling current node
	I0916 10:36:39.443843       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:36:39.443879       1 main.go:299] handling current node
	I0916 10:36:49.442870       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:36:49.442901       1 main.go:299] handling current node
	I0916 10:36:59.441892       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:36:59.441924       1 main.go:299] handling current node
	I0916 10:37:09.442842       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:37:09.442902       1 main.go:299] handling current node
	I0916 10:37:19.441102       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:37:19.441140       1 main.go:299] handling current node
	I0916 10:37:29.444620       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:37:29.444680       1 main.go:299] handling current node
	I0916 10:37:39.441366       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:37:39.441408       1 main.go:299] handling current node
	I0916 10:37:49.448997       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:37:49.449030       1 main.go:299] handling current node
	I0916 10:37:59.441595       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:37:59.441635       1 main.go:299] handling current node
	I0916 10:38:09.443872       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:38:09.443917       1 main.go:299] handling current node
	I0916 10:38:19.448505       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:38:19.448544       1 main.go:299] handling current node
	I0916 10:38:29.441704       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:38:29.441735       1 main.go:299] handling current node
	
	
	==> kube-apiserver [c76b948fbd083e0e5229c3ac96548e67224afd5a037343a2b118da9b9ae5ad3a] <==
	W0916 10:26:15.413935       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:16.459096       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:17.509475       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:18.532761       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:19.545400       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:20.553347       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:21.640741       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:22.735942       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:24.007851       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:25.084707       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:26.137166       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:27.215912       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:28.269709       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:29.285978       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:30.385745       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:31.389520       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.104.215.42:443: connect: connection refused
	W0916 10:26:53.671732       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:26:53.671804       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	W0916 10:27:11.712823       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:27:11.712858       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	W0916 10:27:11.785537       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.114.210:443: connect: connection refused
	E0916 10:27:11.785576       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.114.210:443: connect: connection refused" logger="UnhandledError"
	I0916 10:32:45.560480       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.108.245.36"}
	I0916 10:33:06.754025       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0916 10:33:07.773034       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [6e4dbd39a8ef56c5a753071ab0489111fcbcaac9f7cbe3b4fdf88030aa41c77b] <==
	I0916 10:33:16.875487       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	I0916 10:33:26.294136       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0916 10:33:26.294178       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:33:26.604903       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0916 10:33:26.604950       1 shared_informer.go:320] Caches are synced for garbage collector
	W0916 10:33:28.016022       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:28.016059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:33:43.495209       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:33:43.495252       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:34:13.882297       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-b48cc5f79" duration="6.965µs"
	W0916 10:34:32.902333       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:34:32.902376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:35:11.270373       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:35:11.270415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:35:54.708226       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:35:54.708272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:36:25.735577       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:36:25.735622       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:36:56.645729       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:36:56.645783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0916 10:37:39.901634       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:37:39.901675       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0916 10:37:58.599537       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-191972"
	W0916 10:38:33.684006       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0916 10:38:33.684058       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [d9d335328779062c055353442bb9ca0c1e2fef63bc1c598650e6ea25604013a5] <==
	I0916 10:23:59.129562       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:23:59.824945       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:23:59.825067       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:24:00.037013       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:24:00.040602       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:24:00.135054       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:24:00.135450       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:24:00.135471       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:24:00.237323       1 config.go:199] "Starting service config controller"
	I0916 10:24:00.237372       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:24:00.237410       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:24:00.237416       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:24:00.237471       1 config.go:328] "Starting node config controller"
	I0916 10:24:00.237491       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:24:00.337642       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:24:00.337724       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:24:00.337829       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0539bdd901d4af068b2160b27df45018e72113a7a75c6a082ae7e2f64f3f908b] <==
	W0916 10:23:49.138663       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0916 10:23:49.138662       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:49.138689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:49.138696       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0916 10:23:49.138760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0916 10:23:49.138769       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:23:49.138774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:23:49.138787       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:49.139877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:49.139916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.064082       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:23:50.064133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.118512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:23:50.118558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.132045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 10:23:50.132096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.175403       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:50.175438       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.199805       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:23:50.199848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:23:50.241540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:23:50.241599       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:23:50.633994       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976354    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-bpffs" (OuterVolumeSpecName: "bpffs") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "bpffs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976380    1565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-debugfs\") pod \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\" (UID: \"62b2176c-9dcb-4741-bd18-81ab2a2303f2\") "
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976380    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-cgroup" (OuterVolumeSpecName: "cgroup") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976353    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-modules" (OuterVolumeSpecName: "modules") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976356    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-host" (OuterVolumeSpecName: "host") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976396    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-run" (OuterVolumeSpecName: "run") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976402    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-debugfs" (OuterVolumeSpecName: "debugfs") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "debugfs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976506    1565 reconciler_common.go:288] "Volume detached for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-modules\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976522    1565 reconciler_common.go:288] "Volume detached for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-bpffs\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976533    1565 reconciler_common.go:288] "Volume detached for volume \"cgroup\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-cgroup\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.976546    1565 reconciler_common.go:288] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-run\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:06 addons-191972 kubelet[1565]: I0916 10:33:06.978118    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b2176c-9dcb-4741-bd18-81ab2a2303f2-kube-api-access-5jwxh" (OuterVolumeSpecName: "kube-api-access-5jwxh") pod "62b2176c-9dcb-4741-bd18-81ab2a2303f2" (UID: "62b2176c-9dcb-4741-bd18-81ab2a2303f2"). InnerVolumeSpecName "kube-api-access-5jwxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.076713    1565 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5jwxh\" (UniqueName: \"kubernetes.io/projected/62b2176c-9dcb-4741-bd18-81ab2a2303f2-kube-api-access-5jwxh\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.076783    1565 reconciler_common.go:288] "Volume detached for volume \"debugfs\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-debugfs\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.076797    1565 reconciler_common.go:288] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/62b2176c-9dcb-4741-bd18-81ab2a2303f2-host\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.398404    1565 scope.go:117] "RemoveContainer" containerID="85bcbbfdfc074366faf8d70de5e5b0ae05b2c86caf5118e07c5f5779a11f6f09"
	Sep 16 10:33:07 addons-191972 kubelet[1565]: I0916 10:33:07.474491    1565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62b2176c-9dcb-4741-bd18-81ab2a2303f2" path="/var/lib/kubelet/pods/62b2176c-9dcb-4741-bd18-81ab2a2303f2/volumes"
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.233694    1565 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fvwn\" (UniqueName: \"kubernetes.io/projected/dfe534c4-9e29-4907-b8cc-1dd12fc52f45-kube-api-access-4fvwn\") pod \"dfe534c4-9e29-4907-b8cc-1dd12fc52f45\" (UID: \"dfe534c4-9e29-4907-b8cc-1dd12fc52f45\") "
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.236128    1565 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfe534c4-9e29-4907-b8cc-1dd12fc52f45-kube-api-access-4fvwn" (OuterVolumeSpecName: "kube-api-access-4fvwn") pod "dfe534c4-9e29-4907-b8cc-1dd12fc52f45" (UID: "dfe534c4-9e29-4907-b8cc-1dd12fc52f45"). InnerVolumeSpecName "kube-api-access-4fvwn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.334770    1565 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4fvwn\" (UniqueName: \"kubernetes.io/projected/dfe534c4-9e29-4907-b8cc-1dd12fc52f45-kube-api-access-4fvwn\") on node \"addons-191972\" DevicePath \"\""
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.553835    1565 scope.go:117] "RemoveContainer" containerID="89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f"
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.564810    1565 scope.go:117] "RemoveContainer" containerID="89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f"
	Sep 16 10:34:14 addons-191972 kubelet[1565]: E0916 10:34:14.565324    1565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\": not found" containerID="89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f"
	Sep 16 10:34:14 addons-191972 kubelet[1565]: I0916 10:34:14.565368    1565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f"} err="failed to get container status \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\": rpc error: code = NotFound desc = an error occurred when try to find container \"89cfd63e70df20af6123d295e1fb5893956f150c5282e964e765d6274328503f\": not found"
	Sep 16 10:34:15 addons-191972 kubelet[1565]: I0916 10:34:15.475017    1565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfe534c4-9e29-4907-b8cc-1dd12fc52f45" path="/var/lib/kubelet/pods/dfe534c4-9e29-4907-b8cc-1dd12fc52f45/volumes"
	
	
	==> storage-provisioner [62a4b8c25074dcef9656a9b6e749de86b5f7c97f45a25cd328153d14be1d5a78] <==
	I0916 10:24:03.139108       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:24:03.230289       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:24:03.230361       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:24:03.238016       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:24:03.238457       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ff346362-6d54-491c-b142-6d85e8abf2d5", APIVersion:"v1", ResourceVersion:"576", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-191972_e8089787-9f1d-4116-8123-a579d9482714 became leader
	I0916 10:24:03.238505       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-191972_e8089787-9f1d-4116-8123-a579d9482714!
	I0916 10:24:03.339118       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-191972_e8089787-9f1d-4116-8123-a579d9482714!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-191972 -n addons-191972
helpers_test.go:261: (dbg) Run:  kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (424.347µs)
helpers_test.go:263: kubectl --context addons-191972 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/CSI (361.86s)
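Note: the `fork/exec /usr/local/bin/kubectl: exec format error` in the helpers above is an OS-level failure, not a cluster one: the kernel refused to execute the kubectl binary, which on a linux/amd64 runner usually means the file is built for another architecture, truncated, or not a binary at all. A minimal shell sketch for confirming this on the CI host (assuming `file` and `xxd` are available there):

    # A healthy binary on this amd64 runner should report
    # "ELF 64-bit LSB executable, x86-64".
    file /usr/local/bin/kubectl

    # Host architecture to compare against.
    uname -m

    # A valid ELF file begins with the magic bytes 7f 45 4c 46 ("\x7fELF").
    head -c 4 /usr/local/bin/kubectl | xxd

The other kubectl-based failures in this report (for example TestAddons/parallel/LocalPath and TestCertOptions below) show the identical error, so a single bad binary on the runner would account for all of them.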

x
+
TestAddons/parallel/LocalPath (0s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-191972 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:982: (dbg) Non-zero exit: kubectl --context addons-191972 apply -f testdata/storage-provisioner-rancher/pvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (364.422µs)
addons_test.go:984: kubectl apply pvc.yaml failed: args "kubectl --context addons-191972 apply -f testdata/storage-provisioner-rancher/pvc.yaml": fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestAddons/parallel/LocalPath (0.00s)
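Since every kubectl invocation fails the same way before reaching the cluster, one plausible remediation (a sketch, assuming the runner should carry a linux/amd64 build; these are the upstream-documented install commands) is to re-download and reinstall the binary:

    # Fetch the latest stable linux/amd64 kubectl from the official release bucket.
    curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

    # Verify it actually executes on this host before replacing the broken copy.
    chmod +x kubectl && ./kubectl version --client

    # Install over the bad binary seen in the failures above.
    sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl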

x
+
TestCertOptions (27.96s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-840054 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-840054 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (23.300266681s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-840054 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-840054 config view
cert_options_test.go:88: (dbg) Non-zero exit: kubectl --context cert-options-840054 config view: fork/exec /usr/local/bin/kubectl: exec format error (547.296µs)
cert_options_test.go:90: failed to get kubectl config. args "kubectl --context cert-options-840054 config view" : fork/exec /usr/local/bin/kubectl: exec format error
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = ""
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-840054 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-09-16 11:08:26.302164091 +0000 UTC m=+2773.284421165
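With the host kubectl unusable, the port and SAN assertions can still be checked by hand inside the node, mirroring the ssh commands the test already runs. A sketch (profile name taken from the log; the expected values come from the `--apiserver-*` flags above):

    # The SANs baked into the serving cert should include localhost,
    # www.google.com, 127.0.0.1 and 192.168.15.15.
    out/minikube-linux-amd64 -p cert-options-840054 ssh -- \
      "sudo openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"

    # The kubeconfig inside the node should point at the non-default port 8555.
    out/minikube-linux-amd64 -p cert-options-840054 ssh -- \
      "sudo grep 'server:' /etc/kubernetes/admin.conf"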
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestCertOptions]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect cert-options-840054
helpers_test.go:235: (dbg) docker inspect cert-options-840054:

-- stdout --
	[
	    {
	        "Id": "39c2de91a69287f79352c0161d6f4cdc069b48df29121cf0833b6db4679b34db",
	        "Created": "2024-09-16T11:08:07.915062304Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 256440,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:08:08.065476472Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/39c2de91a69287f79352c0161d6f4cdc069b48df29121cf0833b6db4679b34db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/39c2de91a69287f79352c0161d6f4cdc069b48df29121cf0833b6db4679b34db/hostname",
	        "HostsPath": "/var/lib/docker/containers/39c2de91a69287f79352c0161d6f4cdc069b48df29121cf0833b6db4679b34db/hosts",
	        "LogPath": "/var/lib/docker/containers/39c2de91a69287f79352c0161d6f4cdc069b48df29121cf0833b6db4679b34db/39c2de91a69287f79352c0161d6f4cdc069b48df29121cf0833b6db4679b34db-json.log",
	        "Name": "/cert-options-840054",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "cert-options-840054:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "cert-options-840054",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8555/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8cd3ba04e10d5ac0d3217858c727dfbbe0d82e57b9d6840063f2c3bf8cb6fdb5-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8cd3ba04e10d5ac0d3217858c727dfbbe0d82e57b9d6840063f2c3bf8cb6fdb5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8cd3ba04e10d5ac0d3217858c727dfbbe0d82e57b9d6840063f2c3bf8cb6fdb5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8cd3ba04e10d5ac0d3217858c727dfbbe0d82e57b9d6840063f2c3bf8cb6fdb5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "cert-options-840054",
	                "Source": "/var/lib/docker/volumes/cert-options-840054/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "cert-options-840054",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8555/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "cert-options-840054",
	                "name.minikube.sigs.k8s.io": "cert-options-840054",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "025e295c037c0caae4ce287fe6194c4c438d2ddc35cbcbec8e02e211e8b56a06",
	            "SandboxKey": "/var/run/docker/netns/025e295c037c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8555/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "cert-options-840054": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "345f493f6b00091f9507eaf24b73124019d96db9547225bbbcb712916afe1fcf",
	                    "EndpointID": "d06cbdfea1843b516b62935467725b02685c798a2eca48b6202d5d23dd00597a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "cert-options-840054",
	                        "39c2de91a692"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
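Rather than scanning the full JSON above, the forwarded host port for the in-container apiserver port can be pulled straight out of `docker inspect` with a Go template (a sketch; per the `Ports` map in the output above this prints 33056):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}' cert-options-840054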
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p cert-options-840054 -n cert-options-840054
helpers_test.go:244: <<< TestCertOptions FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestCertOptions]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-840054 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p cert-options-840054 logs -n 25: (1.244856309s)
helpers_test.go:252: TestCertOptions logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                    |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-771611 sudo cat                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                      | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | cri-dockerd --version                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                      | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl status containerd                |                           |         |         |                     |                     |
	|         | --all --full --no-pager                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                      | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl cat containerd                   |                           |         |         |                     |                     |
	|         | --no-pager                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo cat                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /lib/systemd/system/containerd.service     |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo cat                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /etc/containerd/config.toml                |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                      | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | containerd config dump                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                      | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl status crio --all                |                           |         |         |                     |                     |
	|         | --full --no-pager                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                      | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl cat crio --no-pager              |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo find                 | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /etc/crio -type f -exec sh -c              |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                       |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo crio                 | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | config                                     |                           |         |         |                     |                     |
	| delete  | -p cilium-771611                           | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| delete  | -p missing-upgrade-327796                  | missing-upgrade-327796    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| start   | -p cert-expiration-021107                  | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                              |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                       |                           |         |         |                     |                     |
	|         | --driver=docker                            |                           |         |         |                     |                     |
	|         | --container-runtime=containerd             |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-917705               | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048 --force-systemd              |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=containerd             |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-846070                   | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                    |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-846070                | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| stop    | -p kubernetes-upgrade-311911               | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p cert-options-840054                     | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                  |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15              |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com           |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                      |                           |         |         |                     |                     |
	|         | --driver=docker                            |                           |         |         |                     |                     |
	|         | --container-runtime=containerd             |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-311911               | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1               |                           |         |         |                     |                     |
	|         | --alsologtostderr                          |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                       |                           |         |         |                     |                     |
	|         | --container-runtime=containerd             |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-917705                  | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                    |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-917705               | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p old-k8s-version-371039                  | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	|         | --memory=2200                              |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true              |                           |         |         |                     |                     |
	|         | --kvm-network=default                      |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system              |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                    |                           |         |         |                     |                     |
	|         | --keep-context=false                       |                           |         |         |                     |                     |
	|         | --driver=docker                            |                           |         |         |                     |                     |
	|         | --container-runtime=containerd             |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0               |                           |         |         |                     |                     |
	| ssh     | cert-options-840054 ssh                    | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | openssl x509 -text -noout -in              |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt      |                           |         |         |                     |                     |
	| ssh     | -p cert-options-840054 -- sudo             | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf             |                           |         |         |                     |                     |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
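	Note: the two ssh rows above appear to be the TestCertOptions verification steps: one dumps the apiserver certificate to confirm the extra --apiserver-names (localhost, www.google.com) landed in its SANs, the other reads admin.conf to confirm it targets the custom --apiserver-port=8555. A rough manual equivalent, illustrative only and using this run's profile name, would be:
	
	    minikube -p cert-options-840054 ssh -- "sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'"
	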
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:08:20
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:08:20.214043  260870 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:08:20.214357  260870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:08:20.214367  260870 out.go:358] Setting ErrFile to fd 2...
	I0916 11:08:20.214373  260870 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:08:20.214599  260870 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:08:20.215203  260870 out.go:352] Setting JSON to false
	I0916 11:08:20.216555  260870 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3044,"bootTime":1726481856,"procs":306,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:08:20.216647  260870 start.go:139] virtualization: kvm guest
	I0916 11:08:20.219449  260870 out.go:177] * [old-k8s-version-371039] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:08:20.223896  260870 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:08:20.223919  260870 notify.go:220] Checking for updates...
	I0916 11:08:20.227232  260870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:08:20.229072  260870 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:08:20.230475  260870 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:08:20.232411  260870 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:08:20.233913  260870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:08:20.235947  260870 config.go:182] Loaded profile config "cert-expiration-021107": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:08:20.236096  260870 config.go:182] Loaded profile config "cert-options-840054": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:08:20.236225  260870 config.go:182] Loaded profile config "kubernetes-upgrade-311911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:08:20.236346  260870 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:08:20.272514  260870 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:08:20.272600  260870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:08:20.319583  260870 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-16 11:08:20.309921658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:08:20.319676  260870 docker.go:318] overlay module found
	I0916 11:08:20.321936  260870 out.go:177] * Using the docker driver based on user configuration
	I0916 11:08:20.323505  260870 start.go:297] selected driver: docker
	I0916 11:08:20.323531  260870 start.go:901] validating driver "docker" against <nil>
	I0916 11:08:20.323546  260870 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:08:20.324474  260870 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:08:20.377265  260870 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-16 11:08:20.368053459 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:08:20.377446  260870 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:08:20.377742  260870 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:08:20.379607  260870 out.go:177] * Using Docker driver with root privileges
	I0916 11:08:20.380925  260870 cni.go:84] Creating CNI manager for ""
	I0916 11:08:20.380983  260870 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:08:20.380990  260870 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:08:20.381077  260870 start.go:340] cluster config:
	{Name:old-k8s-version-371039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-371039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:08:20.382482  260870 out.go:177] * Starting "old-k8s-version-371039" primary control-plane node in "old-k8s-version-371039" cluster
	I0916 11:08:20.383827  260870 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:08:20.385162  260870 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:08:20.386371  260870 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0916 11:08:20.386416  260870 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0916 11:08:20.386439  260870 cache.go:56] Caching tarball of preloaded images
	I0916 11:08:20.386495  260870 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:08:20.386528  260870 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 11:08:20.386538  260870 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0916 11:08:20.386638  260870 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/config.json ...
	I0916 11:08:20.386660  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/config.json: {Name:mk80970a6e2b14aa8eb876cf0c57e8fb177309d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:08:20.407351  260870 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:08:20.407371  260870 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:08:20.407460  260870 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:08:20.407478  260870 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:08:20.407486  260870 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:08:20.407493  260870 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:08:20.407500  260870 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:08:20.461703  260870 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:08:20.461749  260870 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:08:20.461786  260870 start.go:360] acquireMachinesLock for old-k8s-version-371039: {Name:mkee7b58040c5212d75aee187b093a1684178371 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:20.461927  260870 start.go:364] duration metric: took 116.651µs to acquireMachinesLock for "old-k8s-version-371039"
	I0916 11:08:20.461961  260870 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-371039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-371039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:08:20.462088  260870 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:08:22.233242  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:22.233298  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:23.192659  254197 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:08:23.192724  254197 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:08:23.192871  254197 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:08:23.192947  254197 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:08:23.192992  254197 kubeadm.go:310] OS: Linux
	I0916 11:08:23.193051  254197 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:08:23.193112  254197 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:08:23.193173  254197 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:08:23.193233  254197 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:08:23.193301  254197 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:08:23.193368  254197 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:08:23.193427  254197 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:08:23.193488  254197 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:08:23.193547  254197 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:08:23.193640  254197 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:08:23.193759  254197 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:08:23.193877  254197 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:08:23.193979  254197 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:08:23.198169  254197 out.go:235]   - Generating certificates and keys ...
	I0916 11:08:23.198301  254197 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:08:23.198392  254197 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:08:23.198488  254197 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:08:23.198545  254197 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:08:23.198600  254197 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:08:23.198654  254197 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:08:23.198703  254197 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:08:23.198839  254197 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [cert-options-840054 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0916 11:08:23.198888  254197 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:08:23.198985  254197 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [cert-options-840054 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0916 11:08:23.199038  254197 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:08:23.199088  254197 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:08:23.199123  254197 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:08:23.199187  254197 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:08:23.199257  254197 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:08:23.199340  254197 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:08:23.199420  254197 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:08:23.199508  254197 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:08:23.199582  254197 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:08:23.199677  254197 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:08:23.199779  254197 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:08:23.202201  254197 out.go:235]   - Booting up control plane ...
	I0916 11:08:23.202351  254197 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:08:23.202476  254197 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:08:23.202554  254197 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:08:23.202670  254197 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:08:23.202792  254197 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:08:23.202848  254197 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:08:23.203037  254197 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:08:23.203126  254197 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:08:23.203185  254197 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00170596s
	I0916 11:08:23.203269  254197 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:08:23.203352  254197 kubeadm.go:310] [api-check] The API server is healthy after 4.501982562s
	I0916 11:08:23.203477  254197 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:08:23.203586  254197 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:08:23.203636  254197 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:08:23.203874  254197 kubeadm.go:310] [mark-control-plane] Marking the node cert-options-840054 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:08:23.203927  254197 kubeadm.go:310] [bootstrap-token] Using token: rc7nvh.kgilh091mfz39rxh
	I0916 11:08:23.205736  254197 out.go:235]   - Configuring RBAC rules ...
	I0916 11:08:23.205857  254197 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:08:23.205965  254197 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:08:23.206104  254197 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:08:23.206223  254197 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:08:23.206337  254197 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:08:23.206426  254197 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:08:23.206526  254197 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:08:23.206562  254197 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:08:23.206619  254197 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:08:23.206625  254197 kubeadm.go:310] 
	I0916 11:08:23.206676  254197 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:08:23.206680  254197 kubeadm.go:310] 
	I0916 11:08:23.206742  254197 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:08:23.206744  254197 kubeadm.go:310] 
	I0916 11:08:23.206764  254197 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:08:23.206828  254197 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:08:23.206877  254197 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:08:23.206880  254197 kubeadm.go:310] 
	I0916 11:08:23.206923  254197 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:08:23.206925  254197 kubeadm.go:310] 
	I0916 11:08:23.206962  254197 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:08:23.206965  254197 kubeadm.go:310] 
	I0916 11:08:23.207006  254197 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:08:23.207074  254197 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:08:23.207143  254197 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:08:23.207146  254197 kubeadm.go:310] 
	I0916 11:08:23.207213  254197 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:08:23.207275  254197 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:08:23.207277  254197 kubeadm.go:310] 
	I0916 11:08:23.207354  254197 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8555 --token rc7nvh.kgilh091mfz39rxh \
	I0916 11:08:23.207443  254197 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:08:23.207461  254197 kubeadm.go:310] 	--control-plane 
	I0916 11:08:23.207463  254197 kubeadm.go:310] 
	I0916 11:08:23.207542  254197 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:08:23.207547  254197 kubeadm.go:310] 
	I0916 11:08:23.207618  254197 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8555 --token rc7nvh.kgilh091mfz39rxh \
	I0916 11:08:23.207716  254197 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
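	Note: the --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key, so it can be recomputed on the node. A sketch using the standard openssl pipeline from the kubeadm docs, assuming the /var/lib/minikube/certs dir shown earlier in this log:
	
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	
	The output should match the 98a702be5b5d... value in the join commands.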
	I0916 11:08:23.207725  254197 cni.go:84] Creating CNI manager for ""
	I0916 11:08:23.207730  254197 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:08:23.209564  254197 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:08:23.210950  254197 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:08:23.214985  254197 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:08:23.214994  254197 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:08:23.232475  254197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:08:23.475543  254197 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:08:23.475639  254197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:23.475723  254197 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-options-840054 minikube.k8s.io/updated_at=2024_09_16T11_08_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=cert-options-840054 minikube.k8s.io/primary=true
	I0916 11:08:23.647672  254197 ops.go:34] apiserver oom_adj: -16
	I0916 11:08:23.647728  254197 kubeadm.go:1113] duration metric: took 172.170495ms to wait for elevateKubeSystemPrivileges
	I0916 11:08:23.647804  254197 kubeadm.go:394] duration metric: took 10.072090445s to StartCluster
	I0916 11:08:23.647824  254197 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:23.647904  254197 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:08:23.649558  254197 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:23.649833  254197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:08:23.649842  254197 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8555 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:08:23.649882  254197 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:08:23.649972  254197 addons.go:69] Setting storage-provisioner=true in profile "cert-options-840054"
	I0916 11:08:23.649990  254197 addons.go:234] Setting addon storage-provisioner=true in "cert-options-840054"
	I0916 11:08:23.650000  254197 addons.go:69] Setting default-storageclass=true in profile "cert-options-840054"
	I0916 11:08:23.650015  254197 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-options-840054"
	I0916 11:08:23.650021  254197 host.go:66] Checking if "cert-options-840054" exists ...
	I0916 11:08:23.650067  254197 config.go:182] Loaded profile config "cert-options-840054": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:08:23.650405  254197 cli_runner.go:164] Run: docker container inspect cert-options-840054 --format={{.State.Status}}
	I0916 11:08:23.650598  254197 cli_runner.go:164] Run: docker container inspect cert-options-840054 --format={{.State.Status}}
	I0916 11:08:23.652503  254197 out.go:177] * Verifying Kubernetes components...
	I0916 11:08:23.653816  254197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:08:23.678541  254197 addons.go:234] Setting addon default-storageclass=true in "cert-options-840054"
	I0916 11:08:23.678572  254197 host.go:66] Checking if "cert-options-840054" exists ...
	I0916 11:08:23.679000  254197 cli_runner.go:164] Run: docker container inspect cert-options-840054 --format={{.State.Status}}
	I0916 11:08:23.679258  254197 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:23.680824  254197 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:08:23.680836  254197 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:08:23.680892  254197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-840054
	I0916 11:08:23.710642  254197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/cert-options-840054/id_rsa Username:docker}
	I0916 11:08:23.711426  254197 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:08:23.711438  254197 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:08:23.711488  254197 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-840054
	I0916 11:08:23.732787  254197 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/cert-options-840054/id_rsa Username:docker}
	I0916 11:08:23.767892  254197 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
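	Note: the sed pipeline above rewrites the coredns ConfigMap so pods can resolve host.minikube.internal to the network gateway. After the replace, the Corefile gains exactly this stanza (values taken from the command itself):
	
	    hosts {
	       192.168.94.1 host.minikube.internal
	       fallthrough
	    }
	
	The "host record injected" line at 11:08:24.070484 below confirms it took effect.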
	I0916 11:08:23.822025  254197 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:08:23.842278  254197 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:08:24.038641  254197 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:08:24.070484  254197 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0916 11:08:24.071440  254197 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:08:24.071474  254197 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:08:24.626901  254197 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-options-840054" context rescaled to 1 replicas
	I0916 11:08:24.996970  254197 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.154664113s)
	I0916 11:08:24.997083  254197 api_server.go:72] duration metric: took 1.347210853s to wait for apiserver process to appear ...
	I0916 11:08:24.997092  254197 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:08:24.997113  254197 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8555/healthz ...
	I0916 11:08:25.083612  254197 api_server.go:279] https://192.168.94.2:8555/healthz returned 200:
	ok
	I0916 11:08:25.084711  254197 api_server.go:141] control plane version: v1.31.1
	I0916 11:08:25.084729  254197 api_server.go:131] duration metric: took 87.630956ms to wait for apiserver health ...
	I0916 11:08:25.084738  254197 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:08:25.090352  254197 system_pods.go:59] 5 kube-system pods found
	I0916 11:08:25.090372  254197 system_pods.go:61] "etcd-cert-options-840054" [486e000c-6ee4-43d3-81c6-a575aad1c716] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 11:08:25.090378  254197 system_pods.go:61] "kube-apiserver-cert-options-840054" [1386abd1-04b1-4de5-9aaf-1ad98f93ad81] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 11:08:25.090387  254197 system_pods.go:61] "kube-controller-manager-cert-options-840054" [88a07244-754c-43c5-aa94-f81e305daace] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 11:08:25.090392  254197 system_pods.go:61] "kube-scheduler-cert-options-840054" [65ab2a65-c3f3-4f6b-91ef-22bac0a1044a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0916 11:08:25.090396  254197 system_pods.go:61] "storage-provisioner" [1e554af5-0a57-4714-8e65-32ac03fbdcd6] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0916 11:08:25.090425  254197 system_pods.go:74] duration metric: took 5.658799ms to wait for pod list to return data ...
	I0916 11:08:25.090434  254197 kubeadm.go:582] duration metric: took 1.44056675s to wait for: map[apiserver:true system_pods:true]
	I0916 11:08:25.090444  254197 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:08:25.147631  254197 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:08:20.464139  260870 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:08:20.464406  260870 start.go:159] libmachine.API.Create for "old-k8s-version-371039" (driver="docker")
	I0916 11:08:20.464444  260870 client.go:168] LocalClient.Create starting
	I0916 11:08:20.464519  260870 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 11:08:20.464557  260870 main.go:141] libmachine: Decoding PEM data...
	I0916 11:08:20.464577  260870 main.go:141] libmachine: Parsing certificate...
	I0916 11:08:20.464667  260870 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 11:08:20.464708  260870 main.go:141] libmachine: Decoding PEM data...
	I0916 11:08:20.464723  260870 main.go:141] libmachine: Parsing certificate...
	I0916 11:08:20.465113  260870 cli_runner.go:164] Run: docker network inspect old-k8s-version-371039 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:08:20.482342  260870 cli_runner.go:211] docker network inspect old-k8s-version-371039 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:08:20.482436  260870 network_create.go:284] running [docker network inspect old-k8s-version-371039] to gather additional debugging logs...
	I0916 11:08:20.482456  260870 cli_runner.go:164] Run: docker network inspect old-k8s-version-371039
	W0916 11:08:20.499505  260870 cli_runner.go:211] docker network inspect old-k8s-version-371039 returned with exit code 1
	I0916 11:08:20.499539  260870 network_create.go:287] error running [docker network inspect old-k8s-version-371039]: docker network inspect old-k8s-version-371039: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-371039 not found
	I0916 11:08:20.499555  260870 network_create.go:289] output of [docker network inspect old-k8s-version-371039]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-371039 not found
	
	** /stderr **
	I0916 11:08:20.499669  260870 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:08:20.517009  260870 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c95c64bb41bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bd:76:ab:c4} reservation:<nil>}
	I0916 11:08:20.517928  260870 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fad43aa9929b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:3f:61:fd} reservation:<nil>}
	I0916 11:08:20.518739  260870 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-49585fce923a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ad:45:94:54} reservation:<nil>}
	I0916 11:08:20.519366  260870 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-45dc384def28 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:95:3e:48:c3} reservation:<nil>}
	I0916 11:08:20.520186  260870 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b7c76f2e9a1f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:4a:59:5d:75} reservation:<nil>}
	I0916 11:08:20.520886  260870 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-345f493f6b00 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:90:8b:81:da} reservation:<nil>}
	I0916 11:08:20.521744  260870 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cbf380}
	I0916 11:08:20.521769  260870 network_create.go:124] attempt to create docker network old-k8s-version-371039 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0916 11:08:20.521826  260870 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-371039 old-k8s-version-371039
	I0916 11:08:20.586470  260870 network_create.go:108] docker network old-k8s-version-371039 192.168.103.0/24 created
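	Note: minikube walked the existing bridge subnets above (192.168.49.0/24 through 192.168.94.0/24 were taken) before settling on 192.168.103.0/24. An illustrative host-side check that the network was created as logged would be:
	
	    docker network inspect old-k8s-version-371039 --format '{{(index .IPAM.Config 0).Subnet}}'
	
	which should print 192.168.103.0/24.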
	I0916 11:08:20.586498  260870 kic.go:121] calculated static IP "192.168.103.2" for the "old-k8s-version-371039" container
	I0916 11:08:20.586568  260870 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:08:20.603942  260870 cli_runner.go:164] Run: docker volume create old-k8s-version-371039 --label name.minikube.sigs.k8s.io=old-k8s-version-371039 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:08:20.621161  260870 oci.go:103] Successfully created a docker volume old-k8s-version-371039
	I0916 11:08:20.621232  260870 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-371039-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-371039 --entrypoint /usr/bin/test -v old-k8s-version-371039:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:08:21.145081  260870 oci.go:107] Successfully prepared a docker volume old-k8s-version-371039
	I0916 11:08:21.145125  260870 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0916 11:08:21.145150  260870 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:08:21.145228  260870 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-371039:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 11:08:25.222158  254197 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:08:25.222174  254197 node_conditions.go:123] node cpu capacity is 8
	I0916 11:08:25.222184  254197 node_conditions.go:105] duration metric: took 131.737459ms to run NodePressure ...
	I0916 11:08:25.222194  254197 start.go:241] waiting for startup goroutines ...
	I0916 11:08:25.228791  254197 addons.go:510] duration metric: took 1.578902315s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:08:25.228832  254197 start.go:246] waiting for cluster config update ...
	I0916 11:08:25.228848  254197 start.go:255] writing updated cluster config ...
	I0916 11:08:25.288761  254197 ssh_runner.go:195] Run: rm -f paused
	I0916 11:08:25.363330  254197 out.go:177] * Done! kubectl is now configured to use "cert-options-840054" cluster and "default" namespace by default
	E0916 11:08:25.446762  254197 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b7c29ec183571       9aa1fad941575       9 seconds ago       Running             kube-scheduler            0                   f4eae37609325       kube-scheduler-cert-options-840054
	c71d58e0ad8c4       6bab7719df100       9 seconds ago       Running             kube-apiserver            0                   be1a61b87c318       kube-apiserver-cert-options-840054
	a33df1366c56b       175ffd71cce3d       9 seconds ago       Running             kube-controller-manager   0                   07122d6e77f23       kube-controller-manager-cert-options-840054
	dcfde0ffa5ece       2e96e5913fc06       9 seconds ago       Running             etcd                      0                   aab687234c959       etcd-cert-options-840054
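	Note: this table is CRI-level container status gathered from inside the node; a hedged equivalent, assuming crictl ships in the kicbase image as usual, is:
	
	    minikube -p cert-options-840054 ssh -- sudo crictl ps -a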
	
	
	==> containerd <==
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.408288078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.422047283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.422136232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.422165072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.422288603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.489350410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-cert-options-840054,Uid:f2511e928c47c0ac4d6b412b2e9840c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"aab687234c9591389dd52725ed2a65f43da434ad8436038064a9a85495c78214\""
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.496668020Z" level=info msg="CreateContainer within sandbox \"aab687234c9591389dd52725ed2a65f43da434ad8436038064a9a85495c78214\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.532256578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-cert-options-840054,Uid:98c08cecb1a7b581be75ba7ecf943e60,Namespace:kube-system,Attempt:0,} returns sandbox id \"07122d6e77f2371b1c89475dd56639d2eca89f35286471155867f85a46eb20c7\""
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.536916729Z" level=info msg="CreateContainer within sandbox \"07122d6e77f2371b1c89475dd56639d2eca89f35286471155867f85a46eb20c7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.542083508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-cert-options-840054,Uid:099a839665ff9c875cda78b6657245ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"be1a61b87c31869e9556babfee5660d0e74b7ae4332b70fdce629447c3705eb3\""
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.546355386Z" level=info msg="CreateContainer within sandbox \"be1a61b87c31869e9556babfee5660d0e74b7ae4332b70fdce629447c3705eb3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.566404203Z" level=info msg="CreateContainer within sandbox \"aab687234c9591389dd52725ed2a65f43da434ad8436038064a9a85495c78214\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"dcfde0ffa5eceee86607d96c92556fdba4a3a8baaebf88e1b42e275e401c9b8c\""
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.566476747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-cert-options-840054,Uid:856e1c19f3284422f42f6ed1d07a9b4a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4eae37609325ab5422604f47a153737cf06c1b765a156412d09b2e5542c1a2e\""
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.568398413Z" level=info msg="StartContainer for \"dcfde0ffa5eceee86607d96c92556fdba4a3a8baaebf88e1b42e275e401c9b8c\""
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.576418570Z" level=info msg="CreateContainer within sandbox \"07122d6e77f2371b1c89475dd56639d2eca89f35286471155867f85a46eb20c7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a33df1366c56b118481fc510b621e28c58eadf784239737826305796bb792c7e\""
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.577014209Z" level=info msg="CreateContainer within sandbox \"f4eae37609325ab5422604f47a153737cf06c1b765a156412d09b2e5542c1a2e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.577152460Z" level=info msg="StartContainer for \"a33df1366c56b118481fc510b621e28c58eadf784239737826305796bb792c7e\""
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.592703101Z" level=info msg="CreateContainer within sandbox \"be1a61b87c31869e9556babfee5660d0e74b7ae4332b70fdce629447c3705eb3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c71d58e0ad8c4a5b8da877b3f8b69a95b4c51bb8400868dfe0f3c9c4ab426d37\""
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.593821406Z" level=info msg="StartContainer for \"c71d58e0ad8c4a5b8da877b3f8b69a95b4c51bb8400868dfe0f3c9c4ab426d37\""
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.623626018Z" level=info msg="CreateContainer within sandbox \"f4eae37609325ab5422604f47a153737cf06c1b765a156412d09b2e5542c1a2e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b7c29ec1835716b6d68225c6c3ff02987fbc5d8f17603751863f0eefa2a69df5\""
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.624577653Z" level=info msg="StartContainer for \"b7c29ec1835716b6d68225c6c3ff02987fbc5d8f17603751863f0eefa2a69df5\""
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.727597243Z" level=info msg="StartContainer for \"dcfde0ffa5eceee86607d96c92556fdba4a3a8baaebf88e1b42e275e401c9b8c\" returns successfully"
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.749351046Z" level=info msg="StartContainer for \"a33df1366c56b118481fc510b621e28c58eadf784239737826305796bb792c7e\" returns successfully"
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.840470403Z" level=info msg="StartContainer for \"c71d58e0ad8c4a5b8da877b3f8b69a95b4c51bb8400868dfe0f3c9c4ab426d37\" returns successfully"
	Sep 16 11:08:17 cert-options-840054 containerd[849]: time="2024-09-16T11:08:17.840803529Z" level=info msg="StartContainer for \"b7c29ec1835716b6d68225c6c3ff02987fbc5d8f17603751863f0eefa2a69df5\" returns successfully"
	
	
	==> describe nodes <==
	Name:               cert-options-840054
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=cert-options-840054
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=cert-options-840054
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_08_23_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:08:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  cert-options-840054
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:08:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:08:22 +0000   Mon, 16 Sep 2024 11:08:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:08:22 +0000   Mon, 16 Sep 2024 11:08:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:08:22 +0000   Mon, 16 Sep 2024 11:08:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:08:22 +0000   Mon, 16 Sep 2024 11:08:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    cert-options-840054
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 b285295d6d1643b69cd727f95c80bf7a
	  System UUID:                985a6bac-d1e1-4d9e-881f-b07e4f9bc6d6
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (5 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-cert-options-840054                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5s
	  kube-system                 kube-apiserver-cert-options-840054             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-controller-manager-cert-options-840054    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-scheduler-cert-options-840054             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (8%)   0 (0%)
	  memory             100Mi (0%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 5s    kubelet          Starting kubelet.
	  Warning  CgroupV1                 5s    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  5s    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5s    kubelet          Node cert-options-840054 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s    kubelet          Node cert-options-840054 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s    kubelet          Node cert-options-840054 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           1s    node-controller  Node cert-options-840054 event: Registered Node cert-options-840054 in Controller
	
	
	==> dmesg <==
	[Sep16 11:00] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000002] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000040] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000004] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +1.028430] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.004229] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000004] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +2.011572] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000009] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +4.031652] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000018] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +8.195254] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000007] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[Sep16 11:03] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-26a107bdc9bf
	[  +0.000006] ll header: 00000000: 02 42 7f 04 0c 59 02 42 c0 a8 4c 02 08 00
	[  +1.005595] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-26a107bdc9bf
	[  +0.000005] ll header: 00000000: 02 42 7f 04 0c 59 02 42 c0 a8 4c 02 08 00
	[Sep16 11:07] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [dcfde0ffa5eceee86607d96c92556fdba4a3a8baaebf88e1b42e275e401c9b8c] <==
	{"level":"info","ts":"2024-09-16T11:08:18.068943Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2024-09-16T11:08:18.071964Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:18.072249Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:18.072408Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:18.072437Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-09-16T11:08:24.277960Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.321827ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571756786420800917 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/token-cleaner\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/token-cleaner\" value_size:118 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-16T11:08:24.278077Z","caller":"traceutil/trace.go:171","msg":"trace[237152524] linearizableReadLoop","detail":"{readStateIndex:283; appliedIndex:282; }","duration":"153.392089ms","start":"2024-09-16T11:08:24.124671Z","end":"2024-09-16T11:08:24.278063Z","steps":["trace[237152524] 'read index received'  (duration: 29.488651ms)","trace[237152524] 'applied index is now lower than readState.Index'  (duration: 123.901089ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:08:24.278107Z","caller":"traceutil/trace.go:171","msg":"trace[1951468637] transaction","detail":"{read_only:false; response_revision:275; number_of_response:1; }","duration":"155.624335ms","start":"2024-09-16T11:08:24.122458Z","end":"2024-09-16T11:08:24.278083Z","steps":["trace[1951468637] 'process raft request'  (duration: 31.70077ms)","trace[1951468637] 'compare'  (duration: 123.210609ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T11:08:24.278223Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"153.542223ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3193"}
	{"level":"info","ts":"2024-09-16T11:08:24.278258Z","caller":"traceutil/trace.go:171","msg":"trace[1221773665] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:275; }","duration":"153.590144ms","start":"2024-09-16T11:08:24.124660Z","end":"2024-09-16T11:08:24.278250Z","steps":["trace[1221773665] 'agreement among raft nodes before linearized reading'  (duration: 153.479806ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:08:24.481018Z","caller":"traceutil/trace.go:171","msg":"trace[587567724] transaction","detail":"{read_only:false; response_revision:276; number_of_response:1; }","duration":"198.913252ms","start":"2024-09-16T11:08:24.282084Z","end":"2024-09-16T11:08:24.480998Z","steps":["trace[587567724] 'process raft request'  (duration: 124.017947ms)","trace[587567724] 'compare'  (duration: 74.724014ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T11:08:24.481137Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.407591ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-09-16T11:08:24.481169Z","caller":"traceutil/trace.go:171","msg":"trace[600612884] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:276; }","duration":"198.456021ms","start":"2024-09-16T11:08:24.282706Z","end":"2024-09-16T11:08:24.481162Z","steps":["trace[600612884] 'agreement among raft nodes before linearized reading'  (duration: 198.348388ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:08:24.481016Z","caller":"traceutil/trace.go:171","msg":"trace[247909472] linearizableReadLoop","detail":"{readStateIndex:284; appliedIndex:283; }","duration":"198.284387ms","start":"2024-09-16T11:08:24.282710Z","end":"2024-09-16T11:08:24.480994Z","steps":["trace[247909472] 'read index received'  (duration: 123.404006ms)","trace[247909472] 'applied index is now lower than readState.Index'  (duration: 74.879771ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:08:24.482544Z","caller":"traceutil/trace.go:171","msg":"trace[1280953053] transaction","detail":"{read_only:false; response_revision:278; number_of_response:1; }","duration":"199.326327ms","start":"2024-09-16T11:08:24.283203Z","end":"2024-09-16T11:08:24.482529Z","steps":["trace[1280953053] 'process raft request'  (duration: 199.285ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:08:24.482572Z","caller":"traceutil/trace.go:171","msg":"trace[871011757] transaction","detail":"{read_only:false; response_revision:277; number_of_response:1; }","duration":"199.429053ms","start":"2024-09-16T11:08:24.283125Z","end":"2024-09-16T11:08:24.482554Z","steps":["trace[871011757] 'process raft request'  (duration: 199.2385ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:08:24.990409Z","caller":"traceutil/trace.go:171","msg":"trace[211867157] transaction","detail":"{read_only:false; response_revision:286; number_of_response:1; }","duration":"174.775029ms","start":"2024-09-16T11:08:24.815614Z","end":"2024-09-16T11:08:24.990389Z","steps":["trace[211867157] 'process raft request'  (duration: 112.660114ms)","trace[211867157] 'compare'  (duration: 62.022379ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:08:24.998518Z","caller":"traceutil/trace.go:171","msg":"trace[1349474653] transaction","detail":"{read_only:false; response_revision:287; number_of_response:1; }","duration":"144.465969ms","start":"2024-09-16T11:08:24.854037Z","end":"2024-09-16T11:08:24.998503Z","steps":["trace[1349474653] 'process raft request'  (duration: 144.35257ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:08:25.219958Z","caller":"traceutil/trace.go:171","msg":"trace[598400793] linearizableReadLoop","detail":"{readStateIndex:298; appliedIndex:297; }","duration":"128.237918ms","start":"2024-09-16T11:08:25.091700Z","end":"2024-09-16T11:08:25.219938Z","steps":["trace[598400793] 'read index received'  (duration: 55.575798ms)","trace[598400793] 'applied index is now lower than readState.Index'  (duration: 72.661434ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:08:25.220058Z","caller":"traceutil/trace.go:171","msg":"trace[2068929132] transaction","detail":"{read_only:false; response_revision:290; number_of_response:1; }","duration":"132.250502ms","start":"2024-09-16T11:08:25.087774Z","end":"2024-09-16T11:08:25.220025Z","steps":["trace[2068929132] 'process raft request'  (duration: 59.455831ms)","trace[2068929132] 'compare'  (duration: 72.58421ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T11:08:25.220149Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.428823ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-16T11:08:25.220186Z","caller":"traceutil/trace.go:171","msg":"trace[470969859] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:290; }","duration":"128.48305ms","start":"2024-09-16T11:08:25.091690Z","end":"2024-09-16T11:08:25.220173Z","steps":["trace[470969859] 'agreement among raft nodes before linearized reading'  (duration: 128.347235ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:08:25.446351Z","caller":"traceutil/trace.go:171","msg":"trace[229289747] transaction","detail":"{read_only:false; response_revision:292; number_of_response:1; }","duration":"141.988363ms","start":"2024-09-16T11:08:25.304344Z","end":"2024-09-16T11:08:25.446333Z","steps":["trace[229289747] 'process raft request'  (duration: 54.284808ms)","trace[229289747] 'compare'  (duration: 87.587166ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:08:25.642496Z","caller":"traceutil/trace.go:171","msg":"trace[1521280253] transaction","detail":"{read_only:false; response_revision:293; number_of_response:1; }","duration":"171.939371ms","start":"2024-09-16T11:08:25.470530Z","end":"2024-09-16T11:08:25.642470Z","steps":["trace[1521280253] 'process raft request'  (duration: 99.789326ms)","trace[1521280253] 'compare'  (duration: 72.024797ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:08:26.280250Z","caller":"traceutil/trace.go:171","msg":"trace[1690525044] transaction","detail":"{read_only:false; response_revision:297; number_of_response:1; }","duration":"176.304662ms","start":"2024-09-16T11:08:26.103926Z","end":"2024-09-16T11:08:26.280231Z","steps":["trace[1690525044] 'process raft request'  (duration: 113.455949ms)","trace[1690525044] 'compare'  (duration: 62.740876ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:08:27 up 50 min,  0 users,  load average: 7.35, 3.81, 2.13
	Linux cert-options-840054 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [c71d58e0ad8c4a5b8da877b3f8b69a95b4c51bb8400868dfe0f3c9c4ab426d37] <==
	I0916 11:08:20.137749       1 policy_source.go:224] refreshing policies
	I0916 11:08:20.219914       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 11:08:20.219975       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 11:08:20.220037       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 11:08:20.219980       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 11:08:20.220550       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 11:08:20.221301       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 11:08:20.224285       1 controller.go:615] quota admission added evaluator for: namespaces
	E0916 11:08:20.225372       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0916 11:08:20.429175       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:08:20.996171       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:08:21.000208       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:08:21.000236       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:08:21.510052       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:08:21.555644       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:08:21.635774       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:08:21.642861       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0916 11:08:21.644084       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:08:21.648889       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:08:22.057826       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:08:22.597468       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:08:22.623076       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:08:22.632391       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:08:26.863036       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 11:08:27.562450       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a33df1366c56b118481fc510b621e28c58eadf784239737826305796bb792c7e] <==
	I0916 11:08:26.812690       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0916 11:08:26.813740       1 shared_informer.go:320] Caches are synced for node
	I0916 11:08:26.813812       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0916 11:08:26.813869       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 11:08:26.813885       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 11:08:26.813892       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 11:08:26.821810       1 shared_informer.go:320] Caches are synced for namespace
	I0916 11:08:26.821931       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 11:08:26.822365       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="cert-options-840054" podCIDRs=["10.244.0.0/24"]
	I0916 11:08:26.822404       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="cert-options-840054"
	I0916 11:08:26.823463       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="cert-options-840054"
	I0916 11:08:26.827327       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 11:08:26.851625       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 11:08:26.857745       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 11:08:26.857923       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0916 11:08:26.907833       1 shared_informer.go:320] Caches are synced for cronjob
	I0916 11:08:26.947430       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 11:08:26.985054       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:08:27.002531       1 shared_informer.go:320] Caches are synced for stateful set
	I0916 11:08:27.012275       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:08:27.058308       1 shared_informer.go:320] Caches are synced for disruption
	I0916 11:08:27.431577       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:08:27.431612       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:08:27.437749       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:08:27.671071       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="cert-options-840054"
	
	
	==> kube-scheduler [b7c29ec1835716b6d68225c6c3ff02987fbc5d8f17603751863f0eefa2a69df5] <==
	W0916 11:08:20.223554       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:20.223579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:20.223662       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:08:20.223688       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:20.223700       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:08:20.223724       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:20.223809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:08:20.223833       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:20.223849       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:08:20.223870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:20.224025       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:08:20.224055       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 11:08:21.072749       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:08:21.072805       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:21.098484       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:08:21.098539       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:21.127236       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:21.127289       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:21.246494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:08:21.246652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:21.259537       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:08:21.259792       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:21.307223       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:08:21.307342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 11:08:21.655332       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:08:22 cert-options-840054 kubelet[1559]: I0916 11:08:22.825715    1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/099a839665ff9c875cda78b6657245ec-k8s-certs\") pod \"kube-apiserver-cert-options-840054\" (UID: \"099a839665ff9c875cda78b6657245ec\") " pod="kube-system/kube-apiserver-cert-options-840054"
	Sep 16 11:08:22 cert-options-840054 kubelet[1559]: I0916 11:08:22.825732    1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98c08cecb1a7b581be75ba7ecf943e60-etc-ca-certificates\") pod \"kube-controller-manager-cert-options-840054\" (UID: \"98c08cecb1a7b581be75ba7ecf943e60\") " pod="kube-system/kube-controller-manager-cert-options-840054"
	Sep 16 11:08:23 cert-options-840054 kubelet[1559]: I0916 11:08:23.414166    1559 apiserver.go:52] "Watching apiserver"
	Sep 16 11:08:23 cert-options-840054 kubelet[1559]: I0916 11:08:23.424019    1559 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 11:08:23 cert-options-840054 kubelet[1559]: E0916 11:08:23.455574    1559 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-cert-options-840054\" already exists" pod="kube-system/etcd-cert-options-840054"
	Sep 16 11:08:23 cert-options-840054 kubelet[1559]: I0916 11:08:23.484526    1559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-cert-options-840054" podStartSLOduration=1.484499072 podStartE2EDuration="1.484499072s" podCreationTimestamp="2024-09-16 11:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:08:23.472137315 +0000 UTC m=+1.120324255" watchObservedRunningTime="2024-09-16 11:08:23.484499072 +0000 UTC m=+1.132686011"
	Sep 16 11:08:23 cert-options-840054 kubelet[1559]: I0916 11:08:23.484655    1559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-cert-options-840054" podStartSLOduration=1.484649623 podStartE2EDuration="1.484649623s" podCreationTimestamp="2024-09-16 11:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:08:23.484642904 +0000 UTC m=+1.132829843" watchObservedRunningTime="2024-09-16 11:08:23.484649623 +0000 UTC m=+1.132836561"
	Sep 16 11:08:23 cert-options-840054 kubelet[1559]: I0916 11:08:23.525583    1559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-cert-options-840054" podStartSLOduration=1.52555964 podStartE2EDuration="1.52555964s" podCreationTimestamp="2024-09-16 11:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:08:23.52531708 +0000 UTC m=+1.173504019" watchObservedRunningTime="2024-09-16 11:08:23.52555964 +0000 UTC m=+1.173746580"
	Sep 16 11:08:23 cert-options-840054 kubelet[1559]: I0916 11:08:23.535116    1559 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-cert-options-840054" podStartSLOduration=1.535086693 podStartE2EDuration="1.535086693s" podCreationTimestamp="2024-09-16 11:08:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:08:23.534693911 +0000 UTC m=+1.182880851" watchObservedRunningTime="2024-09-16 11:08:23.535086693 +0000 UTC m=+1.183273630"
	Sep 16 11:08:26 cert-options-840054 kubelet[1559]: I0916 11:08:26.850718    1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1e554af5-0a57-4714-8e65-32ac03fbdcd6-tmp\") pod \"storage-provisioner\" (UID: \"1e554af5-0a57-4714-8e65-32ac03fbdcd6\") " pod="kube-system/storage-provisioner"
	Sep 16 11:08:26 cert-options-840054 kubelet[1559]: I0916 11:08:26.850799    1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dq2x\" (UniqueName: \"kubernetes.io/projected/1e554af5-0a57-4714-8e65-32ac03fbdcd6-kube-api-access-8dq2x\") pod \"storage-provisioner\" (UID: \"1e554af5-0a57-4714-8e65-32ac03fbdcd6\") " pod="kube-system/storage-provisioner"
	Sep 16 11:08:26 cert-options-840054 kubelet[1559]: E0916 11:08:26.959047    1559 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 16 11:08:26 cert-options-840054 kubelet[1559]: E0916 11:08:26.959109    1559 projected.go:194] Error preparing data for projected volume kube-api-access-8dq2x for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 16 11:08:26 cert-options-840054 kubelet[1559]: E0916 11:08:26.959182    1559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e554af5-0a57-4714-8e65-32ac03fbdcd6-kube-api-access-8dq2x podName:1e554af5-0a57-4714-8e65-32ac03fbdcd6 nodeName:}" failed. No retries permitted until 2024-09-16 11:08:27.45915923 +0000 UTC m=+5.107346151 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8dq2x" (UniqueName: "kubernetes.io/projected/1e554af5-0a57-4714-8e65-32ac03fbdcd6-kube-api-access-8dq2x") pod "storage-provisioner" (UID: "1e554af5-0a57-4714-8e65-32ac03fbdcd6") : configmap "kube-root-ca.crt" not found
	Sep 16 11:08:27 cert-options-840054 kubelet[1559]: E0916 11:08:27.556098    1559 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Sep 16 11:08:27 cert-options-840054 kubelet[1559]: E0916 11:08:27.556146    1559 projected.go:194] Error preparing data for projected volume kube-api-access-8dq2x for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Sep 16 11:08:27 cert-options-840054 kubelet[1559]: E0916 11:08:27.556208    1559 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1e554af5-0a57-4714-8e65-32ac03fbdcd6-kube-api-access-8dq2x podName:1e554af5-0a57-4714-8e65-32ac03fbdcd6 nodeName:}" failed. No retries permitted until 2024-09-16 11:08:28.556185867 +0000 UTC m=+6.204372808 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8dq2x" (UniqueName: "kubernetes.io/projected/1e554af5-0a57-4714-8e65-32ac03fbdcd6-kube-api-access-8dq2x") pod "storage-provisioner" (UID: "1e554af5-0a57-4714-8e65-32ac03fbdcd6") : configmap "kube-root-ca.crt" not found
	Sep 16 11:08:27 cert-options-840054 kubelet[1559]: I0916 11:08:27.757435    1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4807eb6-fb81-40a3-8b9d-38c44c65d651-kube-proxy\") pod \"kube-proxy-9mfhl\" (UID: \"d4807eb6-fb81-40a3-8b9d-38c44c65d651\") " pod="kube-system/kube-proxy-9mfhl"
	Sep 16 11:08:27 cert-options-840054 kubelet[1559]: I0916 11:08:27.757481    1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4807eb6-fb81-40a3-8b9d-38c44c65d651-xtables-lock\") pod \"kube-proxy-9mfhl\" (UID: \"d4807eb6-fb81-40a3-8b9d-38c44c65d651\") " pod="kube-system/kube-proxy-9mfhl"
	Sep 16 11:08:27 cert-options-840054 kubelet[1559]: I0916 11:08:27.757496    1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4807eb6-fb81-40a3-8b9d-38c44c65d651-lib-modules\") pod \"kube-proxy-9mfhl\" (UID: \"d4807eb6-fb81-40a3-8b9d-38c44c65d651\") " pod="kube-system/kube-proxy-9mfhl"
	Sep 16 11:08:27 cert-options-840054 kubelet[1559]: I0916 11:08:27.757514    1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8319c6a8-9a07-4b93-9f1c-ef64da29da8c-cni-cfg\") pod \"kindnet-fp4s9\" (UID: \"8319c6a8-9a07-4b93-9f1c-ef64da29da8c\") " pod="kube-system/kindnet-fp4s9"
	Sep 16 11:08:27 cert-options-840054 kubelet[1559]: I0916 11:08:27.757537    1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8319c6a8-9a07-4b93-9f1c-ef64da29da8c-xtables-lock\") pod \"kindnet-fp4s9\" (UID: \"8319c6a8-9a07-4b93-9f1c-ef64da29da8c\") " pod="kube-system/kindnet-fp4s9"
	Sep 16 11:08:27 cert-options-840054 kubelet[1559]: I0916 11:08:27.757552    1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8319c6a8-9a07-4b93-9f1c-ef64da29da8c-lib-modules\") pod \"kindnet-fp4s9\" (UID: \"8319c6a8-9a07-4b93-9f1c-ef64da29da8c\") " pod="kube-system/kindnet-fp4s9"
	Sep 16 11:08:27 cert-options-840054 kubelet[1559]: I0916 11:08:27.757568    1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24ntb\" (UniqueName: \"kubernetes.io/projected/8319c6a8-9a07-4b93-9f1c-ef64da29da8c-kube-api-access-24ntb\") pod \"kindnet-fp4s9\" (UID: \"8319c6a8-9a07-4b93-9f1c-ef64da29da8c\") " pod="kube-system/kindnet-fp4s9"
	Sep 16 11:08:27 cert-options-840054 kubelet[1559]: I0916 11:08:27.757590    1559 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg5lz\" (UniqueName: \"kubernetes.io/projected/d4807eb6-fb81-40a3-8b9d-38c44c65d651-kube-api-access-mg5lz\") pod \"kube-proxy-9mfhl\" (UID: \"d4807eb6-fb81-40a3-8b9d-38c44c65d651\") " pod="kube-system/kube-proxy-9mfhl"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p cert-options-840054 -n cert-options-840054
helpers_test.go:261: (dbg) Run:  kubectl --context cert-options-840054 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context cert-options-840054 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (521.867µs)
helpers_test.go:263: kubectl --context cert-options-840054 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:175: Cleaning up "cert-options-840054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-840054
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-840054: (1.987142378s)
--- FAIL: TestCertOptions (27.96s)
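Note on the failure mode: "fork/exec /usr/local/bin/kubectl: exec format error" is raised by the kernel before kubectl executes at all, and almost always means the binary at that path was built for a different architecture than this amd64 runner, or is truncated/corrupt. A minimal Go sketch (not part of the test suite; the path is taken from the error above) that reads the binary's ELF header and compares it with the host:

// checkarch.go: diagnose "exec format error" by inspecting the ELF
// header of the kubectl binary and comparing it with the host arch.
package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
)

func main() {
	// Path taken from the failing test output above.
	const path = "/usr/local/bin/kubectl"

	f, err := elf.Open(path)
	if err != nil {
		// A file that is not valid ELF at all (truncated download,
		// HTML error page saved as a binary, etc.) also produces
		// "exec format error" at exec time.
		fmt.Fprintf(os.Stderr, "not a readable ELF binary: %v\n", err)
		os.Exit(1)
	}
	defer f.Close()

	fmt.Printf("binary machine: %s, host GOARCH: %s\n", f.Machine, runtime.GOARCH)

	// EM_X86_64 is what an amd64 host like this runner expects.
	if runtime.GOARCH == "amd64" && f.Machine != elf.EM_X86_64 {
		fmt.Println("architecture mismatch: exec would fail with 'exec format error'")
	}
}

Running this (or simply `file /usr/local/bin/kubectl` on the runner) distinguishes a wrong-architecture download from a corrupt one.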

TestFunctional/serial/KubeContext (1.73s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: fork/exec /usr/local/bin/kubectl: exec format error (510.633µs)
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:687: expected current-context = "functional-016570", but got *""*
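For reference, a simplified stand-in for the check at functional_test.go:681/687: run "kubectl config current-context" and compare the output with the expected profile name. With the broken binary, the error surfaces from os/exec before kubectl ever runs; the expected context below is the one named in the failure above. A sketch, not the test suite's actual code:

// currentcontext.go: reproduce the KubeContext check outside the suite.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Expected context taken from the failure message above.
	want := "functional-016570"

	out, err := exec.Command("kubectl", "config", "current-context").Output()
	if err != nil {
		// With an unrunnable binary this prints something like:
		// fork/exec /usr/local/bin/kubectl: exec format error
		fmt.Fprintf(os.Stderr, "kubectl failed: %v\n", err)
		os.Exit(1)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		fmt.Printf("expected current-context %q, got %q\n", want, got)
		os.Exit(1)
	}
	fmt.Println("current-context OK:", want)
}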
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubeContext]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-016570
helpers_test.go:235: (dbg) docker inspect functional-016570:

-- stdout --
	[
	    {
	        "Id": "389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b",
	        "Created": "2024-09-16T10:40:22.437729239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44343,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:40:22.549860744Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hosts",
	        "LogPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b-json.log",
	        "Name": "/functional-016570",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-016570:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-016570",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-016570",
	                "Source": "/var/lib/docker/volumes/functional-016570/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-016570",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-016570",
	                "name.minikube.sigs.k8s.io": "functional-016570",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cea48287f6ef85c384c2d0fd7e890db0e03a4817385b96024c8e0a0688fd7962",
	            "SandboxKey": "/var/run/docker/netns/cea48287f6ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-016570": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0f0536305092a99e49a50f7adb5b668b5d3bb1c3c21d82e43be0c155c4e7cfe5",
	                    "EndpointID": "9201dc1ccce436b0c1f5c3cef087a9b08f43da4627ab9f3717ae9bbdfb5de9f3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-016570",
	                        "389fc216adb5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-016570 -n functional-016570
helpers_test.go:244: <<< TestFunctional/serial/KubeContext FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubeContext]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 logs -n 25: (1.057048745s)
helpers_test.go:252: TestFunctional/serial/KubeContext logs: 
-- stdout --
	
	==> Audit <==
	|------------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command   |              Args              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons     | addons-191972 addons           | addons-191972     | jenkins | v1.34.0 | 16 Sep 24 10:38 UTC | 16 Sep 24 10:38 UTC |
	|            | disable metrics-server         |                   |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                   |         |         |                     |                     |
	| stop       | -p addons-191972               | addons-191972     | jenkins | v1.34.0 | 16 Sep 24 10:38 UTC | 16 Sep 24 10:38 UTC |
	| addons     | enable dashboard -p            | addons-191972     | jenkins | v1.34.0 | 16 Sep 24 10:38 UTC | 16 Sep 24 10:38 UTC |
	|            | addons-191972                  |                   |         |         |                     |                     |
	| addons     | disable dashboard -p           | addons-191972     | jenkins | v1.34.0 | 16 Sep 24 10:38 UTC | 16 Sep 24 10:38 UTC |
	|            | addons-191972                  |                   |         |         |                     |                     |
	| addons     | disable gvisor -p              | addons-191972     | jenkins | v1.34.0 | 16 Sep 24 10:38 UTC | 16 Sep 24 10:38 UTC |
	|            | addons-191972                  |                   |         |         |                     |                     |
	| delete     | -p addons-191972               | addons-191972     | jenkins | v1.34.0 | 16 Sep 24 10:38 UTC | 16 Sep 24 10:39 UTC |
	| start      | -p dockerenv-042187            | dockerenv-042187  | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|            | --driver=docker                |                   |         |         |                     |                     |
	|            | --container-runtime=containerd |                   |         |         |                     |                     |
	| docker-env | --ssh-host --ssh-add -p        | dockerenv-042187  | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|            | dockerenv-042187               |                   |         |         |                     |                     |
	| delete     | -p dockerenv-042187            | dockerenv-042187  | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	| start      | -p nospam-421019 -n=1          | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:40 UTC |
	|            | --memory=2250 --wait=false     |                   |         |         |                     |                     |
	|            | --log_dir=/tmp/nospam-421019   |                   |         |         |                     |                     |
	|            | --driver=docker                |                   |         |         |                     |                     |
	|            | --container-runtime=containerd |                   |         |         |                     |                     |
	| start      | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC |                     |
	|            | /tmp/nospam-421019 start       |                   |         |         |                     |                     |
	|            | --dry-run                      |                   |         |         |                     |                     |
	| start      | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC |                     |
	|            | /tmp/nospam-421019 start       |                   |         |         |                     |                     |
	|            | --dry-run                      |                   |         |         |                     |                     |
	| start      | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC |                     |
	|            | /tmp/nospam-421019 start       |                   |         |         |                     |                     |
	|            | --dry-run                      |                   |         |         |                     |                     |
	| pause      | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 pause       |                   |         |         |                     |                     |
	| pause      | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 pause       |                   |         |         |                     |                     |
	| pause      | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 pause       |                   |         |         |                     |                     |
	| unpause    | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 unpause     |                   |         |         |                     |                     |
	| unpause    | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 unpause     |                   |         |         |                     |                     |
	| unpause    | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 unpause     |                   |         |         |                     |                     |
	| stop       | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 stop        |                   |         |         |                     |                     |
	| stop       | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 stop        |                   |         |         |                     |                     |
	| stop       | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 stop        |                   |         |         |                     |                     |
	| delete     | -p nospam-421019               | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	| start      | -p functional-016570           | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | --memory=4000                  |                   |         |         |                     |                     |
	|            | --apiserver-port=8441          |                   |         |         |                     |                     |
	|            | --wait=all --driver=docker     |                   |         |         |                     |                     |
	|            | --container-runtime=containerd |                   |         |         |                     |                     |
	| start      | -p functional-016570           | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:41 UTC |
	|            | --alsologtostderr -v=8         |                   |         |         |                     |                     |
	|------------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:40:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:40:57.801064   47238 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:40:57.801186   47238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:57.801195   47238 out.go:358] Setting ErrFile to fd 2...
	I0916 10:40:57.801199   47238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:57.801394   47238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:40:57.801941   47238 out.go:352] Setting JSON to false
	I0916 10:40:57.802953   47238 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1402,"bootTime":1726481856,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:40:57.803057   47238 start.go:139] virtualization: kvm guest
	I0916 10:40:57.805403   47238 out.go:177] * [functional-016570] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:40:57.806637   47238 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:40:57.806666   47238 notify.go:220] Checking for updates...
	I0916 10:40:57.809259   47238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:40:57.810762   47238 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:40:57.812069   47238 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:40:57.813514   47238 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:40:57.815019   47238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:40:57.816973   47238 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:40:57.817077   47238 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:40:57.839585   47238 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:40:57.839718   47238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:40:57.886950   47238 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 10:40:57.877803547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:40:57.887149   47238 docker.go:318] overlay module found
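The "overlay module found" line gates the docker driver on overlayfs support. A minimal sketch of one way to run the same probe (an assumption about the kind of check behind this log line, not minikube's actual helper; checkOverlay is an illustrative name):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // checkOverlay reports whether the kernel lists "overlay" among its
    // supported filesystems. Loaded filesystem modules appear in
    // /proc/filesystems, one per line, optionally prefixed with "nodev".
    func checkOverlay() (bool, error) {
    	f, err := os.Open("/proc/filesystems")
    	if err != nil {
    		return false, err
    	}
    	defer f.Close()
    	s := bufio.NewScanner(f)
    	for s.Scan() {
    		name := strings.TrimSpace(strings.TrimPrefix(s.Text(), "nodev"))
    		if name == "overlay" {
    			return true, nil
    		}
    	}
    	return false, s.Err()
    }

    func main() {
    	ok, err := checkOverlay()
    	fmt.Println("overlay module found:", ok, err)
    }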
	I0916 10:40:57.890164   47238 out.go:177] * Using the docker driver based on existing profile
	I0916 10:40:57.891300   47238 start.go:297] selected driver: docker
	I0916 10:40:57.891312   47238 start.go:901] validating driver "docker" against &{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:40:57.891411   47238 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:40:57.891484   47238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:40:57.938679   47238 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 10:40:57.929626713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:40:57.939314   47238 cni.go:84] Creating CNI manager for ""
	I0916 10:40:57.939371   47238 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:40:57.939433   47238 start.go:340] cluster config:
	{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:40:57.941267   47238 out.go:177] * Starting "functional-016570" primary control-plane node in "functional-016570" cluster
	I0916 10:40:57.942673   47238 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:40:57.943952   47238 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:40:57.945225   47238 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:40:57.945275   47238 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:40:57.945284   47238 cache.go:56] Caching tarball of preloaded images
	I0916 10:40:57.945345   47238 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:40:57.945363   47238 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:40:57.945371   47238 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:40:57.945475   47238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/config.json ...
	W0916 10:40:57.966210   47238 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:40:57.966232   47238 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:40:57.966331   47238 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:40:57.966350   47238 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:40:57.966359   47238 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:40:57.966370   47238 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:40:57.966381   47238 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:40:57.967807   47238 image.go:273] response: 
	I0916 10:40:58.024435   47238 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:40:58.024502   47238 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:40:58.024538   47238 start.go:360] acquireMachinesLock for functional-016570: {Name:mkd69bbb7ce10518607df066fca58f5ba9fc9f29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:40:58.024634   47238 start.go:364] duration metric: took 58.863µs to acquireMachinesLock for "functional-016570"
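The lock parameters logged above (Delay:500ms Timeout:10m0s) describe a poll-with-timeout acquisition. A rough sketch of that pattern, with tryAcquire standing in for the real lock attempt (hypothetical; not minikube's implementation):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // tryAcquire stands in for a single non-blocking lock attempt,
    // e.g. creating a lock file with O_CREATE|O_EXCL. Hypothetical.
    func tryAcquire(name string) bool {
    	return true
    }

    // acquireWithTimeout retries tryAcquire every delay until timeout
    // elapses, matching the Delay/Timeout fields logged above.
    func acquireWithTimeout(name string, delay, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for !tryAcquire(name) {
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for lock " + name)
    		}
    		time.Sleep(delay)
    	}
    	return nil
    }

    func main() {
    	err := acquireWithTimeout("functional-016570", 500*time.Millisecond, 10*time.Minute)
    	fmt.Println(err)
    }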
	I0916 10:40:58.024659   47238 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:40:58.024672   47238 fix.go:54] fixHost starting: 
	I0916 10:40:58.024900   47238 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
	I0916 10:40:58.041479   47238 fix.go:112] recreateIfNeeded on functional-016570: state=Running err=<nil>
	W0916 10:40:58.041524   47238 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:40:58.043986   47238 out.go:177] * Updating the running docker "functional-016570" container ...
	I0916 10:40:58.045535   47238 machine.go:93] provisionDockerMachine start ...
	I0916 10:40:58.045628   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:58.064374   47238 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:58.064598   47238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:40:58.064611   47238 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:40:58.195127   47238 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016570
	
	I0916 10:40:58.195154   47238 ubuntu.go:169] provisioning hostname "functional-016570"
	I0916 10:40:58.195228   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:58.213677   47238 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:58.213872   47238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:40:58.213890   47238 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-016570 && echo "functional-016570" | sudo tee /etc/hostname
	I0916 10:40:58.358080   47238 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016570
	
	I0916 10:40:58.358159   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:58.375262   47238 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:58.375442   47238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:40:58.375459   47238 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-016570' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-016570/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-016570' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:40:58.508210   47238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
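Every SSH command in this step reaches the node at 127.0.0.1:32783, the host port Docker mapped to the container's 22/tcp; the inspect template shown in the log performs that lookup. A minimal equivalent in Go, assuming a local docker CLI and this run's container name:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshPort asks the docker CLI which host port is bound to the
    // container's 22/tcp, using the same template as the log above.
    func sshPort(container string) (string, error) {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshPort("functional-016570")
    	fmt.Println(port, err) // e.g. 32783 in this run
    }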
	I0916 10:40:58.508244   47238 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:40:58.508301   47238 ubuntu.go:177] setting up certificates
	I0916 10:40:58.508311   47238 provision.go:84] configureAuth start
	I0916 10:40:58.508363   47238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-016570
	I0916 10:40:58.526211   47238 provision.go:143] copyHostCerts
	I0916 10:40:58.526258   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:40:58.526285   47238 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:40:58.526293   47238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:40:58.526345   47238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:40:58.526433   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:40:58.526451   47238 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:40:58.526455   47238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:40:58.526473   47238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:40:58.526526   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:40:58.526542   47238 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:40:58.526545   47238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:40:58.526562   47238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:40:58.526633   47238 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.functional-016570 san=[127.0.0.1 192.168.49.2 functional-016570 localhost minikube]
	I0916 10:40:58.629211   47238 provision.go:177] copyRemoteCerts
	I0916 10:40:58.629273   47238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:40:58.629309   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:58.646436   47238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:40:58.740118   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:40:58.740185   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:40:58.762366   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:40:58.762424   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:40:58.785467   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:40:58.785521   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 10:40:58.807085   47238 provision.go:87] duration metric: took 298.760725ms to configureAuth
	I0916 10:40:58.807111   47238 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:40:58.807264   47238 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:40:58.807275   47238 machine.go:96] duration metric: took 761.722306ms to provisionDockerMachine
	I0916 10:40:58.807282   47238 start.go:293] postStartSetup for "functional-016570" (driver="docker")
	I0916 10:40:58.807291   47238 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:40:58.807342   47238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:40:58.807375   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:58.824359   47238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:40:58.920537   47238 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:40:58.924164   47238 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:40:58.924189   47238 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:40:58.924197   47238 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:40:58.924202   47238 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:40:58.924207   47238 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:40:58.924211   47238 command_runner.go:130] > ID=ubuntu
	I0916 10:40:58.924215   47238 command_runner.go:130] > ID_LIKE=debian
	I0916 10:40:58.924219   47238 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:40:58.924223   47238 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:40:58.924231   47238 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:40:58.924242   47238 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:40:58.924252   47238 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:40:58.924323   47238 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:40:58.924352   47238 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:40:58.924363   47238 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:40:58.924370   47238 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:40:58.924381   47238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:40:58.924428   47238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:40:58.924494   47238 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:40:58.924503   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:40:58.924564   47238 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/test/nested/copy/11189/hosts -> hosts in /etc/test/nested/copy/11189
	I0916 10:40:58.924571   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/test/nested/copy/11189/hosts -> /etc/test/nested/copy/11189/hosts
	I0916 10:40:58.924603   47238 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11189
	I0916 10:40:58.932694   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:40:58.955302   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/test/nested/copy/11189/hosts --> /etc/test/nested/copy/11189/hosts (40 bytes)
	I0916 10:40:58.978430   47238 start.go:296] duration metric: took 171.129668ms for postStartSetup
	I0916 10:40:58.978509   47238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:40:58.978603   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:58.996303   47238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:40:59.088555   47238 command_runner.go:130] > 31%
	I0916 10:40:59.088652   47238 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:40:59.092936   47238 command_runner.go:130] > 203G
	I0916 10:40:59.093107   47238 fix.go:56] duration metric: took 1.068428938s for fixHost
	I0916 10:40:59.093132   47238 start.go:83] releasing machines lock for "functional-016570", held for 1.068483187s
	I0916 10:40:59.093197   47238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-016570
	I0916 10:40:59.110531   47238 ssh_runner.go:195] Run: cat /version.json
	I0916 10:40:59.110583   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:59.110622   47238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:40:59.110674   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:59.130492   47238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:40:59.130489   47238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:40:59.219785   47238 command_runner.go:130] > {"iso_version": "v1.34.0-1726281733-19643", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "f890713149c79cf50e25c13e6a5c0470aa0f0450"}
	I0916 10:40:59.295890   47238 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:40:59.298245   47238 ssh_runner.go:195] Run: systemctl --version
	I0916 10:40:59.302144   47238 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0916 10:40:59.302179   47238 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0916 10:40:59.302248   47238 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:40:59.305912   47238 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0916 10:40:59.305935   47238 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:40:59.305945   47238 command_runner.go:130] > Device: 35h/53d	Inode: 557469      Links: 1
	I0916 10:40:59.305955   47238 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:40:59.305966   47238 command_runner.go:130] > Access: 2024-09-16 10:40:27.358300495 +0000
	I0916 10:40:59.305970   47238 command_runner.go:130] > Modify: 2024-09-16 10:40:27.330298025 +0000
	I0916 10:40:59.305975   47238 command_runner.go:130] > Change: 2024-09-16 10:40:27.330298025 +0000
	I0916 10:40:59.305980   47238 command_runner.go:130] >  Birth: 2024-09-16 10:40:27.330298025 +0000
	I0916 10:40:59.306236   47238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:40:59.322521   47238 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:40:59.322585   47238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:40:59.330495   47238 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:40:59.330513   47238 start.go:495] detecting cgroup driver to use...
	I0916 10:40:59.330547   47238 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:40:59.330596   47238 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:40:59.341280   47238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:40:59.351393   47238 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:40:59.351465   47238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:40:59.363231   47238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:40:59.373985   47238 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:40:59.466615   47238 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:40:59.561571   47238 docker.go:233] disabling docker service ...
	I0916 10:40:59.561641   47238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:40:59.573455   47238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:40:59.583564   47238 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:40:59.677173   47238 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:40:59.773169   47238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:40:59.784178   47238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:40:59.798434   47238 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0916 10:40:59.799426   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:40:59.808973   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:40:59.818252   47238 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:40:59.818311   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:40:59.827654   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:40:59.836704   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:40:59.845834   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:40:59.855319   47238 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:40:59.864271   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:40:59.873845   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:40:59.883580   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:40:59.893035   47238 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:40:59.900923   47238 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:40:59.900991   47238 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:40:59.909078   47238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:00.006315   47238 ssh_runner.go:195] Run: sudo systemctl restart containerd
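The sed one-liners above rewrite /etc/containerd/config.toml in place (sandbox image, cgroup driver, runtime type, CNI conf dir) before containerd is restarted. A condensed sketch of two of those substitutions in Go (illustrative only; the actual changes are the sed commands shown in the log):

    package main

    import (
    	"os"
    	"regexp"
    )

    // patchContainerdConfig applies two of the substitutions performed
    // by the sed commands above: pin the sandbox image and force the
    // cgroupfs driver (SystemdCgroup = false).
    func patchContainerdConfig(path string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	data = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
    		ReplaceAll(data, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10"`))
    	data = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
    		ReplaceAll(data, []byte(`${1}SystemdCgroup = false`))
    	return os.WriteFile(path, data, 0o644)
    }

    func main() {
    	if err := patchContainerdConfig("/etc/containerd/config.toml"); err != nil {
    		panic(err)
    	}
    	// A daemon-reload and containerd restart are still required for
    	// the changes to take effect, as in the log.
    }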
	I0916 10:41:00.261815   47238 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:41:00.261911   47238 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:41:00.265423   47238 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0916 10:41:00.265443   47238 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:41:00.265449   47238 command_runner.go:130] > Device: 41h/65d	Inode: 599         Links: 1
	I0916 10:41:00.265456   47238 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:41:00.265462   47238 command_runner.go:130] > Access: 2024-09-16 10:41:00.197195840 +0000
	I0916 10:41:00.265466   47238 command_runner.go:130] > Modify: 2024-09-16 10:41:00.197195840 +0000
	I0916 10:41:00.265472   47238 command_runner.go:130] > Change: 2024-09-16 10:41:00.197195840 +0000
	I0916 10:41:00.265476   47238 command_runner.go:130] >  Birth: -
	I0916 10:41:00.265499   47238 start.go:563] Will wait 60s for crictl version
	I0916 10:41:00.265532   47238 ssh_runner.go:195] Run: which crictl
	I0916 10:41:00.268687   47238 command_runner.go:130] > /usr/bin/crictl
	I0916 10:41:00.268830   47238 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:41:00.297784   47238 command_runner.go:130] > Version:  0.1.0
	I0916 10:41:00.297805   47238 command_runner.go:130] > RuntimeName:  containerd
	I0916 10:41:00.297814   47238 command_runner.go:130] > RuntimeVersion:  1.7.22
	I0916 10:41:00.297819   47238 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:41:00.299901   47238 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:41:00.299951   47238 ssh_runner.go:195] Run: containerd --version
	I0916 10:41:00.320188   47238 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:41:00.320265   47238 ssh_runner.go:195] Run: containerd --version
	I0916 10:41:00.339092   47238 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:41:00.343304   47238 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:41:00.344502   47238 cli_runner.go:164] Run: docker network inspect functional-016570 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:41:00.361653   47238 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:41:00.365244   47238 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I0916 10:41:00.365343   47238 kubeadm.go:883] updating cluster {Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:41:00.365442   47238 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:41:00.365483   47238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:41:00.395218   47238 command_runner.go:130] > {
	I0916 10:41:00.395243   47238 command_runner.go:130] >   "images": [
	I0916 10:41:00.395252   47238 command_runner.go:130] >     {
	I0916 10:41:00.395263   47238 command_runner.go:130] >       "id": "sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:41:00.395272   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.395281   47238 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:41:00.395287   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395295   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.395309   47238 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:41:00.395319   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395324   47238 command_runner.go:130] >       "size": "36793393",
	I0916 10:41:00.395331   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.395342   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.395350   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.395360   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.395367   47238 command_runner.go:130] >     },
	I0916 10:41:00.395374   47238 command_runner.go:130] >     {
	I0916 10:41:00.395390   47238 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:41:00.395400   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.395410   47238 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:41:00.395419   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395426   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.395443   47238 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0916 10:41:00.395450   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395458   47238 command_runner.go:130] >       "size": "9058936",
	I0916 10:41:00.395468   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.395475   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.395484   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.395491   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.395500   47238 command_runner.go:130] >     },
	I0916 10:41:00.395506   47238 command_runner.go:130] >     {
	I0916 10:41:00.395520   47238 command_runner.go:130] >       "id": "sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:41:00.395529   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.395542   47238 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:41:00.395553   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395563   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.395584   47238 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"
	I0916 10:41:00.395592   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395600   47238 command_runner.go:130] >       "size": "18562039",
	I0916 10:41:00.395609   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.395617   47238 command_runner.go:130] >       "username": "nonroot",
	I0916 10:41:00.395626   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.395633   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.395641   47238 command_runner.go:130] >     },
	I0916 10:41:00.395647   47238 command_runner.go:130] >     {
	I0916 10:41:00.395661   47238 command_runner.go:130] >       "id": "sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:41:00.395670   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.395678   47238 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:41:00.395686   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395693   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.395707   47238 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:41:00.395716   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395724   47238 command_runner.go:130] >       "size": "56909194",
	I0916 10:41:00.395733   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.395759   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.395768   47238 command_runner.go:130] >       },
	I0916 10:41:00.395776   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.395785   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.395792   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.395800   47238 command_runner.go:130] >     },
	I0916 10:41:00.395807   47238 command_runner.go:130] >     {
	I0916 10:41:00.395822   47238 command_runner.go:130] >       "id": "sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:41:00.395832   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.395841   47238 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:41:00.395849   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395856   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.395873   47238 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:41:00.395883   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395892   47238 command_runner.go:130] >       "size": "28047142",
	I0916 10:41:00.395901   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.395910   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.395918   47238 command_runner.go:130] >       },
	I0916 10:41:00.395925   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.395934   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.395943   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.395951   47238 command_runner.go:130] >     },
	I0916 10:41:00.395958   47238 command_runner.go:130] >     {
	I0916 10:41:00.395971   47238 command_runner.go:130] >       "id": "sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:41:00.395980   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.395992   47238 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:41:00.396000   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396008   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.396021   47238 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"
	I0916 10:41:00.396027   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396037   47238 command_runner.go:130] >       "size": "26221554",
	I0916 10:41:00.396042   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.396047   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.396052   47238 command_runner.go:130] >       },
	I0916 10:41:00.396057   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.396063   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.396070   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.396077   47238 command_runner.go:130] >     },
	I0916 10:41:00.396085   47238 command_runner.go:130] >     {
	I0916 10:41:00.396111   47238 command_runner.go:130] >       "id": "sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:41:00.396122   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.396127   47238 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:41:00.396130   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396135   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.396142   47238 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"
	I0916 10:41:00.396148   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396153   47238 command_runner.go:130] >       "size": "30211884",
	I0916 10:41:00.396157   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.396161   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.396164   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.396168   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.396172   47238 command_runner.go:130] >     },
	I0916 10:41:00.396175   47238 command_runner.go:130] >     {
	I0916 10:41:00.396182   47238 command_runner.go:130] >       "id": "sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:41:00.396188   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.396193   47238 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:41:00.396196   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396200   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.396207   47238 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"
	I0916 10:41:00.396213   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396216   47238 command_runner.go:130] >       "size": "20177215",
	I0916 10:41:00.396220   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.396224   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.396230   47238 command_runner.go:130] >       },
	I0916 10:41:00.396236   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.396240   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.396244   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.396248   47238 command_runner.go:130] >     },
	I0916 10:41:00.396251   47238 command_runner.go:130] >     {
	I0916 10:41:00.396257   47238 command_runner.go:130] >       "id": "sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:41:00.396264   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.396269   47238 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:41:00.396272   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396276   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.396283   47238 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:41:00.396295   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396303   47238 command_runner.go:130] >       "size": "320368",
	I0916 10:41:00.396307   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.396311   47238 command_runner.go:130] >         "value": "65535"
	I0916 10:41:00.396315   47238 command_runner.go:130] >       },
	I0916 10:41:00.396319   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.396323   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.396327   47238 command_runner.go:130] >       "pinned": true
	I0916 10:41:00.396330   47238 command_runner.go:130] >     }
	I0916 10:41:00.396334   47238 command_runner.go:130] >   ]
	I0916 10:41:00.396337   47238 command_runner.go:130] > }
	I0916 10:41:00.397229   47238 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:41:00.397246   47238 containerd.go:534] Images already preloaded, skipping extraction
	I0916 10:41:00.397300   47238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:41:00.426822   47238 command_runner.go:130] > {
	I0916 10:41:00.426840   47238 command_runner.go:130] >   "images": [
	I0916 10:41:00.426844   47238 command_runner.go:130] >     {
	I0916 10:41:00.426854   47238 command_runner.go:130] >       "id": "sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:41:00.426861   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.426866   47238 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:41:00.426870   47238 command_runner.go:130] >       ],
	I0916 10:41:00.426877   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.426893   47238 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:41:00.426903   47238 command_runner.go:130] >       ],
	I0916 10:41:00.426911   47238 command_runner.go:130] >       "size": "36793393",
	I0916 10:41:00.426917   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.426925   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.426929   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.426936   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.426940   47238 command_runner.go:130] >     },
	I0916 10:41:00.426943   47238 command_runner.go:130] >     {
	I0916 10:41:00.426960   47238 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:41:00.426970   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.426978   47238 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:41:00.426985   47238 command_runner.go:130] >       ],
	I0916 10:41:00.426992   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427007   47238 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0916 10:41:00.427016   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427025   47238 command_runner.go:130] >       "size": "9058936",
	I0916 10:41:00.427034   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.427041   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.427047   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427051   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.427055   47238 command_runner.go:130] >     },
	I0916 10:41:00.427058   47238 command_runner.go:130] >     {
	I0916 10:41:00.427068   47238 command_runner.go:130] >       "id": "sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:41:00.427078   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.427091   47238 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:41:00.427100   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427107   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427121   47238 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"
	I0916 10:41:00.427129   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427136   47238 command_runner.go:130] >       "size": "18562039",
	I0916 10:41:00.427144   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.427150   47238 command_runner.go:130] >       "username": "nonroot",
	I0916 10:41:00.427155   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427162   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.427169   47238 command_runner.go:130] >     },
	I0916 10:41:00.427174   47238 command_runner.go:130] >     {
	I0916 10:41:00.427188   47238 command_runner.go:130] >       "id": "sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:41:00.427196   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.427205   47238 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:41:00.427213   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427220   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427234   47238 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:41:00.427240   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427246   47238 command_runner.go:130] >       "size": "56909194",
	I0916 10:41:00.427255   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.427261   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.427271   47238 command_runner.go:130] >       },
	I0916 10:41:00.427280   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.427288   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427297   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.427304   47238 command_runner.go:130] >     },
	I0916 10:41:00.427313   47238 command_runner.go:130] >     {
	I0916 10:41:00.427322   47238 command_runner.go:130] >       "id": "sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:41:00.427328   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.427335   47238 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:41:00.427344   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427351   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427380   47238 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:41:00.427389   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427396   47238 command_runner.go:130] >       "size": "28047142",
	I0916 10:41:00.427405   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.427409   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.427414   47238 command_runner.go:130] >       },
	I0916 10:41:00.427420   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.427429   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427436   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.427445   47238 command_runner.go:130] >     },
	I0916 10:41:00.427450   47238 command_runner.go:130] >     {
	I0916 10:41:00.427464   47238 command_runner.go:130] >       "id": "sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:41:00.427472   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.427481   47238 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:41:00.427490   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427496   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427508   47238 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"
	I0916 10:41:00.427517   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427524   47238 command_runner.go:130] >       "size": "26221554",
	I0916 10:41:00.427533   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.427547   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.427554   47238 command_runner.go:130] >       },
	I0916 10:41:00.427561   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.427570   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427578   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.427585   47238 command_runner.go:130] >     },
	I0916 10:41:00.427589   47238 command_runner.go:130] >     {
	I0916 10:41:00.427595   47238 command_runner.go:130] >       "id": "sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:41:00.427603   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.427611   47238 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:41:00.427621   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427627   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427638   47238 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"
	I0916 10:41:00.427649   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427656   47238 command_runner.go:130] >       "size": "30211884",
	I0916 10:41:00.427662   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.427668   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.427674   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427681   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.427687   47238 command_runner.go:130] >     },
	I0916 10:41:00.427695   47238 command_runner.go:130] >     {
	I0916 10:41:00.427708   47238 command_runner.go:130] >       "id": "sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:41:00.427717   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.427726   47238 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:41:00.427731   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427751   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427767   47238 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"
	I0916 10:41:00.427776   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427783   47238 command_runner.go:130] >       "size": "20177215",
	I0916 10:41:00.427792   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.427799   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.427806   47238 command_runner.go:130] >       },
	I0916 10:41:00.427817   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.427824   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427835   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.427844   47238 command_runner.go:130] >     },
	I0916 10:41:00.427852   47238 command_runner.go:130] >     {
	I0916 10:41:00.427867   47238 command_runner.go:130] >       "id": "sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:41:00.427877   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.427885   47238 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:41:00.427891   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427897   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427907   47238 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:41:00.427913   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427920   47238 command_runner.go:130] >       "size": "320368",
	I0916 10:41:00.427925   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.427932   47238 command_runner.go:130] >         "value": "65535"
	I0916 10:41:00.427938   47238 command_runner.go:130] >       },
	I0916 10:41:00.427944   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.427950   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427959   47238 command_runner.go:130] >       "pinned": true
	I0916 10:41:00.427967   47238 command_runner.go:130] >     }
	I0916 10:41:00.427974   47238 command_runner.go:130] >   ]
	I0916 10:41:00.427977   47238 command_runner.go:130] > }
	I0916 10:41:00.428100   47238 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:41:00.428111   47238 cache_images.go:84] Images are preloaded, skipping loading
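
The two identical listings above are minikube checking the runtime's image store before and after deciding that the preload tarball can be skipped: every required image already appears under a matching repoTag. As a rough sketch (the struct names are invented here; this is not minikube's actual cache_images code), the `crictl images --output json` payload seen in the log can be decoded like this:

package main

import (
	"encoding/json"
	"fmt"
)

// imageList mirrors the shape of `crictl images --output json`
// as printed in the log; only the fields used below are declared.
type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	// Shortened sample of the payload above (IDs truncated for brevity).
	raw := []byte(`{"images":[
		{"id":"sha256:6bab77","repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"]},
		{"id":"sha256:873ed7","repoTags":["registry.k8s.io/pause:3.10"]}]}`)

	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}

	// Build a set of available tags, then test the required ones.
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, tag := range []string{
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/pause:3.10",
	} {
		fmt.Printf("%-45s preloaded=%v\n", tag, have[tag])
	}
}
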
	I0916 10:41:00.428118   47238 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.31.1 containerd true true} ...
	I0916 10:41:00.428243   47238 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-016570 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
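
Note the empty `ExecStart=` line in the generated unit: in a systemd drop-in, an empty assignment clears the ExecStart inherited from the base kubelet.service before the next line installs minikube's own command. A minimal sketch of rendering such a drop-in with Go's text/template (the template text and variable names are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn is an illustrative drop-in template; the empty
// ExecStart= resets the value inherited from the base unit.
const kubeletDropIn = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

type unitParams struct {
	KubeletPath, NodeName, NodeIP string
}

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// In practice this would be written to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and
	// followed by `systemctl daemon-reload`, as the log shows later.
	if err := t.Execute(os.Stdout, unitParams{
		KubeletPath: "/var/lib/minikube/binaries/v1.31.1/kubelet",
		NodeName:    "functional-016570",
		NodeIP:      "192.168.49.2",
	}); err != nil {
		panic(err)
	}
}
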
	I0916 10:41:00.428307   47238 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:41:00.461913   47238 command_runner.go:130] > {
	I0916 10:41:00.461938   47238 command_runner.go:130] >   "status": {
	I0916 10:41:00.461943   47238 command_runner.go:130] >     "conditions": [
	I0916 10:41:00.461947   47238 command_runner.go:130] >       {
	I0916 10:41:00.461953   47238 command_runner.go:130] >         "type": "RuntimeReady",
	I0916 10:41:00.461958   47238 command_runner.go:130] >         "status": true,
	I0916 10:41:00.461962   47238 command_runner.go:130] >         "reason": "",
	I0916 10:41:00.461967   47238 command_runner.go:130] >         "message": ""
	I0916 10:41:00.461972   47238 command_runner.go:130] >       },
	I0916 10:41:00.461988   47238 command_runner.go:130] >       {
	I0916 10:41:00.461994   47238 command_runner.go:130] >         "type": "NetworkReady",
	I0916 10:41:00.462000   47238 command_runner.go:130] >         "status": true,
	I0916 10:41:00.462005   47238 command_runner.go:130] >         "reason": "",
	I0916 10:41:00.462020   47238 command_runner.go:130] >         "message": ""
	I0916 10:41:00.462025   47238 command_runner.go:130] >       },
	I0916 10:41:00.462035   47238 command_runner.go:130] >       {
	I0916 10:41:00.462042   47238 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings",
	I0916 10:41:00.462047   47238 command_runner.go:130] >         "status": true,
	I0916 10:41:00.462051   47238 command_runner.go:130] >         "reason": "",
	I0916 10:41:00.462055   47238 command_runner.go:130] >         "message": ""
	I0916 10:41:00.462059   47238 command_runner.go:130] >       }
	I0916 10:41:00.462062   47238 command_runner.go:130] >     ]
	I0916 10:41:00.462065   47238 command_runner.go:130] >   },
	I0916 10:41:00.462071   47238 command_runner.go:130] >   "cniconfig": {
	I0916 10:41:00.462075   47238 command_runner.go:130] >     "PluginDirs": [
	I0916 10:41:00.462079   47238 command_runner.go:130] >       "/opt/cni/bin"
	I0916 10:41:00.462082   47238 command_runner.go:130] >     ],
	I0916 10:41:00.462096   47238 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I0916 10:41:00.462106   47238 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I0916 10:41:00.462113   47238 command_runner.go:130] >     "Prefix": "eth",
	I0916 10:41:00.462120   47238 command_runner.go:130] >     "Networks": [
	I0916 10:41:00.462129   47238 command_runner.go:130] >       {
	I0916 10:41:00.462135   47238 command_runner.go:130] >         "Config": {
	I0916 10:41:00.462144   47238 command_runner.go:130] >           "Name": "cni-loopback",
	I0916 10:41:00.462148   47238 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0916 10:41:00.462159   47238 command_runner.go:130] >           "Plugins": [
	I0916 10:41:00.462163   47238 command_runner.go:130] >             {
	I0916 10:41:00.462168   47238 command_runner.go:130] >               "Network": {
	I0916 10:41:00.462173   47238 command_runner.go:130] >                 "type": "loopback",
	I0916 10:41:00.462179   47238 command_runner.go:130] >                 "ipam": {},
	I0916 10:41:00.462183   47238 command_runner.go:130] >                 "dns": {}
	I0916 10:41:00.462187   47238 command_runner.go:130] >               },
	I0916 10:41:00.462193   47238 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I0916 10:41:00.462199   47238 command_runner.go:130] >             }
	I0916 10:41:00.462205   47238 command_runner.go:130] >           ],
	I0916 10:41:00.462224   47238 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0916 10:41:00.462230   47238 command_runner.go:130] >         },
	I0916 10:41:00.462236   47238 command_runner.go:130] >         "IFName": "lo"
	I0916 10:41:00.462242   47238 command_runner.go:130] >       },
	I0916 10:41:00.462249   47238 command_runner.go:130] >       {
	I0916 10:41:00.462255   47238 command_runner.go:130] >         "Config": {
	I0916 10:41:00.462266   47238 command_runner.go:130] >           "Name": "kindnet",
	I0916 10:41:00.462272   47238 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0916 10:41:00.462295   47238 command_runner.go:130] >           "Plugins": [
	I0916 10:41:00.462301   47238 command_runner.go:130] >             {
	I0916 10:41:00.462305   47238 command_runner.go:130] >               "Network": {
	I0916 10:41:00.462313   47238 command_runner.go:130] >                 "type": "ptp",
	I0916 10:41:00.462323   47238 command_runner.go:130] >                 "ipam": {
	I0916 10:41:00.462330   47238 command_runner.go:130] >                   "type": "host-local"
	I0916 10:41:00.462340   47238 command_runner.go:130] >                 },
	I0916 10:41:00.462346   47238 command_runner.go:130] >                 "dns": {}
	I0916 10:41:00.462360   47238 command_runner.go:130] >               },
	I0916 10:41:00.462383   47238 command_runner.go:130] >               "Source": "{\"ipMasq\":false,\"ipam\":{\"dataDir\":\"/run/cni-ipam-state\",\"ranges\":[[{\"subnet\":\"10.244.0.0/24\"}]],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"type\":\"host-local\"},\"mtu\":1500,\"type\":\"ptp\"}"
	I0916 10:41:00.462400   47238 command_runner.go:130] >             },
	I0916 10:41:00.462406   47238 command_runner.go:130] >             {
	I0916 10:41:00.462410   47238 command_runner.go:130] >               "Network": {
	I0916 10:41:00.462414   47238 command_runner.go:130] >                 "type": "portmap",
	I0916 10:41:00.462423   47238 command_runner.go:130] >                 "capabilities": {
	I0916 10:41:00.462433   47238 command_runner.go:130] >                   "portMappings": true
	I0916 10:41:00.462442   47238 command_runner.go:130] >                 },
	I0916 10:41:00.462449   47238 command_runner.go:130] >                 "ipam": {},
	I0916 10:41:00.462460   47238 command_runner.go:130] >                 "dns": {}
	I0916 10:41:00.462466   47238 command_runner.go:130] >               },
	I0916 10:41:00.462480   47238 command_runner.go:130] >               "Source": "{\"capabilities\":{\"portMappings\":true},\"type\":\"portmap\"}"
	I0916 10:41:00.462489   47238 command_runner.go:130] >             }
	I0916 10:41:00.462495   47238 command_runner.go:130] >           ],
	I0916 10:41:00.462540   47238 command_runner.go:130] >           "Source": "\n{\n\t\"cniVersion\": \"0.3.1\",\n\t\"name\": \"kindnet\",\n\t\"plugins\": [\n\t{\n\t\t\"type\": \"ptp\",\n\t\t\"ipMasq\": false,\n\t\t\"ipam\": {\n\t\t\t\"type\": \"host-local\",\n\t\t\t\"dataDir\": \"/run/cni-ipam-state\",\n\t\t\t\"routes\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t{ \"dst\": \"0.0.0.0/0\" }\n\t\t\t],\n\t\t\t\"ranges\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t[ { \"subnet\": \"10.244.0.0/24\" } ]\n\t\t\t]\n\t\t}\n\t\t,\n\t\t\"mtu\": 1500\n\t\t\n\t},\n\t{\n\t\t\"type\": \"portmap\",\n\t\t\"capabilities\": {\n\t\t\t\"portMappings\": true\n\t\t}\n\t}\n\t]\n}\n"
	I0916 10:41:00.462552   47238 command_runner.go:130] >         },
	I0916 10:41:00.462559   47238 command_runner.go:130] >         "IFName": "eth0"
	I0916 10:41:00.462564   47238 command_runner.go:130] >       }
	I0916 10:41:00.462571   47238 command_runner.go:130] >     ]
	I0916 10:41:00.462578   47238 command_runner.go:130] >   },
	I0916 10:41:00.462585   47238 command_runner.go:130] >   "config": {
	I0916 10:41:00.462594   47238 command_runner.go:130] >     "containerd": {
	I0916 10:41:00.462602   47238 command_runner.go:130] >       "snapshotter": "overlayfs",
	I0916 10:41:00.462612   47238 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I0916 10:41:00.462622   47238 command_runner.go:130] >       "defaultRuntime": {
	I0916 10:41:00.462631   47238 command_runner.go:130] >         "runtimeType": "",
	I0916 10:41:00.462635   47238 command_runner.go:130] >         "runtimePath": "",
	I0916 10:41:00.462643   47238 command_runner.go:130] >         "runtimeEngine": "",
	I0916 10:41:00.462653   47238 command_runner.go:130] >         "PodAnnotations": null,
	I0916 10:41:00.462663   47238 command_runner.go:130] >         "ContainerAnnotations": null,
	I0916 10:41:00.462673   47238 command_runner.go:130] >         "runtimeRoot": "",
	I0916 10:41:00.462682   47238 command_runner.go:130] >         "options": null,
	I0916 10:41:00.462693   47238 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0916 10:41:00.462706   47238 command_runner.go:130] >         "privileged_without_host_devices_all_devices_allowed": false,
	I0916 10:41:00.462715   47238 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0916 10:41:00.462720   47238 command_runner.go:130] >         "cniConfDir": "",
	I0916 10:41:00.462727   47238 command_runner.go:130] >         "cniMaxConfNum": 0,
	I0916 10:41:00.462734   47238 command_runner.go:130] >         "snapshotter": "",
	I0916 10:41:00.462743   47238 command_runner.go:130] >         "sandboxMode": ""
	I0916 10:41:00.462750   47238 command_runner.go:130] >       },
	I0916 10:41:00.462770   47238 command_runner.go:130] >       "untrustedWorkloadRuntime": {
	I0916 10:41:00.462780   47238 command_runner.go:130] >         "runtimeType": "",
	I0916 10:41:00.462788   47238 command_runner.go:130] >         "runtimePath": "",
	I0916 10:41:00.462797   47238 command_runner.go:130] >         "runtimeEngine": "",
	I0916 10:41:00.462804   47238 command_runner.go:130] >         "PodAnnotations": null,
	I0916 10:41:00.462813   47238 command_runner.go:130] >         "ContainerAnnotations": null,
	I0916 10:41:00.462817   47238 command_runner.go:130] >         "runtimeRoot": "",
	I0916 10:41:00.462822   47238 command_runner.go:130] >         "options": null,
	I0916 10:41:00.462833   47238 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0916 10:41:00.462847   47238 command_runner.go:130] >         "privileged_without_host_devices_all_devices_allowed": false,
	I0916 10:41:00.462854   47238 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0916 10:41:00.462864   47238 command_runner.go:130] >         "cniConfDir": "",
	I0916 10:41:00.462871   47238 command_runner.go:130] >         "cniMaxConfNum": 0,
	I0916 10:41:00.462881   47238 command_runner.go:130] >         "snapshotter": "",
	I0916 10:41:00.462888   47238 command_runner.go:130] >         "sandboxMode": ""
	I0916 10:41:00.462896   47238 command_runner.go:130] >       },
	I0916 10:41:00.462902   47238 command_runner.go:130] >       "runtimes": {
	I0916 10:41:00.462910   47238 command_runner.go:130] >         "runc": {
	I0916 10:41:00.462917   47238 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0916 10:41:00.462924   47238 command_runner.go:130] >           "runtimePath": "",
	I0916 10:41:00.462933   47238 command_runner.go:130] >           "runtimeEngine": "",
	I0916 10:41:00.462943   47238 command_runner.go:130] >           "PodAnnotations": null,
	I0916 10:41:00.462950   47238 command_runner.go:130] >           "ContainerAnnotations": null,
	I0916 10:41:00.462961   47238 command_runner.go:130] >           "runtimeRoot": "",
	I0916 10:41:00.462970   47238 command_runner.go:130] >           "options": {
	I0916 10:41:00.462983   47238 command_runner.go:130] >             "SystemdCgroup": false
	I0916 10:41:00.462989   47238 command_runner.go:130] >           },
	I0916 10:41:00.463009   47238 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0916 10:41:00.463017   47238 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I0916 10:41:00.463023   47238 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0916 10:41:00.463032   47238 command_runner.go:130] >           "cniConfDir": "",
	I0916 10:41:00.463042   47238 command_runner.go:130] >           "cniMaxConfNum": 0,
	I0916 10:41:00.463053   47238 command_runner.go:130] >           "snapshotter": "",
	I0916 10:41:00.463063   47238 command_runner.go:130] >           "sandboxMode": "podsandbox"
	I0916 10:41:00.463069   47238 command_runner.go:130] >         }
	I0916 10:41:00.463077   47238 command_runner.go:130] >       },
	I0916 10:41:00.463084   47238 command_runner.go:130] >       "noPivot": false,
	I0916 10:41:00.463095   47238 command_runner.go:130] >       "disableSnapshotAnnotations": true,
	I0916 10:41:00.463103   47238 command_runner.go:130] >       "discardUnpackedLayers": true,
	I0916 10:41:00.463109   47238 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I0916 10:41:00.463121   47238 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false
	I0916 10:41:00.463130   47238 command_runner.go:130] >     },
	I0916 10:41:00.463136   47238 command_runner.go:130] >     "cni": {
	I0916 10:41:00.463146   47238 command_runner.go:130] >       "binDir": "/opt/cni/bin",
	I0916 10:41:00.463154   47238 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I0916 10:41:00.463164   47238 command_runner.go:130] >       "maxConfNum": 1,
	I0916 10:41:00.463171   47238 command_runner.go:130] >       "setupSerially": false,
	I0916 10:41:00.463183   47238 command_runner.go:130] >       "confTemplate": "",
	I0916 10:41:00.463193   47238 command_runner.go:130] >       "ipPref": ""
	I0916 10:41:00.463198   47238 command_runner.go:130] >     },
	I0916 10:41:00.463204   47238 command_runner.go:130] >     "registry": {
	I0916 10:41:00.463211   47238 command_runner.go:130] >       "configPath": "/etc/containerd/certs.d",
	I0916 10:41:00.463220   47238 command_runner.go:130] >       "mirrors": null,
	I0916 10:41:00.463231   47238 command_runner.go:130] >       "configs": null,
	I0916 10:41:00.463238   47238 command_runner.go:130] >       "auths": null,
	I0916 10:41:00.463248   47238 command_runner.go:130] >       "headers": null
	I0916 10:41:00.463253   47238 command_runner.go:130] >     },
	I0916 10:41:00.463263   47238 command_runner.go:130] >     "imageDecryption": {
	I0916 10:41:00.463270   47238 command_runner.go:130] >       "keyModel": "node"
	I0916 10:41:00.463276   47238 command_runner.go:130] >     },
	I0916 10:41:00.463297   47238 command_runner.go:130] >     "disableTCPService": true,
	I0916 10:41:00.463302   47238 command_runner.go:130] >     "streamServerAddress": "",
	I0916 10:41:00.463309   47238 command_runner.go:130] >     "streamServerPort": "10010",
	I0916 10:41:00.463317   47238 command_runner.go:130] >     "streamIdleTimeout": "4h0m0s",
	I0916 10:41:00.463326   47238 command_runner.go:130] >     "enableSelinux": false,
	I0916 10:41:00.463334   47238 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I0916 10:41:00.463344   47238 command_runner.go:130] >     "sandboxImage": "registry.k8s.io/pause:3.10",
	I0916 10:41:00.463354   47238 command_runner.go:130] >     "statsCollectPeriod": 10,
	I0916 10:41:00.463365   47238 command_runner.go:130] >     "systemdCgroup": false,
	I0916 10:41:00.463372   47238 command_runner.go:130] >     "enableTLSStreaming": false,
	I0916 10:41:00.463382   47238 command_runner.go:130] >     "x509KeyPairStreaming": {
	I0916 10:41:00.463388   47238 command_runner.go:130] >       "tlsCertFile": "",
	I0916 10:41:00.463396   47238 command_runner.go:130] >       "tlsKeyFile": ""
	I0916 10:41:00.463400   47238 command_runner.go:130] >     },
	I0916 10:41:00.463404   47238 command_runner.go:130] >     "maxContainerLogSize": 16384,
	I0916 10:41:00.463413   47238 command_runner.go:130] >     "disableCgroup": false,
	I0916 10:41:00.463425   47238 command_runner.go:130] >     "disableApparmor": false,
	I0916 10:41:00.463433   47238 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I0916 10:41:00.463443   47238 command_runner.go:130] >     "maxConcurrentDownloads": 3,
	I0916 10:41:00.463450   47238 command_runner.go:130] >     "disableProcMount": false,
	I0916 10:41:00.463459   47238 command_runner.go:130] >     "unsetSeccompProfile": "",
	I0916 10:41:00.463467   47238 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I0916 10:41:00.463477   47238 command_runner.go:130] >     "disableHugetlbController": true,
	I0916 10:41:00.463488   47238 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I0916 10:41:00.463496   47238 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I0916 10:41:00.463501   47238 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I0916 10:41:00.463513   47238 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I0916 10:41:00.463528   47238 command_runner.go:130] >     "enableUnprivilegedICMP": false,
	I0916 10:41:00.463538   47238 command_runner.go:130] >     "enableCDI": false,
	I0916 10:41:00.463545   47238 command_runner.go:130] >     "cdiSpecDirs": [
	I0916 10:41:00.463554   47238 command_runner.go:130] >       "/etc/cdi",
	I0916 10:41:00.463561   47238 command_runner.go:130] >       "/var/run/cdi"
	I0916 10:41:00.463569   47238 command_runner.go:130] >     ],
	I0916 10:41:00.463576   47238 command_runner.go:130] >     "imagePullProgressTimeout": "5m0s",
	I0916 10:41:00.463587   47238 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I0916 10:41:00.463593   47238 command_runner.go:130] >     "imagePullWithSyncFs": false,
	I0916 10:41:00.463628   47238 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I0916 10:41:00.463646   47238 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I0916 10:41:00.463658   47238 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0916 10:41:00.463670   47238 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0916 10:41:00.463681   47238 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0916 10:41:00.463687   47238 command_runner.go:130] >   },
	I0916 10:41:00.463697   47238 command_runner.go:130] >   "golang": "go1.22.7",
	I0916 10:41:00.463704   47238 command_runner.go:130] >   "lastCNILoadStatus": "OK",
	I0916 10:41:00.463711   47238 command_runner.go:130] >   "lastCNILoadStatus.default": "OK"
	I0916 10:41:00.463714   47238 command_runner.go:130] > }
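
Of this whole `crictl info` dump, the parts that matter for the next steps are the runtime status conditions (RuntimeReady, NetworkReady) and the loaded CNI networks, which is why minikube immediately recommends kindnet below. A hedged sketch of extracting the conditions (the struct shape is inferred from the JSON above, not a published schema):

package main

import (
	"encoding/json"
	"fmt"
)

// runtimeInfo models just the "status.conditions" portion of
// `sudo crictl info`, as printed in the log above.
type runtimeInfo struct {
	Status struct {
		Conditions []struct {
			Type    string `json:"type"`
			Status  bool   `json:"status"`
			Reason  string `json:"reason"`
			Message string `json:"message"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	raw := []byte(`{"status":{"conditions":[
		{"type":"RuntimeReady","status":true,"reason":"","message":""},
		{"type":"NetworkReady","status":true,"reason":"","message":""}]}}`)

	var info runtimeInfo
	if err := json.Unmarshal(raw, &info); err != nil {
		panic(err)
	}
	for _, c := range info.Status.Conditions {
		fmt.Printf("%s=%v %s\n", c.Type, c.Status, c.Reason)
	}
}
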
	I0916 10:41:00.464161   47238 cni.go:84] Creating CNI manager for ""
	I0916 10:41:00.464179   47238 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:41:00.464188   47238 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:41:00.464207   47238 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-016570 NodeName:functional-016570 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:41:00.464364   47238 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-016570"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
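
The generated config above is one multi-document YAML stream carrying four objects: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch of walking such a stream with gopkg.in/yaml.v3 (an assumed third-party dependency; kubeadm itself uses its own scheme-aware decoder):

package main

import (
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Abbreviated version of the stream above; "---" separates documents.
	stream := `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	dec := yaml.NewDecoder(strings.NewReader(stream))
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if err == io.EOF {
			break // end of the multi-document stream
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s / %s\n", doc["apiVersion"], doc["kind"])
	}
}
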
	
	I0916 10:41:00.464436   47238 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:41:00.472245   47238 command_runner.go:130] > kubeadm
	I0916 10:41:00.472268   47238 command_runner.go:130] > kubectl
	I0916 10:41:00.472273   47238 command_runner.go:130] > kubelet
	I0916 10:41:00.472895   47238 binaries.go:44] Found k8s binaries, skipping transfer
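
The `ls` above confirms the three version-pinned binaries are already cached under /var/lib/minikube/binaries/v1.31.1, so the transfer step is skipped. An illustrative equivalent of that check (paths as in the log; the loop is not minikube's actual binaries.go code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/var/lib/minikube/binaries/v1.31.1"
	for _, bin := range []string{"kubeadm", "kubectl", "kubelet"} {
		info, err := os.Stat(filepath.Join(dir, bin))
		if err != nil {
			fmt.Printf("%s: missing, would transfer (%v)\n", bin, err)
			continue
		}
		// A zero-byte file would also force a re-transfer.
		fmt.Printf("%s: present, %d bytes\n", bin, info.Size())
	}
}
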
	I0916 10:41:00.472945   47238 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:41:00.481214   47238 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0916 10:41:00.498427   47238 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:41:00.514918   47238 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0916 10:41:00.531052   47238 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:41:00.534425   47238 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
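
The grep above verifies that control-plane.minikube.internal already resolves to the node IP via /etc/hosts; if the entry were missing, minikube would append one before starting the kubelet. A minimal sketch of the same lookup in Go (the file path is real; the helper is illustrative):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasHostEntry reports whether the hosts file at path maps ip to name.
func hasHostEntry(path, ip, name string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) < 2 || fields[0] != ip {
			continue
		}
		for _, h := range fields[1:] {
			if h == name {
				return true, nil
			}
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasHostEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal")
	fmt.Println(ok, err)
}
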
	I0916 10:41:00.534495   47238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:00.629580   47238 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:41:00.640325   47238 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570 for IP: 192.168.49.2
	I0916 10:41:00.640346   47238 certs.go:194] generating shared ca certs ...
	I0916 10:41:00.640361   47238 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:00.640509   47238 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:41:00.640567   47238 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:41:00.640579   47238 certs.go:256] generating profile certs ...
	I0916 10:41:00.640681   47238 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.key
	I0916 10:41:00.640761   47238 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/apiserver.key.50ed18d6
	I0916 10:41:00.640814   47238 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/proxy-client.key
	I0916 10:41:00.640827   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:41:00.640846   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:41:00.640863   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:41:00.640880   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:41:00.640896   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:41:00.640916   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:41:00.640934   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:41:00.640952   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:41:00.641009   47238 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:41:00.641051   47238 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:41:00.641064   47238 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:41:00.641093   47238 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:41:00.641124   47238 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:41:00.641155   47238 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:41:00.641215   47238 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:41:00.641259   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:41:00.641279   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:41:00.641297   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:00.641915   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:41:00.665592   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:41:00.688756   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:41:00.711937   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:41:00.734009   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:41:00.756765   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:41:00.779358   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:41:00.801364   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:41:00.823784   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:41:00.845848   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:41:00.868340   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:41:00.890713   47238 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:41:00.906787   47238 ssh_runner.go:195] Run: openssl version
	I0916 10:41:00.911643   47238 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:41:00.911707   47238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:41:00.920393   47238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:41:00.923485   47238 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:41:00.923522   47238 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:41:00.923560   47238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:41:00.929537   47238 command_runner.go:130] > 3ec20f2e
	I0916 10:41:00.929711   47238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:41:00.937990   47238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:41:00.946871   47238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:00.950159   47238 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:00.950221   47238 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:00.950267   47238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:00.956676   47238 command_runner.go:130] > b5213941
	I0916 10:41:00.956818   47238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:41:00.965669   47238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:41:00.974626   47238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:41:00.977986   47238 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:41:00.978034   47238 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:41:00.978072   47238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:41:00.984302   47238 command_runner.go:130] > 51391683
	I0916 10:41:00.984552   47238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
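
The short hex values printed by `openssl x509 -hash` above (3ec20f2e, b5213941, 51391683) are OpenSSL subject-name hashes, and the `<hash>.0` symlinks let OpenSSL locate each CA in /etc/ssl/certs by hashed-directory lookup. Computing that hash natively is fiddly, so this sketch shells out to openssl just as the log does (command and flags exactly as above; the helper name is invented, and root privileges are assumed):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert symlinks /etc/ssl/certs/<subject-hash>.0 to certPath,
// mirroring the `openssl x509 -hash` + `ln -fs` steps in the log.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // best-effort removal, emulating ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
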
	I0916 10:41:00.993091   47238 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:41:00.996303   47238 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:41:00.996345   47238 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 10:41:00.996355   47238 command_runner.go:130] > Device: 801h/2049d	Inode: 557518      Links: 1
	I0916 10:41:00.996366   47238 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:41:00.996379   47238 command_runner.go:130] > Access: 2024-09-16 10:40:29.794515298 +0000
	I0916 10:41:00.996389   47238 command_runner.go:130] > Modify: 2024-09-16 10:40:29.794515298 +0000
	I0916 10:41:00.996417   47238 command_runner.go:130] > Change: 2024-09-16 10:40:29.794515298 +0000
	I0916 10:41:00.996435   47238 command_runner.go:130] >  Birth: 2024-09-16 10:40:29.794515298 +0000
	I0916 10:41:00.996495   47238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:41:01.002400   47238 command_runner.go:130] > Certificate will not expire
	I0916 10:41:01.002572   47238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:41:01.008849   47238 command_runner.go:130] > Certificate will not expire
	I0916 10:41:01.008920   47238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:41:01.014816   47238 command_runner.go:130] > Certificate will not expire
	I0916 10:41:01.015133   47238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:41:01.021371   47238 command_runner.go:130] > Certificate will not expire
	I0916 10:41:01.021591   47238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:41:01.027476   47238 command_runner.go:130] > Certificate will not expire
	I0916 10:41:01.027703   47238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 10:41:01.033586   47238 command_runner.go:130] > Certificate will not expire
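
`openssl x509 -checkend 86400` exits zero only if the certificate is still valid 24 hours from now, which is what each "Certificate will not expire" line above reports. The same test in pure Go with crypto/x509 (a sketch, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d of now -- the equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
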
	I0916 10:41:01.033726   47238 kubeadm.go:392] StartCluster: {Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:41:01.033817   47238 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:41:01.033876   47238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:41:01.067524   47238 command_runner.go:130] > fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe
	I0916 10:41:01.067560   47238 command_runner.go:130] > 03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267
	I0916 10:41:01.067569   47238 command_runner.go:130] > bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75
	I0916 10:41:01.067578   47238 command_runner.go:130] > 80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f
	I0916 10:41:01.067586   47238 command_runner.go:130] > 0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86
	I0916 10:41:01.067595   47238 command_runner.go:130] > 0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee
	I0916 10:41:01.067604   47238 command_runner.go:130] > c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171
	I0916 10:41:01.067623   47238 command_runner.go:130] > b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25
	I0916 10:41:01.067651   47238 cri.go:89] found id: "fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe"
	I0916 10:41:01.067663   47238 cri.go:89] found id: "03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267"
	I0916 10:41:01.067669   47238 cri.go:89] found id: "bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75"
	I0916 10:41:01.067678   47238 cri.go:89] found id: "80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f"
	I0916 10:41:01.067683   47238 cri.go:89] found id: "0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86"
	I0916 10:41:01.067689   47238 cri.go:89] found id: "0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee"
	I0916 10:41:01.067695   47238 cri.go:89] found id: "c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171"
	I0916 10:41:01.067700   47238 cri.go:89] found id: "b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25"
	I0916 10:41:01.067707   47238 cri.go:89] found id: ""
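
The container IDs above come from `crictl ps -a --quiet` filtered by the io.kubernetes.pod.namespace label, which minikube then cross-checks against `runc list`. Reproducing that query from Go (flags exactly as in the log; the wrapper function is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listNamespaceContainers returns the IDs of all containers whose pod
// namespace label matches ns, via the crictl invocation shown above.
func listNamespaceContainers(ns string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace="+ns).Output()
	if err != nil {
		return nil, err
	}
	// crictl prints one container ID per line.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listNamespaceContainers("kube-system")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, id := range ids {
		fmt.Println(id)
	}
}
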
	I0916 10:41:01.067782   47238 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0916 10:41:01.092260   47238 command_runner.go:130] > [{"ociVersion":"1.0.2-dev","id":"0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86","pid":1514,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86/rootfs","created":"2024-09-16T10:40:33.248379085Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.31.1","io.kubernetes.cri.sandbox-id":"8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"05bfea671b4b973ad25665da415eb7d0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267","pid":2297,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267/rootfs","created":"2024-09-16T10:40:44.571062175Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9924f10d-5beb-43b1-9782-44644a015b56"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee","pid":1517,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee/rootfs","created":"2024-09-16T10:40:33.245231601Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.31.1","io.kubernetes.cri.sandbox-id":"caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5c4ebe83a62e176d48c858392b494ba5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","pid":1361,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf/rootfs","created":"2024-09-16T10:40:33.026445222Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-016570_9ff8ce834d4b88cb05c2ce6dadcabd95","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9ff8ce834d4b88cb05c2ce6dadcabd95"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","pid":2382,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e/rootfs","created":"2024-09-16T10:40:54.834227465Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-7c65d6cfc9-59qm7_370e7aff-70ab-43f7-9770-098c21fd013d","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-7c65d6cfc9-59qm7","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"370e7aff-70ab-43f7-9770-098c21fd013d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5b66d77e8e33400b91593c23cc79
092e1262597c431c960d97c2f3351c50e961","pid":1360,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961/rootfs","created":"2024-09-16T10:40:33.024281022Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-016570_5333b7f22b4ca6fa3369f64c875d053e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5333b7f22b4ca6fa3369f64c875
d053e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f","pid":2009,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f/rootfs","created":"2024-09-16T10:40:43.922824123Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.31.1","io.kubernetes.cri.sandbox-id":"f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","io.kubernetes.cri.sandbox-name":"kube-proxy-w8qkq","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b4a00283-1d69-49c4-8c60-264ef3fd7aca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f
","pid":1383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f/rootfs","created":"2024-09-16T10:40:33.032806907Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-016570_05bfea671b4b973ad25665da415eb7d0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"05bfea671b4b973ad25665da415eb7d0"},"owner":"r
oot"},{"ociVersion":"1.0.2-dev","id":"b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25","pid":1447,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25/rootfs","created":"2024-09-16T10:40:33.17291957Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri.sandbox-id":"2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","io.kubernetes.cri.sandbox-name":"etcd-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9ff8ce834d4b88cb05c2ce6dadcabd95"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","pid":2251,"status":"runni
ng","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf/rootfs","created":"2024-09-16T10:40:44.503322931Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_9924f10d-5beb-43b1-9782-44644a015b56","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9924f10d-5beb-43b1-9782-44644a015b56"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bf96dac81b725b0cdd05c80d46fc
cb31fba58eb314cbefaf4fa45648dd564d75","pid":2058,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75/rootfs","created":"2024-09-16T10:40:44.122183432Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20240813-c6f155d6","io.kubernetes.cri.sandbox-id":"c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","io.kubernetes.cri.sandbox-name":"kindnet-5qjpd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8ee89403-0943-480c-9f48-4b25a0198f6d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171","pid":1522,"status":"running","bundle":"/run/containerd/io.containerd.run
time.v2.task/k8s.io/c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171/rootfs","created":"2024-09-16T10:40:33.25104973Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.31.1","io.kubernetes.cri.sandbox-id":"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5333b7f22b4ca6fa3369f64c875d053e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","pid":1934,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","rootfs":"/run/
containerd/io.containerd.runtime.v2.task/k8s.io/c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060/rootfs","created":"2024-09-16T10:40:43.727260795Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-5qjpd_8ee89403-0943-480c-9f48-4b25a0198f6d","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-5qjpd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8ee89403-0943-480c-9f48-4b25a0198f6d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","pid":1381,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/caa2007696d1b222
357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3/rootfs","created":"2024-09-16T10:40:33.032509578Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-016570_5c4ebe83a62e176d48c858392b494ba5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5c4ebe83a62e176d48c858392b494ba5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","pid":1927,"status":"runn
ing","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5/rootfs","created":"2024-09-16T10:40:43.632882566Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-w8qkq_b4a00283-1d69-49c4-8c60-264ef3fd7aca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-w8qkq","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b4a00283-1d69-49c4-8c60-264ef3fd7aca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fd0c81e7a39a2566405ad2950426958ab
0d7abfe073ce6517f67e87f2cc2dabe","pid":2413,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe/rootfs","created":"2024-09-16T10:40:54.906002595Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.11.3","io.kubernetes.cri.sandbox-id":"3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","io.kubernetes.cri.sandbox-name":"coredns-7c65d6cfc9-59qm7","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"370e7aff-70ab-43f7-9770-098c21fd013d"},"owner":"root"}]
	I0916 10:41:01.092720   47238 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86","pid":1514,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86/rootfs","created":"2024-09-16T10:40:33.248379085Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.31.1","io.kubernetes.cri.sandbox-id":"8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"05bfea671b4b973ad25665da415eb7d0"},"owner":"root"},{"ociVersion":
"1.0.2-dev","id":"03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267","pid":2297,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267/rootfs","created":"2024-09-16T10:40:44.571062175Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9924f10d-5beb-43b1-9782-44644a015b56"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee","pid":1517,"stat
us":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee/rootfs","created":"2024-09-16T10:40:33.245231601Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.31.1","io.kubernetes.cri.sandbox-id":"caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5c4ebe83a62e176d48c858392b494ba5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","pid":1361,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cdebcb8c7807d8f74249d9
4f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf/rootfs","created":"2024-09-16T10:40:33.026445222Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-016570_9ff8ce834d4b88cb05c2ce6dadcabd95","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9ff8ce834d4b88cb05c2ce6dadcabd95"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","pid":2382,"status":"running","bundle":"/run/contain
erd/io.containerd.runtime.v2.task/k8s.io/3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e/rootfs","created":"2024-09-16T10:40:54.834227465Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-7c65d6cfc9-59qm7_370e7aff-70ab-43f7-9770-098c21fd013d","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-7c65d6cfc9-59qm7","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"370e7aff-70ab-43f7-9770-098c21fd013d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5b66d77e8e33400b91593c23cc79092e12
62597c431c960d97c2f3351c50e961","pid":1360,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961/rootfs","created":"2024-09-16T10:40:33.024281022Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-016570_5333b7f22b4ca6fa3369f64c875d053e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5333b7f22b4ca6fa3369f64c875d053e"
},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f","pid":2009,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f/rootfs","created":"2024-09-16T10:40:43.922824123Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.31.1","io.kubernetes.cri.sandbox-id":"f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","io.kubernetes.cri.sandbox-name":"kube-proxy-w8qkq","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b4a00283-1d69-49c4-8c60-264ef3fd7aca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","pid
":1383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f/rootfs","created":"2024-09-16T10:40:33.032806907Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-016570_05bfea671b4b973ad25665da415eb7d0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"05bfea671b4b973ad25665da415eb7d0"},"owner":"root"},
{"ociVersion":"1.0.2-dev","id":"b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25","pid":1447,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25/rootfs","created":"2024-09-16T10:40:33.17291957Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri.sandbox-id":"2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","io.kubernetes.cri.sandbox-name":"etcd-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9ff8ce834d4b88cb05c2ce6dadcabd95"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","pid":2251,"status":"running","b
undle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf/rootfs","created":"2024-09-16T10:40:44.503322931Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_9924f10d-5beb-43b1-9782-44644a015b56","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9924f10d-5beb-43b1-9782-44644a015b56"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bf96dac81b725b0cdd05c80d46fccb31fb
a58eb314cbefaf4fa45648dd564d75","pid":2058,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75/rootfs","created":"2024-09-16T10:40:44.122183432Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20240813-c6f155d6","io.kubernetes.cri.sandbox-id":"c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","io.kubernetes.cri.sandbox-name":"kindnet-5qjpd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8ee89403-0943-480c-9f48-4b25a0198f6d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171","pid":1522,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v
2.task/k8s.io/c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171/rootfs","created":"2024-09-16T10:40:33.25104973Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.31.1","io.kubernetes.cri.sandbox-id":"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5333b7f22b4ca6fa3369f64c875d053e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","pid":1934,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","rootfs":"/run/contai
nerd/io.containerd.runtime.v2.task/k8s.io/c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060/rootfs","created":"2024-09-16T10:40:43.727260795Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-5qjpd_8ee89403-0943-480c-9f48-4b25a0198f6d","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-5qjpd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8ee89403-0943-480c-9f48-4b25a0198f6d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","pid":1381,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/caa2007696d1b222357e82
5d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3/rootfs","created":"2024-09-16T10:40:33.032509578Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-016570_5c4ebe83a62e176d48c858392b494ba5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5c4ebe83a62e176d48c858392b494ba5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","pid":1927,"status":"running","
bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5/rootfs","created":"2024-09-16T10:40:43.632882566Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-w8qkq_b4a00283-1d69-49c4-8c60-264ef3fd7aca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-w8qkq","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b4a00283-1d69-49c4-8c60-264ef3fd7aca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fd0c81e7a39a2566405ad2950426958ab0d7abf
e073ce6517f67e87f2cc2dabe","pid":2413,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe/rootfs","created":"2024-09-16T10:40:54.906002595Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.11.3","io.kubernetes.cri.sandbox-id":"3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","io.kubernetes.cri.sandbox-name":"coredns-7c65d6cfc9-59qm7","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"370e7aff-70ab-43f7-9770-098c21fd013d"},"owner":"root"}]
	I0916 10:41:01.092944   47238 cri.go:126] list returned 16 containers
	I0916 10:41:01.092952   47238 cri.go:129] container: {ID:0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86 Status:running}
	I0916 10:41:01.092965   47238 cri.go:135] skipping {0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86 running}: state = "running", want "paused"
	I0916 10:41:01.092973   47238 cri.go:129] container: {ID:03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267 Status:running}
	I0916 10:41:01.092977   47238 cri.go:135] skipping {03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267 running}: state = "running", want "paused"
	I0916 10:41:01.092981   47238 cri.go:129] container: {ID:0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee Status:running}
	I0916 10:41:01.092985   47238 cri.go:135] skipping {0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee running}: state = "running", want "paused"
	I0916 10:41:01.092989   47238 cri.go:129] container: {ID:2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf Status:running}
	I0916 10:41:01.092995   47238 cri.go:131] skipping 2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf - not in ps
	I0916 10:41:01.092999   47238 cri.go:129] container: {ID:3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e Status:running}
	I0916 10:41:01.093005   47238 cri.go:131] skipping 3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e - not in ps
	I0916 10:41:01.093009   47238 cri.go:129] container: {ID:5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961 Status:running}
	I0916 10:41:01.093013   47238 cri.go:131] skipping 5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961 - not in ps
	I0916 10:41:01.093016   47238 cri.go:129] container: {ID:80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f Status:running}
	I0916 10:41:01.093020   47238 cri.go:135] skipping {80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f running}: state = "running", want "paused"
	I0916 10:41:01.093025   47238 cri.go:129] container: {ID:8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f Status:running}
	I0916 10:41:01.093030   47238 cri.go:131] skipping 8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f - not in ps
	I0916 10:41:01.093037   47238 cri.go:129] container: {ID:b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25 Status:running}
	I0916 10:41:01.093041   47238 cri.go:135] skipping {b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25 running}: state = "running", want "paused"
	I0916 10:41:01.093049   47238 cri.go:129] container: {ID:b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf Status:running}
	I0916 10:41:01.093053   47238 cri.go:131] skipping b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf - not in ps
	I0916 10:41:01.093057   47238 cri.go:129] container: {ID:bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75 Status:running}
	I0916 10:41:01.093060   47238 cri.go:135] skipping {bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75 running}: state = "running", want "paused"
	I0916 10:41:01.093065   47238 cri.go:129] container: {ID:c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171 Status:running}
	I0916 10:41:01.093069   47238 cri.go:135] skipping {c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171 running}: state = "running", want "paused"
	I0916 10:41:01.093075   47238 cri.go:129] container: {ID:c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060 Status:running}
	I0916 10:41:01.093080   47238 cri.go:131] skipping c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060 - not in ps
	I0916 10:41:01.093087   47238 cri.go:129] container: {ID:caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3 Status:running}
	I0916 10:41:01.093092   47238 cri.go:131] skipping caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3 - not in ps
	I0916 10:41:01.093098   47238 cri.go:129] container: {ID:f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5 Status:running}
	I0916 10:41:01.093101   47238 cri.go:131] skipping f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5 - not in ps
	I0916 10:41:01.093105   47238 cri.go:129] container: {ID:fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe Status:running}
	I0916 10:41:01.093109   47238 cri.go:135] skipping {fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe running}: state = "running", want "paused"
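
The skip decisions above come from reconciling two listings: `runc list -f json` reports every task, including pod sandboxes, while the earlier `crictl ps` output contains only containers. IDs absent from the ps set are dropped as "not in ps", and the remainder are matched against the wanted state ("paused" here, so every "running" container is skipped). A compact sketch of that filter, with struct fields assumed rather than copied from minikube's cri package:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // runcContainer holds the two fields of the runc JSON that the
    // filter needs; the full records above carry many more.
    type runcContainer struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    func filterByState(all []runcContainer, inPs map[string]bool, want string) []string {
        var keep []string
        for _, c := range all {
            if !inPs[c.ID] {
                continue // sandboxes never appear in crictl ps: "not in ps"
            }
            if c.Status != want {
                continue // e.g. state = "running", want "paused"
            }
            keep = append(keep, c.ID)
        }
        return keep
    }

    func main() {
        raw := `[{"id":"aaa","status":"running"},{"id":"bbb","status":"paused"}]`
        var all []runcContainer
        if err := json.Unmarshal([]byte(raw), &all); err != nil {
            panic(err)
        }
        // With both IDs in the ps set and want="paused", only "bbb" survives.
        fmt.Println(filterByState(all, map[string]bool{"aaa": true, "bbb": true}, "paused"))
    }
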
	I0916 10:41:01.093144   47238 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:41:01.100810   47238 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0916 10:41:01.100830   47238 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0916 10:41:01.100837   47238 command_runner.go:130] > /var/lib/minikube/etcd:
	I0916 10:41:01.100840   47238 command_runner.go:130] > member
	I0916 10:41:01.101500   47238 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:41:01.101515   47238 kubeadm.go:593] restartPrimaryControlPlane start ...
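
kubeadm.go:408 treats the three paths listed by `sudo ls` as evidence of a previously provisioned cluster and takes the restart path instead of a fresh `kubeadm init`. A hypothetical sketch of that existence check (the paths are copied from the log; everything else is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        paths := []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        }
        // ls exits non-zero if any path is missing, so one call checks all three.
        if err := exec.Command("sudo", append([]string{"ls"}, paths...)...).Run(); err == nil {
            fmt.Println("found existing configuration files, will attempt cluster restart")
        } else {
            fmt.Println("no prior cluster state: run a fresh kubeadm init")
        }
    }
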
	I0916 10:41:01.101555   47238 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:41:01.109548   47238 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:41:01.110028   47238 kubeconfig.go:125] found "functional-016570" server: "https://192.168.49.2:8441"
	I0916 10:41:01.110447   47238 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:41:01.110695   47238 kapi.go:59] client config for functional-016570: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
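
The rest.Config dump above is what client-go derives from the kubeconfig: the API server host plus client certificate, key, and CA file paths. A minimal sketch that builds the same kind of config and clientset through client-go's public API (the kubeconfig path here is a placeholder, not the CI runner's):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load a kubeconfig like the one in the loader.go line above;
        // the path is illustrative.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        // cfg carries Host and the TLS client cert/key/CA paths, matching
        // the fields visible in the logged rest.Config.
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("server:", cfg.Host, "client ready:", client != nil)
    }
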
	I0916 10:41:01.111140   47238 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:41:01.111329   47238 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:41:01.119477   47238 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0916 10:41:01.119528   47238 kubeadm.go:597] duration metric: took 18.007161ms to restartPrimaryControlPlane
	I0916 10:41:01.119540   47238 kubeadm.go:394] duration metric: took 85.821653ms to StartCluster
	I0916 10:41:01.119555   47238 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:01.119637   47238 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:41:01.120636   47238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:01.120937   47238 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:41:01.121019   47238 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:41:01.121148   47238 addons.go:69] Setting storage-provisioner=true in profile "functional-016570"
	I0916 10:41:01.121170   47238 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:41:01.121190   47238 addons.go:69] Setting default-storageclass=true in profile "functional-016570"
	I0916 10:41:01.121179   47238 addons.go:234] Setting addon storage-provisioner=true in "functional-016570"
	I0916 10:41:01.121219   47238 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-016570"
	W0916 10:41:01.121230   47238 addons.go:243] addon storage-provisioner should already be in state true
	I0916 10:41:01.121259   47238 host.go:66] Checking if "functional-016570" exists ...
	I0916 10:41:01.121531   47238 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
	I0916 10:41:01.121709   47238 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
	I0916 10:41:01.123967   47238 out.go:177] * Verifying Kubernetes components...
	I0916 10:41:01.125424   47238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:01.142780   47238 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:41:01.143078   47238 kapi.go:59] client config for functional-016570: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:41:01.143485   47238 addons.go:234] Setting addon default-storageclass=true in "functional-016570"
	W0916 10:41:01.143505   47238 addons.go:243] addon default-storageclass should already be in state true
	I0916 10:41:01.143538   47238 host.go:66] Checking if "functional-016570" exists ...
	I0916 10:41:01.144007   47238 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
	I0916 10:41:01.144431   47238 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:41:01.145990   47238 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:41:01.146008   47238 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:41:01.146052   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:41:01.162172   47238 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:41:01.162199   47238 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:41:01.162261   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:41:01.171236   47238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:41:01.184202   47238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
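
Both ssh clients connect to 127.0.0.1:32783, the host port Docker mapped to the container's 22/tcp; the cli_runner lines above resolve it with a `docker container inspect` Go template. The same lookup as a standalone sketch (profile name taken from this log; any minikube container name would do):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Identical template to the logged cli_runner call: index into
        // .NetworkSettings.Ports for "22/tcp" and take the first binding.
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "functional-016570").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
    }
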
	I0916 10:41:01.229756   47238 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:41:01.240354   47238 node_ready.go:35] waiting up to 6m0s for node "functional-016570" to be "Ready" ...
	I0916 10:41:01.240484   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:01.240493   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.240502   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.240509   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.247024   47238 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:01.247046   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.247055   47238 round_trippers.go:580]     Audit-Id: 573c33ef-95d1-46e9-86b3-8fb629398e97
	I0916 10:41:01.247060   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.247064   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.247068   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.247072   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.247075   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.247201   47238 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd
ate","apiVersion":"v1","time":"2024-09-16T10:40:35Z","fieldsType":"Fiel [truncated 5025 chars]
	I0916 10:41:01.248128   47238 node_ready.go:49] node "functional-016570" has status "Ready":"True"
	I0916 10:41:01.248151   47238 node_ready.go:38] duration metric: took 7.761447ms for node "functional-016570" to be "Ready" ...
	I0916 10:41:01.248162   47238 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:41:01.248237   47238 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:41:01.248254   47238 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:41:01.248339   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:41:01.248350   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.248359   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.248370   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.250922   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:01.250955   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.250964   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.250970   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.250975   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.250979   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.250984   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.250989   47238 round_trippers.go:580]     Audit-Id: daa1e234-c42a-44df-b468-9e9da5ebea7d
	I0916 10:41:01.251529   47238 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-59qm7","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"370e7aff-70ab-43f7-9770-098c21fd013d","resourceVersion":"412","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"053910d0-0f80-4b44-85c9-939b3882c87c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"053910d0-0f80-4b44-85c9-939b3882c87c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58828 chars]
	I0916 10:41:01.254797   47238 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-59qm7" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.254873   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-59qm7
	I0916 10:41:01.254880   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.254888   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.254891   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.256711   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.256725   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.256731   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.256735   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.256739   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.256743   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.256746   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.256749   47238 round_trippers.go:580]     Audit-Id: c03a4f98-a474-434d-b9f9-43ee485267ba
	I0916 10:41:01.256929   47238 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-59qm7","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"370e7aff-70ab-43f7-9770-098c21fd013d","resourceVersion":"412","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"053910d0-0f80-4b44-85c9-939b3882c87c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"053910d0-0f80-4b44-85c9-939b3882c87c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6481 chars]
	I0916 10:41:01.257355   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:01.257368   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.257375   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.257378   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.258961   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.258974   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.258983   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.258988   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.258992   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.258995   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.259001   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.259009   47238 round_trippers.go:580]     Audit-Id: 0a4b5378-dfba-4947-b468-629203127bee
	I0916 10:41:01.259148   47238 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd
ate","apiVersion":"v1","time":"2024-09-16T10:40:35Z","fieldsType":"Fiel [truncated 5025 chars]
	I0916 10:41:01.259402   47238 pod_ready.go:93] pod "coredns-7c65d6cfc9-59qm7" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:01.259415   47238 pod_ready.go:82] duration metric: took 4.598033ms for pod "coredns-7c65d6cfc9-59qm7" in "kube-system" namespace to be "Ready" ...
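
pod_ready declares a pod Ready by reading the PodReady condition off the object returned by the GET above. A hedged client-go equivalent of that check (the kubeconfig path and the standalone main are illustrative, not minikube's helper):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady fetches one pod and reports whether its PodReady
    // condition is True, as in the coredns check above.
    func podReady(client *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ok, err := podReady(client, "kube-system", "coredns-7c65d6cfc9-59qm7")
        fmt.Println("ready:", ok, "err:", err)
    }
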
	I0916 10:41:01.259424   47238 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.259474   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-016570
	I0916 10:41:01.259481   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.259488   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.259493   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.261210   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.261227   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.261234   47238 round_trippers.go:580]     Audit-Id: 1611216d-2481-42cf-9752-1a0d294e5c15
	I0916 10:41:01.261239   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.261244   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.261248   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.261253   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.261258   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.261416   47238 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-016570","namespace":"kube-system","uid":"54625714-0265-4ecf-a4d3-b4ff173d81e0","resourceVersion":"358","creationTimestamp":"2024-09-16T10:40:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9ff8ce834d4b88cb05c2ce6dadcabd95","kubernetes.io/config.mirror":"9ff8ce834d4b88cb05c2ce6dadcabd95","kubernetes.io/config.seen":"2024-09-16T10:40:37.769856809Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6445 chars]
	I0916 10:41:01.261747   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:01.261757   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.261764   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.261768   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.263323   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.263337   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.263344   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.263349   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.263355   47238 round_trippers.go:580]     Audit-Id: 5fa526da-e662-4c94-924e-0a78af0636c3
	I0916 10:41:01.263360   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.263364   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.263370   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.263516   47238 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd
ate","apiVersion":"v1","time":"2024-09-16T10:40:35Z","fieldsType":"Fiel [truncated 5025 chars]
	I0916 10:41:01.263851   47238 pod_ready.go:93] pod "etcd-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:01.263870   47238 pod_ready.go:82] duration metric: took 4.439077ms for pod "etcd-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.263890   47238 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.263943   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-016570
	I0916 10:41:01.263950   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.263958   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.263966   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.265569   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.265583   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.265595   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.265598   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.265601   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.265604   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.265609   47238 round_trippers.go:580]     Audit-Id: d527ce4b-44b5-4656-b997-2b47884049f0
	I0916 10:41:01.265615   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.265788   47238 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-016570","namespace":"kube-system","uid":"03b56925-37e8-4f4c-947d-8798a9b0b1e8","resourceVersion":"400","creationTimestamp":"2024-09-16T10:40:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"5333b7f22b4ca6fa3369f64c875d053e","kubernetes.io/config.mirror":"5333b7f22b4ca6fa3369f64c875d053e","kubernetes.io/config.seen":"2024-09-16T10:40:32.389487986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8521 chars]
	I0916 10:41:01.266213   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:01.266225   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.266232   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.266236   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.267912   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.267931   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.267940   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.267945   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.267949   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.267954   47238 round_trippers.go:580]     Audit-Id: 2ebfb541-85ed-47ba-b361-34f4f7a41c6d
	I0916 10:41:01.267959   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.267965   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.268150   47238 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd
ate","apiVersion":"v1","time":"2024-09-16T10:40:35Z","fieldsType":"Fiel [truncated 5025 chars]
	I0916 10:41:01.268537   47238 pod_ready.go:93] pod "kube-apiserver-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:01.268558   47238 pod_ready.go:82] duration metric: took 4.657492ms for pod "kube-apiserver-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.268574   47238 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.268648   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-016570
	I0916 10:41:01.268665   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.268673   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.268681   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.270284   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.270300   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.270305   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.270310   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.270313   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.270316   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.270321   47238 round_trippers.go:580]     Audit-Id: b341d17e-adf8-4c4e-947e-9221d021c5d2
	I0916 10:41:01.270326   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.270514   47238 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-016570","namespace":"kube-system","uid":"ab12e143-7f68-4f92-b30d-82299e1bf5a0","resourceVersion":"403","creationTimestamp":"2024-09-16T10:40:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"05bfea671b4b973ad25665da415eb7d0","kubernetes.io/config.mirror":"05bfea671b4b973ad25665da415eb7d0","kubernetes.io/config.seen":"2024-09-16T10:40:37.769863952Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8096 chars]
	I0916 10:41:01.271079   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:01.271102   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.271113   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.271122   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.272993   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.273008   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.273014   47238 round_trippers.go:580]     Audit-Id: c6c54786-65a9-405d-8a97-8f3f44e34a44
	I0916 10:41:01.273022   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.273118   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.273143   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.273155   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.273161   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.273300   47238 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd
ate","apiVersion":"v1","time":"2024-09-16T10:40:35Z","fieldsType":"Fiel [truncated 5025 chars]
	I0916 10:41:01.273655   47238 pod_ready.go:93] pod "kube-controller-manager-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:01.273676   47238 pod_ready.go:82] duration metric: took 5.090622ms for pod "kube-controller-manager-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.273691   47238 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w8qkq" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.282800   47238 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:41:01.288620   47238 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:41:01.440913   47238 request.go:632] Waited for 167.153595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-w8qkq
	I0916 10:41:01.440995   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-w8qkq
	I0916 10:41:01.441004   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.441014   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.441025   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.443074   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:01.443099   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.443109   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.443116   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.443121   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.443126   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.443132   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.443137   47238 round_trippers.go:580]     Audit-Id: 851c8443-a6c3-4499-8fc2-314db0590a15
	I0916 10:41:01.443297   47238 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w8qkq","generateName":"kube-proxy-","namespace":"kube-system","uid":"b4a00283-1d69-49c4-8c60-264ef3fd7aca","resourceVersion":"384","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f0cb66a7-d42d-4412-b093-c4474ecbce20","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f0cb66a7-d42d-4412-b093-c4474ecbce20\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6177 chars]
	I0916 10:41:01.595035   47238 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0916 10:41:01.608803   47238 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0916 10:41:01.624127   47238 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0916 10:41:01.638979   47238 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0916 10:41:01.641143   47238 request.go:632] Waited for 197.253294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:01.641200   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:01.641210   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.641238   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.641249   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.643109   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.643131   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.643141   47238 round_trippers.go:580]     Audit-Id: b1604cff-69e6-43ca-adbe-1d28c3526947
	I0916 10:41:01.643146   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.643152   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.643159   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.643163   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.643167   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.643268   47238 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd
ate","apiVersion":"v1","time":"2024-09-16T10:40:35Z","fieldsType":"Fiel [truncated 5025 chars]
	I0916 10:41:01.643567   47238 pod_ready.go:93] pod "kube-proxy-w8qkq" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:01.643594   47238 pod_ready.go:82] duration metric: took 369.893905ms for pod "kube-proxy-w8qkq" in "kube-system" namespace to be "Ready" ...
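[annotation] The "Waited for ... due to client-side throttling, not priority and fairness" messages in this run come from client-go's built-in client-side rate limiter, not from the API server. A minimal sketch of where those limits live when building a clientset; the QPS/Burst values below are illustrative assumptions, not minikube's actual settings:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with a raised client-side rate limit.
// client-go defaults to QPS=5 and Burst=10; requests beyond that budget are
// delayed locally, which is exactly what the "Waited for ..." lines report.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50   // illustrative; default 5 requests/second
	cfg.Burst = 100 // illustrative; default burst 10
	return kubernetes.NewForConfig(cfg)
}

The ~200ms waits seen above are consistent with the 5 QPS default: each extra request is deferred until the limiter releases a token.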
	I0916 10:41:01.643607   47238 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.707400   47238 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0916 10:41:01.782135   47238 command_runner.go:130] > pod/storage-provisioner configured
	I0916 10:41:01.785562   47238 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0916 10:41:01.785694   47238 round_trippers.go:463] GET https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses
	I0916 10:41:01.785704   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.785711   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.785715   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.787555   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.787572   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.787579   47238 round_trippers.go:580]     Audit-Id: e53fc3ce-0703-47b1-a0e8-0cfee38a1251
	I0916 10:41:01.787582   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.787586   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.787589   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.787592   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.787596   47238 round_trippers.go:580]     Content-Length: 1273
	I0916 10:41:01.787599   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.787637   47238 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"standard","uid":"3d18a656-2072-4784-925d-266b7e1a642f","resourceVersion":"348","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0916 10:41:01.788093   47238 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3d18a656-2072-4784-925d-266b7e1a642f","resourceVersion":"348","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:41:01.788147   47238 round_trippers.go:463] PUT https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:41:01.788160   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.788167   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.788175   47238 round_trippers.go:473]     Content-Type: application/json
	I0916 10:41:01.788179   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.790450   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:01.790471   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.790481   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.790486   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.790490   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.790495   47238 round_trippers.go:580]     Content-Length: 1220
	I0916 10:41:01.790500   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.790510   47238 round_trippers.go:580]     Audit-Id: 7e43b097-e60d-4d17-b590-402ea1f59308
	I0916 10:41:01.790515   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.790597   47238 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3d18a656-2072-4784-925d-266b7e1a642f","resourceVersion":"348","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:41:01.793395   47238 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 10:41:01.794759   47238 addons.go:510] duration metric: took 673.747105ms for enable addons: enabled=[storage-provisioner default-storageclass]
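[annotation] The GET followed by the PUT on /apis/storage.k8s.io/v1/storageclasses/standard above is a read-modify-write that keeps the "standard" class annotated as the default. A hedged sketch of the same request pair via client-go; the function name markDefault and the injected clientset are placeholders for illustration:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markDefault fetches a StorageClass and writes it back (an Update issues the
// PUT seen in the log) with the default-class annotation set.
func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}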
	I0916 10:41:01.840604   47238 request.go:632] Waited for 196.895921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-016570
	I0916 10:41:01.840704   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-016570
	I0916 10:41:01.840714   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.840721   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.840725   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.842770   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:01.842792   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.842798   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.842803   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.842807   47238 round_trippers.go:580]     Audit-Id: a7467f04-dae9-4af3-841d-2898bbf49041
	I0916 10:41:01.842810   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.842812   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.842817   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.842963   47238 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-016570","namespace":"kube-system","uid":"640affb4-aae3-401b-b06b-fd9e07a9b506","resourceVersion":"394","creationTimestamp":"2024-09-16T10:40:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5c4ebe83a62e176d48c858392b494ba5","kubernetes.io/config.mirror":"5c4ebe83a62e176d48c858392b494ba5","kubernetes.io/config.seen":"2024-09-16T10:40:37.769865268Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 4978 chars]
	I0916 10:41:02.040637   47238 request.go:632] Waited for 197.286296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:02.040710   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:02.040718   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:02.040725   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:02.040731   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:02.042599   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:02.042619   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:02.042629   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:02.042634   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:02 GMT
	I0916 10:41:02.042639   47238 round_trippers.go:580]     Audit-Id: f5eee0a1-fb0a-47b9-a62c-cf97733db188
	I0916 10:41:02.042643   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:02.042654   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:02.042660   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:02.042834   47238 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd
ate","apiVersion":"v1","time":"2024-09-16T10:40:35Z","fieldsType":"Fiel [truncated 5025 chars]
	I0916 10:41:02.043135   47238 pod_ready.go:93] pod "kube-scheduler-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:02.043150   47238 pod_ready.go:82] duration metric: took 399.536396ms for pod "kube-scheduler-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:02.043161   47238 pod_ready.go:39] duration metric: took 794.989376ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
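[annotation] The pod_ready loop traced above repeatedly GETs each control-plane pod and inspects its Ready condition until it reports True or the 6m0s budget expires. A minimal, self-contained sketch of that pattern with client-go, assuming the default kubeconfig location and reusing the etcd pod name from this run purely as an example:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the check behind the `has status "Ready":"True"` lines:
// it looks for the PodReady condition in the pod's status.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same 6m0s budget as the "waiting up to 6m0s" lines above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-functional-016570", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Printf("pod %q is Ready\n", pod.Name)
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}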
	I0916 10:41:02.043177   47238 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:41:02.043220   47238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:02.053563   47238 command_runner.go:130] > 1522
	I0916 10:41:02.054426   47238 api_server.go:72] duration metric: took 933.446089ms to wait for apiserver process to appear ...
	I0916 10:41:02.054446   47238 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:41:02.054468   47238 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:41:02.058667   47238 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0916 10:41:02.058770   47238 round_trippers.go:463] GET https://192.168.49.2:8441/version
	I0916 10:41:02.058784   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:02.058795   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:02.058802   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:02.059555   47238 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:41:02.059573   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:02.059581   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:02 GMT
	I0916 10:41:02.059587   47238 round_trippers.go:580]     Audit-Id: a04ccc0b-8020-427f-9226-d12e984081a1
	I0916 10:41:02.059591   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:02.059595   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:02.059600   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:02.059604   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:02.059609   47238 round_trippers.go:580]     Content-Length: 263
	I0916 10:41:02.059636   47238 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 10:41:02.059838   47238 api_server.go:141] control plane version: v1.31.1
	I0916 10:41:02.059877   47238 api_server.go:131] duration metric: took 5.42283ms to wait for apiserver health ...
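[annotation] The two probes above, GET /healthz (which returns the literal body "ok") and GET /version (parsed into the JSON shown), can both be issued through a clientset's discovery client. A sketch, assuming a clientset built as in the earlier examples:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkAPIServer mirrors the health and version probes from the log.
func checkAPIServer(ctx context.Context, cs kubernetes.Interface) error {
	// GET /healthz; the raw body is "ok" on a healthy apiserver.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
	if err != nil {
		return err
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version, decoded into a version.Info.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion) // e.g. v1.31.1
	return nil
}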
	I0916 10:41:02.059889   47238 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:41:02.241301   47238 request.go:632] Waited for 181.324964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:41:02.241384   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:41:02.241395   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:02.241403   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:02.241410   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:02.244245   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:02.244288   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:02.244296   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:02.244319   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:02 GMT
	I0916 10:41:02.244327   47238 round_trippers.go:580]     Audit-Id: 50c2d844-9140-4374-90b4-d0dbb29266f5
	I0916 10:41:02.244332   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:02.244341   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:02.244346   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:02.244973   47238 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-59qm7","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"370e7aff-70ab-43f7-9770-098c21fd013d","resourceVersion":"412","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"053910d0-0f80-4b44-85c9-939b3882c87c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"053910d0-0f80-4b44-85c9-939b3882c87c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58828 chars]
	I0916 10:41:02.247096   47238 system_pods.go:59] 8 kube-system pods found
	I0916 10:41:02.247125   47238 system_pods.go:61] "coredns-7c65d6cfc9-59qm7" [370e7aff-70ab-43f7-9770-098c21fd013d] Running
	I0916 10:41:02.247132   47238 system_pods.go:61] "etcd-functional-016570" [54625714-0265-4ecf-a4d3-b4ff173d81e0] Running
	I0916 10:41:02.247138   47238 system_pods.go:61] "kindnet-5qjpd" [8ee89403-0943-480c-9f48-4b25a0198f6d] Running
	I0916 10:41:02.247144   47238 system_pods.go:61] "kube-apiserver-functional-016570" [03b56925-37e8-4f4c-947d-8798a9b0b1e8] Running
	I0916 10:41:02.247151   47238 system_pods.go:61] "kube-controller-manager-functional-016570" [ab12e143-7f68-4f92-b30d-82299e1bf5a0] Running
	I0916 10:41:02.247159   47238 system_pods.go:61] "kube-proxy-w8qkq" [b4a00283-1d69-49c4-8c60-264ef3fd7aca] Running
	I0916 10:41:02.247165   47238 system_pods.go:61] "kube-scheduler-functional-016570" [640affb4-aae3-401b-b06b-fd9e07a9b506] Running
	I0916 10:41:02.247170   47238 system_pods.go:61] "storage-provisioner" [9924f10d-5beb-43b1-9782-44644a015b56] Running
	I0916 10:41:02.247179   47238 system_pods.go:74] duration metric: took 187.28208ms to wait for pod list to return data ...
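[annotation] The "8 kube-system pods found" summary is a plain namespace-scoped pod list followed by a per-pod status print. A sketch of the equivalent call (the function name listSystemPods is a placeholder):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listSystemPods reproduces the `N kube-system pods found` summary above.
func listSystemPods(ctx context.Context, cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// Phase is Running/Pending/etc., matching the log's per-pod lines.
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
	return nil
}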
	I0916 10:41:02.247192   47238 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:41:02.440577   47238 request.go:632] Waited for 193.257431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/namespaces/default/serviceaccounts
	I0916 10:41:02.440628   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/default/serviceaccounts
	I0916 10:41:02.440633   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:02.440640   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:02.440643   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:02.442723   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:02.442742   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:02.442751   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:02.442757   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:02.442761   47238 round_trippers.go:580]     Content-Length: 261
	I0916 10:41:02.442765   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:02 GMT
	I0916 10:41:02.442769   47238 round_trippers.go:580]     Audit-Id: f7882051-0b1c-4c4b-aea3-b2fdb85ddfd2
	I0916 10:41:02.442772   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:02.442777   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:02.442800   47238 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"fc2d74f0-4f11-4ddc-8fef-1bf15c992759","resourceVersion":"295","creationTimestamp":"2024-09-16T10:40:42Z"}}]}
	I0916 10:41:02.443025   47238 default_sa.go:45] found service account: "default"
	I0916 10:41:02.443042   47238 default_sa.go:55] duration metric: took 195.841126ms for default service account to be created ...
	I0916 10:41:02.443052   47238 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:41:02.641511   47238 request.go:632] Waited for 198.386372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:41:02.641588   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:41:02.641595   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:02.641606   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:02.641615   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:02.644165   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:02.644187   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:02.644196   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:02.644201   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:02.644207   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:02.644211   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:02 GMT
	I0916 10:41:02.644215   47238 round_trippers.go:580]     Audit-Id: 71bb75ab-57f2-47ad-8f1d-9bf3da582d3b
	I0916 10:41:02.644218   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:02.644981   47238 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-59qm7","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"370e7aff-70ab-43f7-9770-098c21fd013d","resourceVersion":"412","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"053910d0-0f80-4b44-85c9-939b3882c87c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"053910d0-0f80-4b44-85c9-939b3882c87c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58828 chars]
	I0916 10:41:02.646926   47238 system_pods.go:86] 8 kube-system pods found
	I0916 10:41:02.646951   47238 system_pods.go:89] "coredns-7c65d6cfc9-59qm7" [370e7aff-70ab-43f7-9770-098c21fd013d] Running
	I0916 10:41:02.646956   47238 system_pods.go:89] "etcd-functional-016570" [54625714-0265-4ecf-a4d3-b4ff173d81e0] Running
	I0916 10:41:02.646959   47238 system_pods.go:89] "kindnet-5qjpd" [8ee89403-0943-480c-9f48-4b25a0198f6d] Running
	I0916 10:41:02.646962   47238 system_pods.go:89] "kube-apiserver-functional-016570" [03b56925-37e8-4f4c-947d-8798a9b0b1e8] Running
	I0916 10:41:02.646966   47238 system_pods.go:89] "kube-controller-manager-functional-016570" [ab12e143-7f68-4f92-b30d-82299e1bf5a0] Running
	I0916 10:41:02.646969   47238 system_pods.go:89] "kube-proxy-w8qkq" [b4a00283-1d69-49c4-8c60-264ef3fd7aca] Running
	I0916 10:41:02.646972   47238 system_pods.go:89] "kube-scheduler-functional-016570" [640affb4-aae3-401b-b06b-fd9e07a9b506] Running
	I0916 10:41:02.646975   47238 system_pods.go:89] "storage-provisioner" [9924f10d-5beb-43b1-9782-44644a015b56] Running
	I0916 10:41:02.646981   47238 system_pods.go:126] duration metric: took 203.923632ms to wait for k8s-apps to be running ...
	I0916 10:41:02.646988   47238 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:41:02.647038   47238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:41:02.657958   47238 system_svc.go:56] duration metric: took 10.957701ms WaitForService to wait for kubelet
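[annotation] The kubelet check shells out to systemctl; with is-active --quiet the result is reported purely through the exit status. A sketch of the same probe from Go, assuming it runs directly on the node (minikube runs it over SSH with sudo, as the ssh_runner line shows):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet` exits 0 when the unit is active, so a
	// nil error here means kubelet is running.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}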
	I0916 10:41:02.657990   47238 kubeadm.go:582] duration metric: took 1.537018145s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:41:02.658006   47238 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:41:02.841406   47238 request.go:632] Waited for 183.309623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/nodes
	I0916 10:41:02.841457   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes
	I0916 10:41:02.841463   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:02.841470   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:02.841474   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:02.843828   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:02.843847   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:02.843856   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:02.843862   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:02.843868   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:02.843872   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:02 GMT
	I0916 10:41:02.843886   47238 round_trippers.go:580]     Audit-Id: 3b7b97f9-d0fa-455b-b176-2a3a870192bb
	I0916 10:41:02.843894   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:02.844020   47238 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"423"},"items":[{"metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma
nagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 5078 chars]
	I0916 10:41:02.844354   47238 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:41:02.844379   47238 node_conditions.go:123] node cpu capacity is 8
	I0916 10:41:02.844388   47238 node_conditions.go:105] duration metric: took 186.378223ms to run NodePressure ...
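[annotation] The NodePressure step reads each node's MemoryPressure/DiskPressure/PIDPressure conditions plus the capacity figures echoed above (304681132Ki ephemeral storage, 8 CPUs). A sketch of reading the same status fields (nodePressure is a placeholder name, reusing a clientset built as in the earlier examples):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodePressure lists nodes and prints their pressure conditions and the
// CPU / ephemeral-storage capacity the log reports.
func nodePressure(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("%s %s=%s\n", n.Name, c.Type, c.Status)
			}
		}
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		disk := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
	}
	return nil
}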
	I0916 10:41:02.844399   47238 start.go:241] waiting for startup goroutines ...
	I0916 10:41:02.844408   47238 start.go:246] waiting for cluster config update ...
	I0916 10:41:02.844423   47238 start.go:255] writing updated cluster config ...
	I0916 10:41:02.844666   47238 ssh_runner.go:195] Run: rm -f paused
	I0916 10:41:02.850472   47238 out.go:177] * Done! kubectl is now configured to use "functional-016570" cluster and "default" namespace by default
	E0916 10:41:02.851928   47238 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd0c81e7a39a2       c69fa2e9cbf5f       8 seconds ago       Running             coredns                   0                   3d9a434f8b6e5       coredns-7c65d6cfc9-59qm7
	03ddfa3f2cafc       6e38f40d628db       19 seconds ago      Running             storage-provisioner       0                   b81ffde02718d       storage-provisioner
	bf96dac81b725       12968670680f4       19 seconds ago      Running             kindnet-cni               0                   c7f56f796b013       kindnet-5qjpd
	80095e847084f       60c005f310ff3       20 seconds ago      Running             kube-proxy                0                   f4ed79f8dffeb       kube-proxy-w8qkq
	0062114d9f75f       175ffd71cce3d       30 seconds ago      Running             kube-controller-manager   0                   8b5d374851050       kube-controller-manager-functional-016570
	0906c5e415b9c       9aa1fad941575       30 seconds ago      Running             kube-scheduler            0                   caa2007696d1b       kube-scheduler-functional-016570
	c1a0361849f33       6bab7719df100       30 seconds ago      Running             kube-apiserver            0                   5b66d77e8e334       kube-apiserver-functional-016570
	b4905826c508e       2e96e5913fc06       30 seconds ago      Running             etcd                      0                   2cdebcb8c7807       etcd-functional-016570
	
	
	==> containerd <==
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.198876249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.198891975Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.198961660Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.198985894Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199001899Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199018793Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199032443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199050407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199066137Z" level=info msg="NRI interface is disabled by configuration."
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199082239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199478661Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRunti
meSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.10 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissing
HugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:true EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199966451Z" level=info msg="Connect containerd service"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.200125138Z" level=info msg="using legacy CRI server"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.200198751Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.200469956Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.201275863Z" level=info msg="Start subscribing containerd event"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.201345401Z" level=info msg="Start recovering state"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.201380943Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.201438744Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.259828531Z" level=info msg="Start event monitor"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.259866517Z" level=info msg="Start snapshots syncer"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.259881223Z" level=info msg="Start cni network conf syncer for default"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.259891423Z" level=info msg="Start streaming server"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.259985670Z" level=info msg="containerd successfully booted in 0.197059s"
	Sep 16 10:41:00 functional-016570 systemd[1]: Started containerd container runtime.
	
	
	==> coredns [fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44994 - 49719 "HINFO IN 5811560446017322614.7472127089541594346. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008166038s
	
	
	==> describe nodes <==
	Name:               functional-016570
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-016570
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-016570
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_40_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:40:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-016570
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:40:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:40:48 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:40:48 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:40:48 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:40:48 +0000   Mon, 16 Sep 2024 10:40:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-016570
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e0bbc72eb56431496705823b26dcd4d
	  System UUID:                9e1cbc84-d044-4524-b143-ca93a6dc5aa0
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-59qm7                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     20s
	  kube-system                 etcd-functional-016570                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25s
	  kube-system                 kindnet-5qjpd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20s
	  kube-system                 kube-apiserver-functional-016570             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-functional-016570    200m (2%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-proxy-w8qkq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-scheduler-functional-016570             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 19s   kube-proxy       
	  Normal   Starting                 26s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 26s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  26s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  25s   kubelet          Node functional-016570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    25s   kubelet          Node functional-016570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     25s   kubelet          Node functional-016570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           21s   node-controller  Node functional-016570 event: Registered Node functional-016570 in Controller
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25] <==
	{"level":"info","ts":"2024-09-16T10:40:33.262701Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:40:33.262867Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:40:33.262937Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:40:33.263038Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:40:33.263071Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:40:33.951012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T10:40:33.951071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T10:40:33.951103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-16T10:40:33.951131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.951143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.951156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.951169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.952133Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-016570 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:40:33.952149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952174Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.952458Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952538Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952886Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953028Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953056Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953277Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.954142Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.955433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:40:33.956074Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 10:41:03 up 23 min,  0 users,  load average: 1.91, 0.92, 0.56
	Linux functional-016570 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75] <==
	I0916 10:40:44.322865       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:40:44.323089       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:40:44.323250       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:40:44.323270       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:40:44.323292       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:40:44.644895       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:40:44.644910       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:40:44.644915       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:40:45.019853       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:40:45.019909       1 metrics.go:61] Registering metrics
	I0916 10:40:45.019969       1 controller.go:374] Syncing nftables rules
	I0916 10:40:54.644919       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:40:54.644955       1 main.go:299] handling current node
	
	
	==> kube-apiserver [c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171] <==
	I0916 10:40:35.620322       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:40:35.621313       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:40:35.621352       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:40:35.621382       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:40:35.621425       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:40:35.621453       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:40:35.621585       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:40:35.622013       1 controller.go:615] quota admission added evaluator for: namespaces
	E0916 10:40:35.627641       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0916 10:40:35.831014       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:40:36.457455       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 10:40:36.461248       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 10:40:36.461271       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:40:37.010774       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:40:37.051271       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:40:37.130025       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 10:40:37.137482       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0916 10:40:37.138581       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:40:37.143484       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:40:37.469770       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:40:37.933347       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:40:37.944494       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:40:37.955455       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:40:43.026924       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 10:40:43.229838       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86] <==
	I0916 10:40:42.326198       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:40:42.346002       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-016570"
	I0916 10:40:42.370119       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:40:42.376598       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:40:42.409847       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 10:40:42.419622       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:40:42.420786       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:40:42.842030       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:40:42.920000       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:40:42.920038       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:40:43.141796       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-016570"
	I0916 10:40:43.434546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="197.686254ms"
	I0916 10:40:43.442835       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.220985ms"
	I0916 10:40:43.442928       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.644µs"
	I0916 10:40:43.525949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.647µs"
	I0916 10:40:43.923991       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="79.808124ms"
	I0916 10:40:43.931522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.483524ms"
	I0916 10:40:43.931666       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="96.772µs"
	I0916 10:40:44.880006       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="75.649µs"
	I0916 10:40:44.885322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.511µs"
	I0916 10:40:44.888055       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.771µs"
	I0916 10:40:48.140209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-016570"
	I0916 10:40:55.879590       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.161µs"
	I0916 10:40:55.896034       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.235562ms"
	I0916 10:40:55.896140       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.028µs"
	
	
	==> kube-proxy [80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f] <==
	I0916 10:40:44.050300       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:40:44.179052       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:40:44.179113       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:40:44.196854       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:40:44.196926       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:40:44.198841       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:40:44.199284       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:40:44.199313       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:40:44.200417       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:40:44.200434       1 config.go:199] "Starting service config controller"
	I0916 10:40:44.200460       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:40:44.200461       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:40:44.200579       1 config.go:328] "Starting node config controller"
	I0916 10:40:44.200592       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:40:44.300956       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:40:44.300944       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:40:44.300945       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee] <==
	E0916 10:40:35.625697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:40:35.625947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.625923       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:35.625982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:40:35.626272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:40:35.627185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.485194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:40:36.485283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.534754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.534802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.553649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:40:36.553693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.739511       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:40:36.739572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.763525       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:40:36.763583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.768041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:40:36.768094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.782504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.782564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.919306       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:40:36.919357       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:40:39.247701       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:40:43 functional-016570 kubelet[1610]: I0916 10:40:43.430896    1610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drftz\" (UniqueName: \"kubernetes.io/projected/370e7aff-70ab-43f7-9770-098c21fd013d-kube-api-access-drftz\") pod \"coredns-7c65d6cfc9-59qm7\" (UID: \"370e7aff-70ab-43f7-9770-098c21fd013d\") " pod="kube-system/coredns-7c65d6cfc9-59qm7"
	Sep 16 10:40:43 functional-016570 kubelet[1610]: I0916 10:40:43.430929    1610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9911055-0a8b-4dea-9377-95c0203b4a4f-config-volume\") pod \"coredns-7c65d6cfc9-rwqzc\" (UID: \"c9911055-0a8b-4dea-9377-95c0203b4a4f\") " pod="kube-system/coredns-7c65d6cfc9-rwqzc"
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.830271    1610 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\": failed to find network info for sandbox \"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\""
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.830382    1610 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\": failed to find network info for sandbox \"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\"" pod="kube-system/coredns-7c65d6cfc9-rwqzc"
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.830611    1610 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\": failed to find network info for sandbox \"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\"" pod="kube-system/coredns-7c65d6cfc9-rwqzc"
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.830694    1610 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rwqzc_kube-system(c9911055-0a8b-4dea-9377-95c0203b4a4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rwqzc_kube-system(c9911055-0a8b-4dea-9377-95c0203b4a4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\\\": failed to find network info for sandbox \\\"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\\\"\"" pod="kube-system/coredns-7c65d6cfc9-rwqzc" podUID="c9911055-0a8b-4dea-9377-95c0203b4a4f"
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.841916    1610 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\": failed to find network info for sandbox \"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\""
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.841966    1610 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\": failed to find network info for sandbox \"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\"" pod="kube-system/coredns-7c65d6cfc9-59qm7"
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.841987    1610 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\": failed to find network info for sandbox \"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\"" pod="kube-system/coredns-7c65d6cfc9-59qm7"
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.842036    1610 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-59qm7_kube-system(370e7aff-70ab-43f7-9770-098c21fd013d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-59qm7_kube-system(370e7aff-70ab-43f7-9770-098c21fd013d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\\\": failed to find network info for sandbox \\\"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\\\"\"" pod="kube-system/coredns-7c65d6cfc9-59qm7" podUID="370e7aff-70ab-43f7-9770-098c21fd013d"
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.036355    1610 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9911055-0a8b-4dea-9377-95c0203b4a4f-config-volume\") pod \"c9911055-0a8b-4dea-9377-95c0203b4a4f\" (UID: \"c9911055-0a8b-4dea-9377-95c0203b4a4f\") "
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.036413    1610 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8c54\" (UniqueName: \"kubernetes.io/projected/c9911055-0a8b-4dea-9377-95c0203b4a4f-kube-api-access-r8c54\") pod \"c9911055-0a8b-4dea-9377-95c0203b4a4f\" (UID: \"c9911055-0a8b-4dea-9377-95c0203b4a4f\") "
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.036813    1610 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9911055-0a8b-4dea-9377-95c0203b4a4f-config-volume" (OuterVolumeSpecName: "config-volume") pod "c9911055-0a8b-4dea-9377-95c0203b4a4f" (UID: "c9911055-0a8b-4dea-9377-95c0203b4a4f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.039236    1610 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9911055-0a8b-4dea-9377-95c0203b4a4f-kube-api-access-r8c54" (OuterVolumeSpecName: "kube-api-access-r8c54") pod "c9911055-0a8b-4dea-9377-95c0203b4a4f" (UID: "c9911055-0a8b-4dea-9377-95c0203b4a4f"). InnerVolumeSpecName "kube-api-access-r8c54". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.137201    1610 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9911055-0a8b-4dea-9377-95c0203b4a4f-config-volume\") on node \"functional-016570\" DevicePath \"\""
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.137238    1610 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r8c54\" (UniqueName: \"kubernetes.io/projected/c9911055-0a8b-4dea-9377-95c0203b4a4f-kube-api-access-r8c54\") on node \"functional-016570\" DevicePath \"\""
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.338572    1610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvpkq\" (UniqueName: \"kubernetes.io/projected/9924f10d-5beb-43b1-9782-44644a015b56-kube-api-access-bvpkq\") pod \"storage-provisioner\" (UID: \"9924f10d-5beb-43b1-9782-44644a015b56\") " pod="kube-system/storage-provisioner"
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.338623    1610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9924f10d-5beb-43b1-9782-44644a015b56-tmp\") pod \"storage-provisioner\" (UID: \"9924f10d-5beb-43b1-9782-44644a015b56\") " pod="kube-system/storage-provisioner"
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.851111    1610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=0.851088151 podStartE2EDuration="851.088151ms" podCreationTimestamp="2024-09-16 10:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:40:44.850855071 +0000 UTC m=+7.148175382" watchObservedRunningTime="2024-09-16 10:40:44.851088151 +0000 UTC m=+7.148408464"
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.859288    1610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w8qkq" podStartSLOduration=1.859265894 podStartE2EDuration="1.859265894s" podCreationTimestamp="2024-09-16 10:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:40:44.858994711 +0000 UTC m=+7.156315022" watchObservedRunningTime="2024-09-16 10:40:44.859265894 +0000 UTC m=+7.156586204"
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.880239    1610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5qjpd" podStartSLOduration=1.880217383 podStartE2EDuration="1.880217383s" podCreationTimestamp="2024-09-16 10:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:40:44.869563495 +0000 UTC m=+7.166883786" watchObservedRunningTime="2024-09-16 10:40:44.880217383 +0000 UTC m=+7.177537693"
	Sep 16 10:40:45 functional-016570 kubelet[1610]: I0916 10:40:45.780903    1610 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9911055-0a8b-4dea-9377-95c0203b4a4f" path="/var/lib/kubelet/pods/c9911055-0a8b-4dea-9377-95c0203b4a4f/volumes"
	Sep 16 10:40:48 functional-016570 kubelet[1610]: I0916 10:40:48.117509    1610 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:40:48 functional-016570 kubelet[1610]: I0916 10:40:48.118369    1610 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:40:55 functional-016570 kubelet[1610]: I0916 10:40:55.889287    1610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-59qm7" podStartSLOduration=12.889266902 podStartE2EDuration="12.889266902s" podCreationTimestamp="2024-09-16 10:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:40:55.878958676 +0000 UTC m=+18.176278988" watchObservedRunningTime="2024-09-16 10:40:55.889266902 +0000 UTC m=+18.186587203"
	
	
	==> storage-provisioner [03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267] <==
	I0916 10:40:44.592162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:40:44.598921       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:40:44.598961       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:40:44.605075       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:40:44.605207       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-016570_34b5a6aa-cad3-4b7c-8e2b-f70c513bb4eb!
	I0916 10:40:44.605217       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e3e2c42-8555-41e5-b1cf-7a6ddf78f6d7", APIVersion:"v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-016570_34b5a6aa-cad3-4b7c-8e2b-f70c513bb4eb became leader
	I0916 10:40:44.706132       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-016570_34b5a6aa-cad3-4b7c-8e2b-f70c513bb4eb!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016570 -n functional-016570
helpers_test.go:261: (dbg) Run:  kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (536.654µs)
helpers_test.go:263: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/KubeContext (1.73s)
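Note on the failure mode: "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel rejected the kubectl binary at the exec(2) call itself, before any of its code ran; on this amd64 host that usually points at a kubectl build for a different CPU architecture or a truncated/corrupt download, and the sub-millisecond exit times above are consistent with the process never starting. A minimal check (a sketch, assuming shell access to the CI host; the path is taken from the error message):

	file /usr/local/bin/kubectl                  # should report an ELF 64-bit x86-64 executable on this host
	uname -m                                     # host architecture the binary must match (x86_64 here)
	od -A x -t x1z -N 4 /usr/local/bin/kubectl   # a valid ELF binary begins with bytes 7f 45 4c 46

Both failures in this part of the run (KubeContext here and KubectlGetPods below) hit the identical error, so a single bad binary would explain both.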

TestFunctional/serial/KubectlGetPods (1.74s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-016570 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-016570 get po -A: fork/exec /usr/local/bin/kubectl: exec format error (344.739µs)
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-016570 get po -A" : fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-016570 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-016570
helpers_test.go:235: (dbg) docker inspect functional-016570:

-- stdout --
	[
	    {
	        "Id": "389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b",
	        "Created": "2024-09-16T10:40:22.437729239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44343,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:40:22.549860744Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hosts",
	        "LogPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b-json.log",
	        "Name": "/functional-016570",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-016570:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-016570",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-016570",
	                "Source": "/var/lib/docker/volumes/functional-016570/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-016570",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-016570",
	                "name.minikube.sigs.k8s.io": "functional-016570",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cea48287f6ef85c384c2d0fd7e890db0e03a4817385b96024c8e0a0688fd7962",
	            "SandboxKey": "/var/run/docker/netns/cea48287f6ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-016570": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0f0536305092a99e49a50f7adb5b668b5d3bb1c3c21d82e43be0c155c4e7cfe5",
	                    "EndpointID": "9201dc1ccce436b0c1f5c3cef087a9b08f43da4627ab9f3717ae9bbdfb5de9f3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-016570",
	                        "389fc216adb5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
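From the inspect output above, the kube-apiserver port 8441/tcp is published on 127.0.0.1:32786. When only that mapping is needed, docker inspect's standard Go-template support can extract it directly instead of dumping the whole document (a sketch; the container name comes from the output above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-016570
	# prints just the host port (32786 in this run) rather than the full JSON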
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-016570 -n functional-016570
helpers_test.go:244: <<< TestFunctional/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 logs -n 25: (1.078135635s)
helpers_test.go:252: TestFunctional/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	|------------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command   |              Args              |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|------------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| addons     | addons-191972 addons           | addons-191972     | jenkins | v1.34.0 | 16 Sep 24 10:38 UTC | 16 Sep 24 10:38 UTC |
	|            | disable metrics-server         |                   |         |         |                     |                     |
	|            | --alsologtostderr -v=1         |                   |         |         |                     |                     |
	| stop       | -p addons-191972               | addons-191972     | jenkins | v1.34.0 | 16 Sep 24 10:38 UTC | 16 Sep 24 10:38 UTC |
	| addons     | enable dashboard -p            | addons-191972     | jenkins | v1.34.0 | 16 Sep 24 10:38 UTC | 16 Sep 24 10:38 UTC |
	|            | addons-191972                  |                   |         |         |                     |                     |
	| addons     | disable dashboard -p           | addons-191972     | jenkins | v1.34.0 | 16 Sep 24 10:38 UTC | 16 Sep 24 10:38 UTC |
	|            | addons-191972                  |                   |         |         |                     |                     |
	| addons     | disable gvisor -p              | addons-191972     | jenkins | v1.34.0 | 16 Sep 24 10:38 UTC | 16 Sep 24 10:38 UTC |
	|            | addons-191972                  |                   |         |         |                     |                     |
	| delete     | -p addons-191972               | addons-191972     | jenkins | v1.34.0 | 16 Sep 24 10:38 UTC | 16 Sep 24 10:39 UTC |
	| start      | -p dockerenv-042187            | dockerenv-042187  | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|            | --driver=docker                |                   |         |         |                     |                     |
	|            | --container-runtime=containerd |                   |         |         |                     |                     |
	| docker-env | --ssh-host --ssh-add -p        | dockerenv-042187  | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	|            | dockerenv-042187               |                   |         |         |                     |                     |
	| delete     | -p dockerenv-042187            | dockerenv-042187  | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:39 UTC |
	| start      | -p nospam-421019 -n=1          | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:39 UTC | 16 Sep 24 10:40 UTC |
	|            | --memory=2250 --wait=false     |                   |         |         |                     |                     |
	|            | --log_dir=/tmp/nospam-421019   |                   |         |         |                     |                     |
	|            | --driver=docker                |                   |         |         |                     |                     |
	|            | --container-runtime=containerd |                   |         |         |                     |                     |
	| start      | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC |                     |
	|            | /tmp/nospam-421019 start       |                   |         |         |                     |                     |
	|            | --dry-run                      |                   |         |         |                     |                     |
	| start      | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC |                     |
	|            | /tmp/nospam-421019 start       |                   |         |         |                     |                     |
	|            | --dry-run                      |                   |         |         |                     |                     |
	| start      | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC |                     |
	|            | /tmp/nospam-421019 start       |                   |         |         |                     |                     |
	|            | --dry-run                      |                   |         |         |                     |                     |
	| pause      | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 pause       |                   |         |         |                     |                     |
	| pause      | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 pause       |                   |         |         |                     |                     |
	| pause      | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 pause       |                   |         |         |                     |                     |
	| unpause    | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 unpause     |                   |         |         |                     |                     |
	| unpause    | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 unpause     |                   |         |         |                     |                     |
	| unpause    | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 unpause     |                   |         |         |                     |                     |
	| stop       | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 stop        |                   |         |         |                     |                     |
	| stop       | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 stop        |                   |         |         |                     |                     |
	| stop       | nospam-421019 --log_dir        | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | /tmp/nospam-421019 stop        |                   |         |         |                     |                     |
	| delete     | -p nospam-421019               | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	| start      | -p functional-016570           | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|            | --memory=4000                  |                   |         |         |                     |                     |
	|            | --apiserver-port=8441          |                   |         |         |                     |                     |
	|            | --wait=all --driver=docker     |                   |         |         |                     |                     |
	|            | --container-runtime=containerd |                   |         |         |                     |                     |
	| start      | -p functional-016570           | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:41 UTC |
	|            | --alsologtostderr -v=8         |                   |         |         |                     |                     |
	|------------|--------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:40:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:40:57.801064   47238 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:40:57.801186   47238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:57.801195   47238 out.go:358] Setting ErrFile to fd 2...
	I0916 10:40:57.801199   47238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:40:57.801394   47238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:40:57.801941   47238 out.go:352] Setting JSON to false
	I0916 10:40:57.802953   47238 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1402,"bootTime":1726481856,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:40:57.803057   47238 start.go:139] virtualization: kvm guest
	I0916 10:40:57.805403   47238 out.go:177] * [functional-016570] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:40:57.806637   47238 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:40:57.806666   47238 notify.go:220] Checking for updates...
	I0916 10:40:57.809259   47238 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:40:57.810762   47238 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:40:57.812069   47238 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:40:57.813514   47238 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:40:57.815019   47238 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:40:57.816973   47238 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:40:57.817077   47238 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:40:57.839585   47238 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:40:57.839718   47238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:40:57.886950   47238 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 10:40:57.877803547 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:40:57.887149   47238 docker.go:318] overlay module found
	I0916 10:40:57.890164   47238 out.go:177] * Using the docker driver based on existing profile
	I0916 10:40:57.891300   47238 start.go:297] selected driver: docker
	I0916 10:40:57.891312   47238 start.go:901] validating driver "docker" against &{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:40:57.891411   47238 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:40:57.891484   47238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:40:57.938679   47238 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-16 10:40:57.929626713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:40:57.939314   47238 cni.go:84] Creating CNI manager for ""
	I0916 10:40:57.939371   47238 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:40:57.939433   47238 start.go:340] cluster config:
	{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:40:57.941267   47238 out.go:177] * Starting "functional-016570" primary control-plane node in "functional-016570" cluster
	I0916 10:40:57.942673   47238 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:40:57.943952   47238 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:40:57.945225   47238 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:40:57.945275   47238 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:40:57.945284   47238 cache.go:56] Caching tarball of preloaded images
	I0916 10:40:57.945345   47238 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:40:57.945363   47238 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:40:57.945371   47238 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:40:57.945475   47238 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/config.json ...
	W0916 10:40:57.966210   47238 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:40:57.966232   47238 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:40:57.966331   47238 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:40:57.966350   47238 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:40:57.966359   47238 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:40:57.966370   47238 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:40:57.966381   47238 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:40:57.967807   47238 image.go:273] response: 
	I0916 10:40:58.024435   47238 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:40:58.024502   47238 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:40:58.024538   47238 start.go:360] acquireMachinesLock for functional-016570: {Name:mkd69bbb7ce10518607df066fca58f5ba9fc9f29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:40:58.024634   47238 start.go:364] duration metric: took 58.863µs to acquireMachinesLock for "functional-016570"
	I0916 10:40:58.024659   47238 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:40:58.024672   47238 fix.go:54] fixHost starting: 
	I0916 10:40:58.024900   47238 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
	I0916 10:40:58.041479   47238 fix.go:112] recreateIfNeeded on functional-016570: state=Running err=<nil>
	W0916 10:40:58.041524   47238 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:40:58.043986   47238 out.go:177] * Updating the running docker "functional-016570" container ...
	I0916 10:40:58.045535   47238 machine.go:93] provisionDockerMachine start ...
	I0916 10:40:58.045628   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:58.064374   47238 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:58.064598   47238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:40:58.064611   47238 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:40:58.195127   47238 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016570
	
	I0916 10:40:58.195154   47238 ubuntu.go:169] provisioning hostname "functional-016570"
	I0916 10:40:58.195228   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:58.213677   47238 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:58.213872   47238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:40:58.213890   47238 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-016570 && echo "functional-016570" | sudo tee /etc/hostname
	I0916 10:40:58.358080   47238 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016570
	
	I0916 10:40:58.358159   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:58.375262   47238 main.go:141] libmachine: Using SSH client type: native
	I0916 10:40:58.375442   47238 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:40:58.375459   47238 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-016570' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-016570/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-016570' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:40:58.508210   47238 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:40:58.508244   47238 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:40:58.508301   47238 ubuntu.go:177] setting up certificates
	I0916 10:40:58.508311   47238 provision.go:84] configureAuth start
	I0916 10:40:58.508363   47238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-016570
	I0916 10:40:58.526211   47238 provision.go:143] copyHostCerts
	I0916 10:40:58.526258   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:40:58.526285   47238 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:40:58.526293   47238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:40:58.526345   47238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:40:58.526433   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:40:58.526451   47238 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:40:58.526455   47238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:40:58.526473   47238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:40:58.526526   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:40:58.526542   47238 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:40:58.526545   47238 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:40:58.526562   47238 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:40:58.526633   47238 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.functional-016570 san=[127.0.0.1 192.168.49.2 functional-016570 localhost minikube]
	I0916 10:40:58.629211   47238 provision.go:177] copyRemoteCerts
	I0916 10:40:58.629273   47238 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:40:58.629309   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:58.646436   47238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:40:58.740118   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:40:58.740185   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:40:58.762366   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:40:58.762424   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:40:58.785467   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:40:58.785521   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 10:40:58.807085   47238 provision.go:87] duration metric: took 298.760725ms to configureAuth
	I0916 10:40:58.807111   47238 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:40:58.807264   47238 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:40:58.807275   47238 machine.go:96] duration metric: took 761.722306ms to provisionDockerMachine
	I0916 10:40:58.807282   47238 start.go:293] postStartSetup for "functional-016570" (driver="docker")
	I0916 10:40:58.807291   47238 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:40:58.807342   47238 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:40:58.807375   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:58.824359   47238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:40:58.920537   47238 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:40:58.924164   47238 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:40:58.924189   47238 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:40:58.924197   47238 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:40:58.924202   47238 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:40:58.924207   47238 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:40:58.924211   47238 command_runner.go:130] > ID=ubuntu
	I0916 10:40:58.924215   47238 command_runner.go:130] > ID_LIKE=debian
	I0916 10:40:58.924219   47238 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:40:58.924223   47238 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:40:58.924231   47238 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:40:58.924242   47238 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:40:58.924252   47238 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:40:58.924323   47238 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:40:58.924352   47238 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:40:58.924363   47238 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:40:58.924370   47238 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:40:58.924381   47238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:40:58.924428   47238 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:40:58.924494   47238 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:40:58.924503   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:40:58.924564   47238 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/test/nested/copy/11189/hosts -> hosts in /etc/test/nested/copy/11189
	I0916 10:40:58.924571   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/test/nested/copy/11189/hosts -> /etc/test/nested/copy/11189/hosts
	I0916 10:40:58.924603   47238 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11189
	I0916 10:40:58.932694   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:40:58.955302   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/test/nested/copy/11189/hosts --> /etc/test/nested/copy/11189/hosts (40 bytes)
	I0916 10:40:58.978430   47238 start.go:296] duration metric: took 171.129668ms for postStartSetup
	I0916 10:40:58.978509   47238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:40:58.978603   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:58.996303   47238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:40:59.088555   47238 command_runner.go:130] > 31%
	I0916 10:40:59.088652   47238 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:40:59.092936   47238 command_runner.go:130] > 203G
	I0916 10:40:59.093107   47238 fix.go:56] duration metric: took 1.068428938s for fixHost
	I0916 10:40:59.093132   47238 start.go:83] releasing machines lock for "functional-016570", held for 1.068483187s
	I0916 10:40:59.093197   47238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-016570
	I0916 10:40:59.110531   47238 ssh_runner.go:195] Run: cat /version.json
	I0916 10:40:59.110583   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:59.110622   47238 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:40:59.110674   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:40:59.130492   47238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:40:59.130489   47238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:40:59.219785   47238 command_runner.go:130] > {"iso_version": "v1.34.0-1726281733-19643", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "f890713149c79cf50e25c13e6a5c0470aa0f0450"}
	I0916 10:40:59.295890   47238 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:40:59.298245   47238 ssh_runner.go:195] Run: systemctl --version
	I0916 10:40:59.302144   47238 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0916 10:40:59.302179   47238 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0916 10:40:59.302248   47238 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:40:59.305912   47238 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0916 10:40:59.305935   47238 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:40:59.305945   47238 command_runner.go:130] > Device: 35h/53d	Inode: 557469      Links: 1
	I0916 10:40:59.305955   47238 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:40:59.305966   47238 command_runner.go:130] > Access: 2024-09-16 10:40:27.358300495 +0000
	I0916 10:40:59.305970   47238 command_runner.go:130] > Modify: 2024-09-16 10:40:27.330298025 +0000
	I0916 10:40:59.305975   47238 command_runner.go:130] > Change: 2024-09-16 10:40:27.330298025 +0000
	I0916 10:40:59.305980   47238 command_runner.go:130] >  Birth: 2024-09-16 10:40:27.330298025 +0000
	I0916 10:40:59.306236   47238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:40:59.322521   47238 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:40:59.322585   47238 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:40:59.330495   47238 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:40:59.330513   47238 start.go:495] detecting cgroup driver to use...
	I0916 10:40:59.330547   47238 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:40:59.330596   47238 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:40:59.341280   47238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:40:59.351393   47238 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:40:59.351465   47238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:40:59.363231   47238 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:40:59.373985   47238 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:40:59.466615   47238 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:40:59.561571   47238 docker.go:233] disabling docker service ...
	I0916 10:40:59.561641   47238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:40:59.573455   47238 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:40:59.583564   47238 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:40:59.677173   47238 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:40:59.773169   47238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:40:59.784178   47238 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:40:59.798434   47238 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0916 10:40:59.799426   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:40:59.808973   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:40:59.818252   47238 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:40:59.818311   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:40:59.827654   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:40:59.836704   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:40:59.845834   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:40:59.855319   47238 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:40:59.864271   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:40:59.873845   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:40:59.883580   47238 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:40:59.893035   47238 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:40:59.900923   47238 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:40:59.900991   47238 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:40:59.909078   47238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:00.006315   47238 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:41:00.261815   47238 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:41:00.261911   47238 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:41:00.265423   47238 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0916 10:41:00.265443   47238 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:41:00.265449   47238 command_runner.go:130] > Device: 41h/65d	Inode: 599         Links: 1
	I0916 10:41:00.265456   47238 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:41:00.265462   47238 command_runner.go:130] > Access: 2024-09-16 10:41:00.197195840 +0000
	I0916 10:41:00.265466   47238 command_runner.go:130] > Modify: 2024-09-16 10:41:00.197195840 +0000
	I0916 10:41:00.265472   47238 command_runner.go:130] > Change: 2024-09-16 10:41:00.197195840 +0000
	I0916 10:41:00.265476   47238 command_runner.go:130] >  Birth: -
	I0916 10:41:00.265499   47238 start.go:563] Will wait 60s for crictl version
	I0916 10:41:00.265532   47238 ssh_runner.go:195] Run: which crictl
	I0916 10:41:00.268687   47238 command_runner.go:130] > /usr/bin/crictl
	I0916 10:41:00.268830   47238 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:41:00.297784   47238 command_runner.go:130] > Version:  0.1.0
	I0916 10:41:00.297805   47238 command_runner.go:130] > RuntimeName:  containerd
	I0916 10:41:00.297814   47238 command_runner.go:130] > RuntimeVersion:  1.7.22
	I0916 10:41:00.297819   47238 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:41:00.299901   47238 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:41:00.299951   47238 ssh_runner.go:195] Run: containerd --version
	I0916 10:41:00.320188   47238 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:41:00.320265   47238 ssh_runner.go:195] Run: containerd --version
	I0916 10:41:00.339092   47238 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:41:00.343304   47238 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:41:00.344502   47238 cli_runner.go:164] Run: docker network inspect functional-016570 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:41:00.361653   47238 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:41:00.365244   47238 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I0916 10:41:00.365343   47238 kubeadm.go:883] updating cluster {Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:41:00.365442   47238 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:41:00.365483   47238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:41:00.395218   47238 command_runner.go:130] > {
	I0916 10:41:00.395243   47238 command_runner.go:130] >   "images": [
	I0916 10:41:00.395252   47238 command_runner.go:130] >     {
	I0916 10:41:00.395263   47238 command_runner.go:130] >       "id": "sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:41:00.395272   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.395281   47238 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:41:00.395287   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395295   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.395309   47238 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:41:00.395319   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395324   47238 command_runner.go:130] >       "size": "36793393",
	I0916 10:41:00.395331   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.395342   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.395350   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.395360   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.395367   47238 command_runner.go:130] >     },
	I0916 10:41:00.395374   47238 command_runner.go:130] >     {
	I0916 10:41:00.395390   47238 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:41:00.395400   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.395410   47238 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:41:00.395419   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395426   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.395443   47238 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0916 10:41:00.395450   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395458   47238 command_runner.go:130] >       "size": "9058936",
	I0916 10:41:00.395468   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.395475   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.395484   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.395491   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.395500   47238 command_runner.go:130] >     },
	I0916 10:41:00.395506   47238 command_runner.go:130] >     {
	I0916 10:41:00.395520   47238 command_runner.go:130] >       "id": "sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:41:00.395529   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.395542   47238 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:41:00.395553   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395563   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.395584   47238 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"
	I0916 10:41:00.395592   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395600   47238 command_runner.go:130] >       "size": "18562039",
	I0916 10:41:00.395609   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.395617   47238 command_runner.go:130] >       "username": "nonroot",
	I0916 10:41:00.395626   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.395633   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.395641   47238 command_runner.go:130] >     },
	I0916 10:41:00.395647   47238 command_runner.go:130] >     {
	I0916 10:41:00.395661   47238 command_runner.go:130] >       "id": "sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:41:00.395670   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.395678   47238 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:41:00.395686   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395693   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.395707   47238 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:41:00.395716   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395724   47238 command_runner.go:130] >       "size": "56909194",
	I0916 10:41:00.395733   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.395759   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.395768   47238 command_runner.go:130] >       },
	I0916 10:41:00.395776   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.395785   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.395792   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.395800   47238 command_runner.go:130] >     },
	I0916 10:41:00.395807   47238 command_runner.go:130] >     {
	I0916 10:41:00.395822   47238 command_runner.go:130] >       "id": "sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:41:00.395832   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.395841   47238 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:41:00.395849   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395856   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.395873   47238 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:41:00.395883   47238 command_runner.go:130] >       ],
	I0916 10:41:00.395892   47238 command_runner.go:130] >       "size": "28047142",
	I0916 10:41:00.395901   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.395910   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.395918   47238 command_runner.go:130] >       },
	I0916 10:41:00.395925   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.395934   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.395943   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.395951   47238 command_runner.go:130] >     },
	I0916 10:41:00.395958   47238 command_runner.go:130] >     {
	I0916 10:41:00.395971   47238 command_runner.go:130] >       "id": "sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:41:00.395980   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.395992   47238 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:41:00.396000   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396008   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.396021   47238 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"
	I0916 10:41:00.396027   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396037   47238 command_runner.go:130] >       "size": "26221554",
	I0916 10:41:00.396042   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.396047   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.396052   47238 command_runner.go:130] >       },
	I0916 10:41:00.396057   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.396063   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.396070   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.396077   47238 command_runner.go:130] >     },
	I0916 10:41:00.396085   47238 command_runner.go:130] >     {
	I0916 10:41:00.396111   47238 command_runner.go:130] >       "id": "sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:41:00.396122   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.396127   47238 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:41:00.396130   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396135   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.396142   47238 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"
	I0916 10:41:00.396148   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396153   47238 command_runner.go:130] >       "size": "30211884",
	I0916 10:41:00.396157   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.396161   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.396164   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.396168   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.396172   47238 command_runner.go:130] >     },
	I0916 10:41:00.396175   47238 command_runner.go:130] >     {
	I0916 10:41:00.396182   47238 command_runner.go:130] >       "id": "sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:41:00.396188   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.396193   47238 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:41:00.396196   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396200   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.396207   47238 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"
	I0916 10:41:00.396213   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396216   47238 command_runner.go:130] >       "size": "20177215",
	I0916 10:41:00.396220   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.396224   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.396230   47238 command_runner.go:130] >       },
	I0916 10:41:00.396236   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.396240   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.396244   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.396248   47238 command_runner.go:130] >     },
	I0916 10:41:00.396251   47238 command_runner.go:130] >     {
	I0916 10:41:00.396257   47238 command_runner.go:130] >       "id": "sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:41:00.396264   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.396269   47238 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:41:00.396272   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396276   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.396283   47238 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:41:00.396295   47238 command_runner.go:130] >       ],
	I0916 10:41:00.396303   47238 command_runner.go:130] >       "size": "320368",
	I0916 10:41:00.396307   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.396311   47238 command_runner.go:130] >         "value": "65535"
	I0916 10:41:00.396315   47238 command_runner.go:130] >       },
	I0916 10:41:00.396319   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.396323   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.396327   47238 command_runner.go:130] >       "pinned": true
	I0916 10:41:00.396330   47238 command_runner.go:130] >     }
	I0916 10:41:00.396334   47238 command_runner.go:130] >   ]
	I0916 10:41:00.396337   47238 command_runner.go:130] > }
	I0916 10:41:00.397229   47238 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:41:00.397246   47238 containerd.go:534] Images already preloaded, skipping extraction
	I0916 10:41:00.397300   47238 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:41:00.426822   47238 command_runner.go:130] > {
	I0916 10:41:00.426840   47238 command_runner.go:130] >   "images": [
	I0916 10:41:00.426844   47238 command_runner.go:130] >     {
	I0916 10:41:00.426854   47238 command_runner.go:130] >       "id": "sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:41:00.426861   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.426866   47238 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:41:00.426870   47238 command_runner.go:130] >       ],
	I0916 10:41:00.426877   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.426893   47238 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:41:00.426903   47238 command_runner.go:130] >       ],
	I0916 10:41:00.426911   47238 command_runner.go:130] >       "size": "36793393",
	I0916 10:41:00.426917   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.426925   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.426929   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.426936   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.426940   47238 command_runner.go:130] >     },
	I0916 10:41:00.426943   47238 command_runner.go:130] >     {
	I0916 10:41:00.426960   47238 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:41:00.426970   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.426978   47238 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:41:00.426985   47238 command_runner.go:130] >       ],
	I0916 10:41:00.426992   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427007   47238 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0916 10:41:00.427016   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427025   47238 command_runner.go:130] >       "size": "9058936",
	I0916 10:41:00.427034   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.427041   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.427047   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427051   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.427055   47238 command_runner.go:130] >     },
	I0916 10:41:00.427058   47238 command_runner.go:130] >     {
	I0916 10:41:00.427068   47238 command_runner.go:130] >       "id": "sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:41:00.427078   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.427091   47238 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:41:00.427100   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427107   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427121   47238 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"
	I0916 10:41:00.427129   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427136   47238 command_runner.go:130] >       "size": "18562039",
	I0916 10:41:00.427144   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.427150   47238 command_runner.go:130] >       "username": "nonroot",
	I0916 10:41:00.427155   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427162   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.427169   47238 command_runner.go:130] >     },
	I0916 10:41:00.427174   47238 command_runner.go:130] >     {
	I0916 10:41:00.427188   47238 command_runner.go:130] >       "id": "sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:41:00.427196   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.427205   47238 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:41:00.427213   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427220   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427234   47238 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:41:00.427240   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427246   47238 command_runner.go:130] >       "size": "56909194",
	I0916 10:41:00.427255   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.427261   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.427271   47238 command_runner.go:130] >       },
	I0916 10:41:00.427280   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.427288   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427297   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.427304   47238 command_runner.go:130] >     },
	I0916 10:41:00.427313   47238 command_runner.go:130] >     {
	I0916 10:41:00.427322   47238 command_runner.go:130] >       "id": "sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:41:00.427328   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.427335   47238 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:41:00.427344   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427351   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427380   47238 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:41:00.427389   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427396   47238 command_runner.go:130] >       "size": "28047142",
	I0916 10:41:00.427405   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.427409   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.427414   47238 command_runner.go:130] >       },
	I0916 10:41:00.427420   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.427429   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427436   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.427445   47238 command_runner.go:130] >     },
	I0916 10:41:00.427450   47238 command_runner.go:130] >     {
	I0916 10:41:00.427464   47238 command_runner.go:130] >       "id": "sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:41:00.427472   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.427481   47238 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:41:00.427490   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427496   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427508   47238 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"
	I0916 10:41:00.427517   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427524   47238 command_runner.go:130] >       "size": "26221554",
	I0916 10:41:00.427533   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.427547   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.427554   47238 command_runner.go:130] >       },
	I0916 10:41:00.427561   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.427570   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427578   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.427585   47238 command_runner.go:130] >     },
	I0916 10:41:00.427589   47238 command_runner.go:130] >     {
	I0916 10:41:00.427595   47238 command_runner.go:130] >       "id": "sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:41:00.427603   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.427611   47238 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:41:00.427621   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427627   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427638   47238 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"
	I0916 10:41:00.427649   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427656   47238 command_runner.go:130] >       "size": "30211884",
	I0916 10:41:00.427662   47238 command_runner.go:130] >       "uid": null,
	I0916 10:41:00.427668   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.427674   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427681   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.427687   47238 command_runner.go:130] >     },
	I0916 10:41:00.427695   47238 command_runner.go:130] >     {
	I0916 10:41:00.427708   47238 command_runner.go:130] >       "id": "sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:41:00.427717   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.427726   47238 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:41:00.427731   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427751   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427767   47238 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"
	I0916 10:41:00.427776   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427783   47238 command_runner.go:130] >       "size": "20177215",
	I0916 10:41:00.427792   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.427799   47238 command_runner.go:130] >         "value": "0"
	I0916 10:41:00.427806   47238 command_runner.go:130] >       },
	I0916 10:41:00.427817   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.427824   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427835   47238 command_runner.go:130] >       "pinned": false
	I0916 10:41:00.427844   47238 command_runner.go:130] >     },
	I0916 10:41:00.427852   47238 command_runner.go:130] >     {
	I0916 10:41:00.427867   47238 command_runner.go:130] >       "id": "sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:41:00.427877   47238 command_runner.go:130] >       "repoTags": [
	I0916 10:41:00.427885   47238 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:41:00.427891   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427897   47238 command_runner.go:130] >       "repoDigests": [
	I0916 10:41:00.427907   47238 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:41:00.427913   47238 command_runner.go:130] >       ],
	I0916 10:41:00.427920   47238 command_runner.go:130] >       "size": "320368",
	I0916 10:41:00.427925   47238 command_runner.go:130] >       "uid": {
	I0916 10:41:00.427932   47238 command_runner.go:130] >         "value": "65535"
	I0916 10:41:00.427938   47238 command_runner.go:130] >       },
	I0916 10:41:00.427944   47238 command_runner.go:130] >       "username": "",
	I0916 10:41:00.427950   47238 command_runner.go:130] >       "spec": null,
	I0916 10:41:00.427959   47238 command_runner.go:130] >       "pinned": true
	I0916 10:41:00.427967   47238 command_runner.go:130] >     }
	I0916 10:41:00.427974   47238 command_runner.go:130] >   ]
	I0916 10:41:00.427977   47238 command_runner.go:130] > }
	I0916 10:41:00.428100   47238 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:41:00.428111   47238 cache_images.go:84] Images are preloaded, skipping loading
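The two `sudo crictl images --output json` runs above drive the preload check: minikube decodes the JSON and concludes the runtime already holds every image it needs. A minimal sketch of consuming that payload, assuming only the field names visible in the log (the structs below are illustrative, not minikube's own types):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImage mirrors the fields shown in the JSON above.
type crictlImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
	Pinned      bool     `json:"pinned"`
}

type crictlImageList struct {
	Images []crictlImage `json:"images"`
}

func main() {
	// Same command the log runs over SSH inside the node.
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}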
	I0916 10:41:00.428118   47238 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.31.1 containerd true true} ...
	I0916 10:41:00.428243   47238 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-016570 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
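The kubelet unit above is rendered from the config struct that follows it. A sketch of that rendering with text/template, under the assumption that the flags are simply substituted; the template and field names here are illustrative, not minikube's actual kubeadm.go template:

package main

import (
	"os"
	"text/template"
)

// unitTemplate reproduces the drop-in shown in the log; minikube's real
// template covers more knobs.
const unitTemplate = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTemplate))
	data := struct{ KubernetesVersion, NodeName, NodeIP string }{
		"v1.31.1", "functional-016570", "192.168.49.2",
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}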
	I0916 10:41:00.428307   47238 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:41:00.461913   47238 command_runner.go:130] > {
	I0916 10:41:00.461938   47238 command_runner.go:130] >   "status": {
	I0916 10:41:00.461943   47238 command_runner.go:130] >     "conditions": [
	I0916 10:41:00.461947   47238 command_runner.go:130] >       {
	I0916 10:41:00.461953   47238 command_runner.go:130] >         "type": "RuntimeReady",
	I0916 10:41:00.461958   47238 command_runner.go:130] >         "status": true,
	I0916 10:41:00.461962   47238 command_runner.go:130] >         "reason": "",
	I0916 10:41:00.461967   47238 command_runner.go:130] >         "message": ""
	I0916 10:41:00.461972   47238 command_runner.go:130] >       },
	I0916 10:41:00.461988   47238 command_runner.go:130] >       {
	I0916 10:41:00.461994   47238 command_runner.go:130] >         "type": "NetworkReady",
	I0916 10:41:00.462000   47238 command_runner.go:130] >         "status": true,
	I0916 10:41:00.462005   47238 command_runner.go:130] >         "reason": "",
	I0916 10:41:00.462020   47238 command_runner.go:130] >         "message": ""
	I0916 10:41:00.462025   47238 command_runner.go:130] >       },
	I0916 10:41:00.462035   47238 command_runner.go:130] >       {
	I0916 10:41:00.462042   47238 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings",
	I0916 10:41:00.462047   47238 command_runner.go:130] >         "status": true,
	I0916 10:41:00.462051   47238 command_runner.go:130] >         "reason": "",
	I0916 10:41:00.462055   47238 command_runner.go:130] >         "message": ""
	I0916 10:41:00.462059   47238 command_runner.go:130] >       }
	I0916 10:41:00.462062   47238 command_runner.go:130] >     ]
	I0916 10:41:00.462065   47238 command_runner.go:130] >   },
	I0916 10:41:00.462071   47238 command_runner.go:130] >   "cniconfig": {
	I0916 10:41:00.462075   47238 command_runner.go:130] >     "PluginDirs": [
	I0916 10:41:00.462079   47238 command_runner.go:130] >       "/opt/cni/bin"
	I0916 10:41:00.462082   47238 command_runner.go:130] >     ],
	I0916 10:41:00.462096   47238 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I0916 10:41:00.462106   47238 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I0916 10:41:00.462113   47238 command_runner.go:130] >     "Prefix": "eth",
	I0916 10:41:00.462120   47238 command_runner.go:130] >     "Networks": [
	I0916 10:41:00.462129   47238 command_runner.go:130] >       {
	I0916 10:41:00.462135   47238 command_runner.go:130] >         "Config": {
	I0916 10:41:00.462144   47238 command_runner.go:130] >           "Name": "cni-loopback",
	I0916 10:41:00.462148   47238 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0916 10:41:00.462159   47238 command_runner.go:130] >           "Plugins": [
	I0916 10:41:00.462163   47238 command_runner.go:130] >             {
	I0916 10:41:00.462168   47238 command_runner.go:130] >               "Network": {
	I0916 10:41:00.462173   47238 command_runner.go:130] >                 "type": "loopback",
	I0916 10:41:00.462179   47238 command_runner.go:130] >                 "ipam": {},
	I0916 10:41:00.462183   47238 command_runner.go:130] >                 "dns": {}
	I0916 10:41:00.462187   47238 command_runner.go:130] >               },
	I0916 10:41:00.462193   47238 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I0916 10:41:00.462199   47238 command_runner.go:130] >             }
	I0916 10:41:00.462205   47238 command_runner.go:130] >           ],
	I0916 10:41:00.462224   47238 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0916 10:41:00.462230   47238 command_runner.go:130] >         },
	I0916 10:41:00.462236   47238 command_runner.go:130] >         "IFName": "lo"
	I0916 10:41:00.462242   47238 command_runner.go:130] >       },
	I0916 10:41:00.462249   47238 command_runner.go:130] >       {
	I0916 10:41:00.462255   47238 command_runner.go:130] >         "Config": {
	I0916 10:41:00.462266   47238 command_runner.go:130] >           "Name": "kindnet",
	I0916 10:41:00.462272   47238 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0916 10:41:00.462295   47238 command_runner.go:130] >           "Plugins": [
	I0916 10:41:00.462301   47238 command_runner.go:130] >             {
	I0916 10:41:00.462305   47238 command_runner.go:130] >               "Network": {
	I0916 10:41:00.462313   47238 command_runner.go:130] >                 "type": "ptp",
	I0916 10:41:00.462323   47238 command_runner.go:130] >                 "ipam": {
	I0916 10:41:00.462330   47238 command_runner.go:130] >                   "type": "host-local"
	I0916 10:41:00.462340   47238 command_runner.go:130] >                 },
	I0916 10:41:00.462346   47238 command_runner.go:130] >                 "dns": {}
	I0916 10:41:00.462360   47238 command_runner.go:130] >               },
	I0916 10:41:00.462383   47238 command_runner.go:130] >               "Source": "{\"ipMasq\":false,\"ipam\":{\"dataDir\":\"/run/cni-ipam-state\",\"ranges\":[[{\"subnet\":\"10.244.0.0/24\"}]],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"type\":\"host-local\"},\"mtu\":1500,\"type\":\"ptp\"}"
	I0916 10:41:00.462400   47238 command_runner.go:130] >             },
	I0916 10:41:00.462406   47238 command_runner.go:130] >             {
	I0916 10:41:00.462410   47238 command_runner.go:130] >               "Network": {
	I0916 10:41:00.462414   47238 command_runner.go:130] >                 "type": "portmap",
	I0916 10:41:00.462423   47238 command_runner.go:130] >                 "capabilities": {
	I0916 10:41:00.462433   47238 command_runner.go:130] >                   "portMappings": true
	I0916 10:41:00.462442   47238 command_runner.go:130] >                 },
	I0916 10:41:00.462449   47238 command_runner.go:130] >                 "ipam": {},
	I0916 10:41:00.462460   47238 command_runner.go:130] >                 "dns": {}
	I0916 10:41:00.462466   47238 command_runner.go:130] >               },
	I0916 10:41:00.462480   47238 command_runner.go:130] >               "Source": "{\"capabilities\":{\"portMappings\":true},\"type\":\"portmap\"}"
	I0916 10:41:00.462489   47238 command_runner.go:130] >             }
	I0916 10:41:00.462495   47238 command_runner.go:130] >           ],
	I0916 10:41:00.462540   47238 command_runner.go:130] >           "Source": "\n{\n\t\"cniVersion\": \"0.3.1\",\n\t\"name\": \"kindnet\",\n\t\"plugins\": [\n\t{\n\t\t\"type\": \"ptp\",\n\t\t\"ipMasq\": false,\n\t\t\"ipam\": {\n\t\t\t\"type\": \"host-local\",\n\t\t\t\"dataDir\": \"/run/cni-ipam-state\",\n\t\t\t\"routes\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t{ \"dst\": \"0.0.0.0/0\" }\n\t\t\t],\n\t\t\t\"ranges\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t[ { \"subnet\": \"10.244.0.0/24\" } ]\n\t\t\t]\n\t\t}\n\t\t,\n\t\t\"mtu\": 1500\n\t\t\n\t},\n\t{\n\t\t\"type\": \"portmap\",\n\t\t\"capabilities\": {\n\t\t\t\"portMappings\": true\n\t\t}\n\t}\n\t]\n}\n"
	I0916 10:41:00.462552   47238 command_runner.go:130] >         },
	I0916 10:41:00.462559   47238 command_runner.go:130] >         "IFName": "eth0"
	I0916 10:41:00.462564   47238 command_runner.go:130] >       }
	I0916 10:41:00.462571   47238 command_runner.go:130] >     ]
	I0916 10:41:00.462578   47238 command_runner.go:130] >   },
	I0916 10:41:00.462585   47238 command_runner.go:130] >   "config": {
	I0916 10:41:00.462594   47238 command_runner.go:130] >     "containerd": {
	I0916 10:41:00.462602   47238 command_runner.go:130] >       "snapshotter": "overlayfs",
	I0916 10:41:00.462612   47238 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I0916 10:41:00.462622   47238 command_runner.go:130] >       "defaultRuntime": {
	I0916 10:41:00.462631   47238 command_runner.go:130] >         "runtimeType": "",
	I0916 10:41:00.462635   47238 command_runner.go:130] >         "runtimePath": "",
	I0916 10:41:00.462643   47238 command_runner.go:130] >         "runtimeEngine": "",
	I0916 10:41:00.462653   47238 command_runner.go:130] >         "PodAnnotations": null,
	I0916 10:41:00.462663   47238 command_runner.go:130] >         "ContainerAnnotations": null,
	I0916 10:41:00.462673   47238 command_runner.go:130] >         "runtimeRoot": "",
	I0916 10:41:00.462682   47238 command_runner.go:130] >         "options": null,
	I0916 10:41:00.462693   47238 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0916 10:41:00.462706   47238 command_runner.go:130] >         "privileged_without_host_devices_all_devices_allowed": false,
	I0916 10:41:00.462715   47238 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0916 10:41:00.462720   47238 command_runner.go:130] >         "cniConfDir": "",
	I0916 10:41:00.462727   47238 command_runner.go:130] >         "cniMaxConfNum": 0,
	I0916 10:41:00.462734   47238 command_runner.go:130] >         "snapshotter": "",
	I0916 10:41:00.462743   47238 command_runner.go:130] >         "sandboxMode": ""
	I0916 10:41:00.462750   47238 command_runner.go:130] >       },
	I0916 10:41:00.462770   47238 command_runner.go:130] >       "untrustedWorkloadRuntime": {
	I0916 10:41:00.462780   47238 command_runner.go:130] >         "runtimeType": "",
	I0916 10:41:00.462788   47238 command_runner.go:130] >         "runtimePath": "",
	I0916 10:41:00.462797   47238 command_runner.go:130] >         "runtimeEngine": "",
	I0916 10:41:00.462804   47238 command_runner.go:130] >         "PodAnnotations": null,
	I0916 10:41:00.462813   47238 command_runner.go:130] >         "ContainerAnnotations": null,
	I0916 10:41:00.462817   47238 command_runner.go:130] >         "runtimeRoot": "",
	I0916 10:41:00.462822   47238 command_runner.go:130] >         "options": null,
	I0916 10:41:00.462833   47238 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0916 10:41:00.462847   47238 command_runner.go:130] >         "privileged_without_host_devices_all_devices_allowed": false,
	I0916 10:41:00.462854   47238 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0916 10:41:00.462864   47238 command_runner.go:130] >         "cniConfDir": "",
	I0916 10:41:00.462871   47238 command_runner.go:130] >         "cniMaxConfNum": 0,
	I0916 10:41:00.462881   47238 command_runner.go:130] >         "snapshotter": "",
	I0916 10:41:00.462888   47238 command_runner.go:130] >         "sandboxMode": ""
	I0916 10:41:00.462896   47238 command_runner.go:130] >       },
	I0916 10:41:00.462902   47238 command_runner.go:130] >       "runtimes": {
	I0916 10:41:00.462910   47238 command_runner.go:130] >         "runc": {
	I0916 10:41:00.462917   47238 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0916 10:41:00.462924   47238 command_runner.go:130] >           "runtimePath": "",
	I0916 10:41:00.462933   47238 command_runner.go:130] >           "runtimeEngine": "",
	I0916 10:41:00.462943   47238 command_runner.go:130] >           "PodAnnotations": null,
	I0916 10:41:00.462950   47238 command_runner.go:130] >           "ContainerAnnotations": null,
	I0916 10:41:00.462961   47238 command_runner.go:130] >           "runtimeRoot": "",
	I0916 10:41:00.462970   47238 command_runner.go:130] >           "options": {
	I0916 10:41:00.462983   47238 command_runner.go:130] >             "SystemdCgroup": false
	I0916 10:41:00.462989   47238 command_runner.go:130] >           },
	I0916 10:41:00.463009   47238 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0916 10:41:00.463017   47238 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I0916 10:41:00.463023   47238 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0916 10:41:00.463032   47238 command_runner.go:130] >           "cniConfDir": "",
	I0916 10:41:00.463042   47238 command_runner.go:130] >           "cniMaxConfNum": 0,
	I0916 10:41:00.463053   47238 command_runner.go:130] >           "snapshotter": "",
	I0916 10:41:00.463063   47238 command_runner.go:130] >           "sandboxMode": "podsandbox"
	I0916 10:41:00.463069   47238 command_runner.go:130] >         }
	I0916 10:41:00.463077   47238 command_runner.go:130] >       },
	I0916 10:41:00.463084   47238 command_runner.go:130] >       "noPivot": false,
	I0916 10:41:00.463095   47238 command_runner.go:130] >       "disableSnapshotAnnotations": true,
	I0916 10:41:00.463103   47238 command_runner.go:130] >       "discardUnpackedLayers": true,
	I0916 10:41:00.463109   47238 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I0916 10:41:00.463121   47238 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false
	I0916 10:41:00.463130   47238 command_runner.go:130] >     },
	I0916 10:41:00.463136   47238 command_runner.go:130] >     "cni": {
	I0916 10:41:00.463146   47238 command_runner.go:130] >       "binDir": "/opt/cni/bin",
	I0916 10:41:00.463154   47238 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I0916 10:41:00.463164   47238 command_runner.go:130] >       "maxConfNum": 1,
	I0916 10:41:00.463171   47238 command_runner.go:130] >       "setupSerially": false,
	I0916 10:41:00.463183   47238 command_runner.go:130] >       "confTemplate": "",
	I0916 10:41:00.463193   47238 command_runner.go:130] >       "ipPref": ""
	I0916 10:41:00.463198   47238 command_runner.go:130] >     },
	I0916 10:41:00.463204   47238 command_runner.go:130] >     "registry": {
	I0916 10:41:00.463211   47238 command_runner.go:130] >       "configPath": "/etc/containerd/certs.d",
	I0916 10:41:00.463220   47238 command_runner.go:130] >       "mirrors": null,
	I0916 10:41:00.463231   47238 command_runner.go:130] >       "configs": null,
	I0916 10:41:00.463238   47238 command_runner.go:130] >       "auths": null,
	I0916 10:41:00.463248   47238 command_runner.go:130] >       "headers": null
	I0916 10:41:00.463253   47238 command_runner.go:130] >     },
	I0916 10:41:00.463263   47238 command_runner.go:130] >     "imageDecryption": {
	I0916 10:41:00.463270   47238 command_runner.go:130] >       "keyModel": "node"
	I0916 10:41:00.463276   47238 command_runner.go:130] >     },
	I0916 10:41:00.463297   47238 command_runner.go:130] >     "disableTCPService": true,
	I0916 10:41:00.463302   47238 command_runner.go:130] >     "streamServerAddress": "",
	I0916 10:41:00.463309   47238 command_runner.go:130] >     "streamServerPort": "10010",
	I0916 10:41:00.463317   47238 command_runner.go:130] >     "streamIdleTimeout": "4h0m0s",
	I0916 10:41:00.463326   47238 command_runner.go:130] >     "enableSelinux": false,
	I0916 10:41:00.463334   47238 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I0916 10:41:00.463344   47238 command_runner.go:130] >     "sandboxImage": "registry.k8s.io/pause:3.10",
	I0916 10:41:00.463354   47238 command_runner.go:130] >     "statsCollectPeriod": 10,
	I0916 10:41:00.463365   47238 command_runner.go:130] >     "systemdCgroup": false,
	I0916 10:41:00.463372   47238 command_runner.go:130] >     "enableTLSStreaming": false,
	I0916 10:41:00.463382   47238 command_runner.go:130] >     "x509KeyPairStreaming": {
	I0916 10:41:00.463388   47238 command_runner.go:130] >       "tlsCertFile": "",
	I0916 10:41:00.463396   47238 command_runner.go:130] >       "tlsKeyFile": ""
	I0916 10:41:00.463400   47238 command_runner.go:130] >     },
	I0916 10:41:00.463404   47238 command_runner.go:130] >     "maxContainerLogSize": 16384,
	I0916 10:41:00.463413   47238 command_runner.go:130] >     "disableCgroup": false,
	I0916 10:41:00.463425   47238 command_runner.go:130] >     "disableApparmor": false,
	I0916 10:41:00.463433   47238 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I0916 10:41:00.463443   47238 command_runner.go:130] >     "maxConcurrentDownloads": 3,
	I0916 10:41:00.463450   47238 command_runner.go:130] >     "disableProcMount": false,
	I0916 10:41:00.463459   47238 command_runner.go:130] >     "unsetSeccompProfile": "",
	I0916 10:41:00.463467   47238 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I0916 10:41:00.463477   47238 command_runner.go:130] >     "disableHugetlbController": true,
	I0916 10:41:00.463488   47238 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I0916 10:41:00.463496   47238 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I0916 10:41:00.463501   47238 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I0916 10:41:00.463513   47238 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I0916 10:41:00.463528   47238 command_runner.go:130] >     "enableUnprivilegedICMP": false,
	I0916 10:41:00.463538   47238 command_runner.go:130] >     "enableCDI": false,
	I0916 10:41:00.463545   47238 command_runner.go:130] >     "cdiSpecDirs": [
	I0916 10:41:00.463554   47238 command_runner.go:130] >       "/etc/cdi",
	I0916 10:41:00.463561   47238 command_runner.go:130] >       "/var/run/cdi"
	I0916 10:41:00.463569   47238 command_runner.go:130] >     ],
	I0916 10:41:00.463576   47238 command_runner.go:130] >     "imagePullProgressTimeout": "5m0s",
	I0916 10:41:00.463587   47238 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I0916 10:41:00.463593   47238 command_runner.go:130] >     "imagePullWithSyncFs": false,
	I0916 10:41:00.463628   47238 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I0916 10:41:00.463646   47238 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I0916 10:41:00.463658   47238 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0916 10:41:00.463670   47238 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0916 10:41:00.463681   47238 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0916 10:41:00.463687   47238 command_runner.go:130] >   },
	I0916 10:41:00.463697   47238 command_runner.go:130] >   "golang": "go1.22.7",
	I0916 10:41:00.463704   47238 command_runner.go:130] >   "lastCNILoadStatus": "OK",
	I0916 10:41:00.463711   47238 command_runner.go:130] >   "lastCNILoadStatus.default": "OK"
	I0916 10:41:00.463714   47238 command_runner.go:130] > }
	I0916 10:41:00.464161   47238 cni.go:84] Creating CNI manager for ""
	I0916 10:41:00.464179   47238 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:41:00.464188   47238 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
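The cni.go lines above encode a simple decision: the docker driver paired with a non-docker runtime gets kindnet. A hedged sketch of that rule (chooseCNI is a hypothetical helper, not minikube's cni.Manager):

package main

import "fmt"

// chooseCNI mirrors the rule implied by the log line above; the real
// selection logic weighs many more driver/runtime/CNI-flag combinations.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "" // fall through to minikube's defaults
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // kindnet
}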
	I0916 10:41:00.464207   47238 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-016570 NodeName:functional-016570 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:41:00.464364   47238 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-016570"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:41:00.464436   47238 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:41:00.472245   47238 command_runner.go:130] > kubeadm
	I0916 10:41:00.472268   47238 command_runner.go:130] > kubectl
	I0916 10:41:00.472273   47238 command_runner.go:130] > kubelet
	I0916 10:41:00.472895   47238 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:41:00.472945   47238 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:41:00.481214   47238 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0916 10:41:00.498427   47238 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:41:00.514918   47238 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
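At this point the kubeadm config printed above exists on the node as /var/tmp/minikube/kubeadm.yaml.new. One way to sanity-check such a file by hand, assuming a kubeadm >= 1.26 binary (which added `kubeadm config validate`) is on the PATH; a sketch, not part of minikube's flow:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Validate the file minikube just wrote; kubeadm prints per-field errors.
	out, err := exec.Command("kubeadm", "config", "validate",
		"--config", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("invalid config:", err)
	}
}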
	I0916 10:41:00.531052   47238 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:41:00.534425   47238 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I0916 10:41:00.534495   47238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:00.629580   47238 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:41:00.640325   47238 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570 for IP: 192.168.49.2
	I0916 10:41:00.640346   47238 certs.go:194] generating shared ca certs ...
	I0916 10:41:00.640361   47238 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:00.640509   47238 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:41:00.640567   47238 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:41:00.640579   47238 certs.go:256] generating profile certs ...
	I0916 10:41:00.640681   47238 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.key
	I0916 10:41:00.640761   47238 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/apiserver.key.50ed18d6
	I0916 10:41:00.640814   47238 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/proxy-client.key
	I0916 10:41:00.640827   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:41:00.640846   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:41:00.640863   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:41:00.640880   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:41:00.640896   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:41:00.640916   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:41:00.640934   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:41:00.640952   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:41:00.641009   47238 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:41:00.641051   47238 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:41:00.641064   47238 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:41:00.641093   47238 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:41:00.641124   47238 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:41:00.641155   47238 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:41:00.641215   47238 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:41:00.641259   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:41:00.641279   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:41:00.641297   47238 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:00.641915   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:41:00.665592   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:41:00.688756   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:41:00.711937   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:41:00.734009   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:41:00.756765   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:41:00.779358   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:41:00.801364   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:41:00.823784   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:41:00.845848   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:41:00.868340   47238 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:41:00.890713   47238 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:41:00.906787   47238 ssh_runner.go:195] Run: openssl version
	I0916 10:41:00.911643   47238 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:41:00.911707   47238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:41:00.920393   47238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:41:00.923485   47238 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:41:00.923522   47238 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:41:00.923560   47238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:41:00.929537   47238 command_runner.go:130] > 3ec20f2e
	I0916 10:41:00.929711   47238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:41:00.937990   47238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:41:00.946871   47238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:00.950159   47238 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:00.950221   47238 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:00.950267   47238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:00.956676   47238 command_runner.go:130] > b5213941
	I0916 10:41:00.956818   47238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:41:00.965669   47238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:41:00.974626   47238 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:41:00.977986   47238 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:41:00.978034   47238 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:41:00.978072   47238 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:41:00.984302   47238 command_runner.go:130] > 51391683
	I0916 10:41:00.984552   47238 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
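Each `ln -fs` above publishes a CA under its OpenSSL subject hash (e.g. b5213941.0) so TLS clients can locate it in /etc/ssl/certs. A sketch of one iteration, shelling out to the same openssl invocation the log uses (needs root to write /etc/ssl/certs; error handling trimmed):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of a PEM file and symlinks
// /etc/ssl/certs/<hash>.0 at it, like the ln -fs commands above.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // -f semantics: replace a stale link if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}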
	I0916 10:41:00.993091   47238 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:41:00.996303   47238 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:41:00.996345   47238 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 10:41:00.996355   47238 command_runner.go:130] > Device: 801h/2049d	Inode: 557518      Links: 1
	I0916 10:41:00.996366   47238 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:41:00.996379   47238 command_runner.go:130] > Access: 2024-09-16 10:40:29.794515298 +0000
	I0916 10:41:00.996389   47238 command_runner.go:130] > Modify: 2024-09-16 10:40:29.794515298 +0000
	I0916 10:41:00.996417   47238 command_runner.go:130] > Change: 2024-09-16 10:40:29.794515298 +0000
	I0916 10:41:00.996435   47238 command_runner.go:130] >  Birth: 2024-09-16 10:40:29.794515298 +0000
	I0916 10:41:00.996495   47238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:41:01.002400   47238 command_runner.go:130] > Certificate will not expire
	I0916 10:41:01.002572   47238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:41:01.008849   47238 command_runner.go:130] > Certificate will not expire
	I0916 10:41:01.008920   47238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:41:01.014816   47238 command_runner.go:130] > Certificate will not expire
	I0916 10:41:01.015133   47238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:41:01.021371   47238 command_runner.go:130] > Certificate will not expire
	I0916 10:41:01.021591   47238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:41:01.027476   47238 command_runner.go:130] > Certificate will not expire
	I0916 10:41:01.027703   47238 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 10:41:01.033586   47238 command_runner.go:130] > Certificate will not expire
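The `-checkend 86400` probes above ask whether each certificate is still valid 24 hours from now. The same check in pure Go with crypto/x509, against one of the paths from this run:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}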
	I0916 10:41:01.033726   47238 kubeadm.go:392] StartCluster: {Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:41:01.033817   47238 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:41:01.033876   47238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:41:01.067524   47238 command_runner.go:130] > fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe
	I0916 10:41:01.067560   47238 command_runner.go:130] > 03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267
	I0916 10:41:01.067569   47238 command_runner.go:130] > bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75
	I0916 10:41:01.067578   47238 command_runner.go:130] > 80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f
	I0916 10:41:01.067586   47238 command_runner.go:130] > 0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86
	I0916 10:41:01.067595   47238 command_runner.go:130] > 0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee
	I0916 10:41:01.067604   47238 command_runner.go:130] > c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171
	I0916 10:41:01.067623   47238 command_runner.go:130] > b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25
	I0916 10:41:01.067651   47238 cri.go:89] found id: "fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe"
	I0916 10:41:01.067663   47238 cri.go:89] found id: "03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267"
	I0916 10:41:01.067669   47238 cri.go:89] found id: "bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75"
	I0916 10:41:01.067678   47238 cri.go:89] found id: "80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f"
	I0916 10:41:01.067683   47238 cri.go:89] found id: "0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86"
	I0916 10:41:01.067689   47238 cri.go:89] found id: "0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee"
	I0916 10:41:01.067695   47238 cri.go:89] found id: "c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171"
	I0916 10:41:01.067700   47238 cri.go:89] found id: "b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25"
	I0916 10:41:01.067707   47238 cri.go:89] found id: ""
	I0916 10:41:01.067782   47238 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0916 10:41:01.092260   47238 command_runner.go:130] > [{"ociVersion":"1.0.2-dev","id":"0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86","pid":1514,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86/rootfs","created":"2024-09-16T10:40:33.248379085Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.31.1","io.kubernetes.cri.sandbox-id":"8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"05bfea671b4b973ad25665da415eb7d0"},"owner":"root"},{"ociVer
sion":"1.0.2-dev","id":"03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267","pid":2297,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267/rootfs","created":"2024-09-16T10:40:44.571062175Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9924f10d-5beb-43b1-9782-44644a015b56"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee","pid":1517
,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee/rootfs","created":"2024-09-16T10:40:33.245231601Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.31.1","io.kubernetes.cri.sandbox-id":"caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5c4ebe83a62e176d48c858392b494ba5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","pid":1361,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cdebcb8c7807d8f7
4249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf/rootfs","created":"2024-09-16T10:40:33.026445222Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-016570_9ff8ce834d4b88cb05c2ce6dadcabd95","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9ff8ce834d4b88cb05c2ce6dadcabd95"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","pid":2382,"status":"running","bundle":"/run/c
ontainerd/io.containerd.runtime.v2.task/k8s.io/3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e/rootfs","created":"2024-09-16T10:40:54.834227465Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-7c65d6cfc9-59qm7_370e7aff-70ab-43f7-9770-098c21fd013d","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-7c65d6cfc9-59qm7","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"370e7aff-70ab-43f7-9770-098c21fd013d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5b66d77e8e33400b91593c23cc79
092e1262597c431c960d97c2f3351c50e961","pid":1360,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961/rootfs","created":"2024-09-16T10:40:33.024281022Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-016570_5333b7f22b4ca6fa3369f64c875d053e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5333b7f22b4ca6fa3369f64c875
d053e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f","pid":2009,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f/rootfs","created":"2024-09-16T10:40:43.922824123Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.31.1","io.kubernetes.cri.sandbox-id":"f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","io.kubernetes.cri.sandbox-name":"kube-proxy-w8qkq","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b4a00283-1d69-49c4-8c60-264ef3fd7aca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f
","pid":1383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f/rootfs","created":"2024-09-16T10:40:33.032806907Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-016570_05bfea671b4b973ad25665da415eb7d0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"05bfea671b4b973ad25665da415eb7d0"},"owner":"r
oot"},{"ociVersion":"1.0.2-dev","id":"b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25","pid":1447,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25/rootfs","created":"2024-09-16T10:40:33.17291957Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri.sandbox-id":"2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","io.kubernetes.cri.sandbox-name":"etcd-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9ff8ce834d4b88cb05c2ce6dadcabd95"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","pid":2251,"status":"runni
ng","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf/rootfs","created":"2024-09-16T10:40:44.503322931Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_9924f10d-5beb-43b1-9782-44644a015b56","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9924f10d-5beb-43b1-9782-44644a015b56"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bf96dac81b725b0cdd05c80d46fc
cb31fba58eb314cbefaf4fa45648dd564d75","pid":2058,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75/rootfs","created":"2024-09-16T10:40:44.122183432Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20240813-c6f155d6","io.kubernetes.cri.sandbox-id":"c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","io.kubernetes.cri.sandbox-name":"kindnet-5qjpd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8ee89403-0943-480c-9f48-4b25a0198f6d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171","pid":1522,"status":"running","bundle":"/run/containerd/io.containerd.run
time.v2.task/k8s.io/c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171/rootfs","created":"2024-09-16T10:40:33.25104973Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.31.1","io.kubernetes.cri.sandbox-id":"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5333b7f22b4ca6fa3369f64c875d053e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","pid":1934,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","rootfs":"/run/
containerd/io.containerd.runtime.v2.task/k8s.io/c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060/rootfs","created":"2024-09-16T10:40:43.727260795Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-5qjpd_8ee89403-0943-480c-9f48-4b25a0198f6d","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-5qjpd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8ee89403-0943-480c-9f48-4b25a0198f6d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","pid":1381,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/caa2007696d1b222
357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3/rootfs","created":"2024-09-16T10:40:33.032509578Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-016570_5c4ebe83a62e176d48c858392b494ba5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5c4ebe83a62e176d48c858392b494ba5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","pid":1927,"status":"runn
ing","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5/rootfs","created":"2024-09-16T10:40:43.632882566Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-w8qkq_b4a00283-1d69-49c4-8c60-264ef3fd7aca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-w8qkq","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b4a00283-1d69-49c4-8c60-264ef3fd7aca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fd0c81e7a39a2566405ad2950426958ab
0d7abfe073ce6517f67e87f2cc2dabe","pid":2413,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe/rootfs","created":"2024-09-16T10:40:54.906002595Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.11.3","io.kubernetes.cri.sandbox-id":"3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","io.kubernetes.cri.sandbox-name":"coredns-7c65d6cfc9-59qm7","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"370e7aff-70ab-43f7-9770-098c21fd013d"},"owner":"root"}]
	I0916 10:41:01.092720   47238 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86","pid":1514,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86/rootfs","created":"2024-09-16T10:40:33.248379085Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.31.1","io.kubernetes.cri.sandbox-id":"8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"05bfea671b4b973ad25665da415eb7d0"},"owner":"root"},{"ociVersion":
"1.0.2-dev","id":"03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267","pid":2297,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267/rootfs","created":"2024-09-16T10:40:44.571062175Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9924f10d-5beb-43b1-9782-44644a015b56"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee","pid":1517,"stat
us":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee/rootfs","created":"2024-09-16T10:40:33.245231601Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.31.1","io.kubernetes.cri.sandbox-id":"caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5c4ebe83a62e176d48c858392b494ba5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","pid":1361,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cdebcb8c7807d8f74249d9
4f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf/rootfs","created":"2024-09-16T10:40:33.026445222Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-016570_9ff8ce834d4b88cb05c2ce6dadcabd95","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9ff8ce834d4b88cb05c2ce6dadcabd95"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","pid":2382,"status":"running","bundle":"/run/contain
erd/io.containerd.runtime.v2.task/k8s.io/3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e/rootfs","created":"2024-09-16T10:40:54.834227465Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-7c65d6cfc9-59qm7_370e7aff-70ab-43f7-9770-098c21fd013d","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-7c65d6cfc9-59qm7","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"370e7aff-70ab-43f7-9770-098c21fd013d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5b66d77e8e33400b91593c23cc79092e12
62597c431c960d97c2f3351c50e961","pid":1360,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961/rootfs","created":"2024-09-16T10:40:33.024281022Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-016570_5333b7f22b4ca6fa3369f64c875d053e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5333b7f22b4ca6fa3369f64c875d053e"
},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f","pid":2009,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f/rootfs","created":"2024-09-16T10:40:43.922824123Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.31.1","io.kubernetes.cri.sandbox-id":"f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","io.kubernetes.cri.sandbox-name":"kube-proxy-w8qkq","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b4a00283-1d69-49c4-8c60-264ef3fd7aca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","pid
":1383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f/rootfs","created":"2024-09-16T10:40:33.032806907Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-016570_05bfea671b4b973ad25665da415eb7d0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"05bfea671b4b973ad25665da415eb7d0"},"owner":"root"},
{"ociVersion":"1.0.2-dev","id":"b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25","pid":1447,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25/rootfs","created":"2024-09-16T10:40:33.17291957Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri.sandbox-id":"2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","io.kubernetes.cri.sandbox-name":"etcd-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9ff8ce834d4b88cb05c2ce6dadcabd95"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","pid":2251,"status":"running","b
undle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf/rootfs","created":"2024-09-16T10:40:44.503322931Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_9924f10d-5beb-43b1-9782-44644a015b56","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9924f10d-5beb-43b1-9782-44644a015b56"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bf96dac81b725b0cdd05c80d46fccb31fb
a58eb314cbefaf4fa45648dd564d75","pid":2058,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75/rootfs","created":"2024-09-16T10:40:44.122183432Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20240813-c6f155d6","io.kubernetes.cri.sandbox-id":"c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","io.kubernetes.cri.sandbox-name":"kindnet-5qjpd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8ee89403-0943-480c-9f48-4b25a0198f6d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171","pid":1522,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v
2.task/k8s.io/c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171/rootfs","created":"2024-09-16T10:40:33.25104973Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.31.1","io.kubernetes.cri.sandbox-id":"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5333b7f22b4ca6fa3369f64c875d053e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","pid":1934,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","rootfs":"/run/contai
nerd/io.containerd.runtime.v2.task/k8s.io/c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060/rootfs","created":"2024-09-16T10:40:43.727260795Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-5qjpd_8ee89403-0943-480c-9f48-4b25a0198f6d","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-5qjpd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8ee89403-0943-480c-9f48-4b25a0198f6d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","pid":1381,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/caa2007696d1b222357e82
5d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3/rootfs","created":"2024-09-16T10:40:33.032509578Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-016570_5c4ebe83a62e176d48c858392b494ba5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5c4ebe83a62e176d48c858392b494ba5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","pid":1927,"status":"running","
bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5/rootfs","created":"2024-09-16T10:40:43.632882566Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-w8qkq_b4a00283-1d69-49c4-8c60-264ef3fd7aca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-w8qkq","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b4a00283-1d69-49c4-8c60-264ef3fd7aca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fd0c81e7a39a2566405ad2950426958ab0d7abf
e073ce6517f67e87f2cc2dabe","pid":2413,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe/rootfs","created":"2024-09-16T10:40:54.906002595Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.11.3","io.kubernetes.cri.sandbox-id":"3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","io.kubernetes.cri.sandbox-name":"coredns-7c65d6cfc9-59qm7","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"370e7aff-70ab-43f7-9770-098c21fd013d"},"owner":"root"}]
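The `runc --root /run/containerd/runc/k8s.io list -f json` payload logged above is a JSON array of container state objects. A hedged sketch of a Go struct that would decode the fields visible in this log (field names are taken directly from the JSON; the struct name is illustrative):

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // runcContainer mirrors the fields visible in `runc list -f json` output above.
    type runcContainer struct {
        OCIVersion  string            `json:"ociVersion"`
        ID          string            `json:"id"`
        PID         int               `json:"pid"`
        Status      string            `json:"status"` // e.g. "running", "paused"
        Bundle      string            `json:"bundle"`
        Rootfs      string            `json:"rootfs"`
        Created     time.Time         `json:"created"`
        Annotations map[string]string `json:"annotations"` // io.kubernetes.cri.* keys
        Owner       string            `json:"owner"`
    }

    func main() {
        // Shortened example payload in the same shape as the log's JSON.
        raw := `[{"ociVersion":"1.0.2-dev","id":"abc123","pid":1514,"status":"running",
        "bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/abc123",
        "rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/abc123/rootfs",
        "created":"2024-09-16T10:40:33.248379085Z","annotations":{},"owner":"root"}]`
        var cs []runcContainer
        if err := json.Unmarshal([]byte(raw), &cs); err != nil {
            panic(err)
        }
        fmt.Println(cs[0].ID, cs[0].Status) // abc123 running
    }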
	I0916 10:41:01.092944   47238 cri.go:126] list returned 16 containers
	I0916 10:41:01.092952   47238 cri.go:129] container: {ID:0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86 Status:running}
	I0916 10:41:01.092965   47238 cri.go:135] skipping {0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86 running}: state = "running", want "paused"
	I0916 10:41:01.092973   47238 cri.go:129] container: {ID:03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267 Status:running}
	I0916 10:41:01.092977   47238 cri.go:135] skipping {03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267 running}: state = "running", want "paused"
	I0916 10:41:01.092981   47238 cri.go:129] container: {ID:0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee Status:running}
	I0916 10:41:01.092985   47238 cri.go:135] skipping {0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee running}: state = "running", want "paused"
	I0916 10:41:01.092989   47238 cri.go:129] container: {ID:2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf Status:running}
	I0916 10:41:01.092995   47238 cri.go:131] skipping 2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf - not in ps
	I0916 10:41:01.092999   47238 cri.go:129] container: {ID:3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e Status:running}
	I0916 10:41:01.093005   47238 cri.go:131] skipping 3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e - not in ps
	I0916 10:41:01.093009   47238 cri.go:129] container: {ID:5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961 Status:running}
	I0916 10:41:01.093013   47238 cri.go:131] skipping 5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961 - not in ps
	I0916 10:41:01.093016   47238 cri.go:129] container: {ID:80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f Status:running}
	I0916 10:41:01.093020   47238 cri.go:135] skipping {80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f running}: state = "running", want "paused"
	I0916 10:41:01.093025   47238 cri.go:129] container: {ID:8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f Status:running}
	I0916 10:41:01.093030   47238 cri.go:131] skipping 8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f - not in ps
	I0916 10:41:01.093037   47238 cri.go:129] container: {ID:b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25 Status:running}
	I0916 10:41:01.093041   47238 cri.go:135] skipping {b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25 running}: state = "running", want "paused"
	I0916 10:41:01.093049   47238 cri.go:129] container: {ID:b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf Status:running}
	I0916 10:41:01.093053   47238 cri.go:131] skipping b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf - not in ps
	I0916 10:41:01.093057   47238 cri.go:129] container: {ID:bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75 Status:running}
	I0916 10:41:01.093060   47238 cri.go:135] skipping {bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75 running}: state = "running", want "paused"
	I0916 10:41:01.093065   47238 cri.go:129] container: {ID:c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171 Status:running}
	I0916 10:41:01.093069   47238 cri.go:135] skipping {c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171 running}: state = "running", want "paused"
	I0916 10:41:01.093075   47238 cri.go:129] container: {ID:c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060 Status:running}
	I0916 10:41:01.093080   47238 cri.go:131] skipping c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060 - not in ps
	I0916 10:41:01.093087   47238 cri.go:129] container: {ID:caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3 Status:running}
	I0916 10:41:01.093092   47238 cri.go:131] skipping caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3 - not in ps
	I0916 10:41:01.093098   47238 cri.go:129] container: {ID:f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5 Status:running}
	I0916 10:41:01.093101   47238 cri.go:131] skipping f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5 - not in ps
	I0916 10:41:01.093105   47238 cri.go:129] container: {ID:fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe Status:running}
	I0916 10:41:01.093109   47238 cri.go:135] skipping {fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe running}: state = "running", want "paused"
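The cri.go selection pass above applies two filters: an entry is skipped with "not in ps" when its ID did not appear in the earlier crictl listing (these are the pod sandboxes), and skipped with a state mismatch when it is "running" but the caller asked for {State:paused}. A condensed, self-contained sketch of that filter — illustrative names, not minikube's code:

    package main

    import "fmt"

    type containerState struct {
        ID     string
        Status string
    }

    // filterByState mirrors the cri.go pass logged above: keep only containers
    // that showed up in crictl's ID list AND are in the wanted state.
    func filterByState(all []containerState, inPS map[string]bool, want string) []string {
        var keep []string
        for _, c := range all {
            if !inPS[c.ID] {
                continue // "skipping <id> - not in ps" (sandbox containers)
            }
            if c.Status != want {
                continue // e.g. state = "running", want "paused"
            }
            keep = append(keep, c.ID)
        }
        return keep
    }

    func main() {
        all := []containerState{{"fd0c81e7...", "running"}, {"2cdebcb8...", "running"}}
        inPS := map[string]bool{"fd0c81e7...": true}
        // Nothing is paused, so nothing is kept — the same outcome as the log above.
        fmt.Println(filterByState(all, inPS, "paused")) // []
    }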
	I0916 10:41:01.093144   47238 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:41:01.100810   47238 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0916 10:41:01.100830   47238 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0916 10:41:01.100837   47238 command_runner.go:130] > /var/lib/minikube/etcd:
	I0916 10:41:01.100840   47238 command_runner.go:130] > member
	I0916 10:41:01.101500   47238 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:41:01.101515   47238 kubeadm.go:593] restartPrimaryControlPlane start ...
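The restart decision above hinges on three paths existing on the node: the kubelet config, the kubeadm flags file, and the etcd data directory (whose `member` subdirectory the `sudo ls` just confirmed). A hedged sketch of an equivalent presence check, assuming a simple all-or-nothing rule:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Paths probed by the log's `sudo ls`; all must exist to attempt a restart.
        paths := []string{
            "/var/lib/kubelet/config.yaml",
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/minikube/etcd",
        }
        for _, p := range paths {
            if _, err := os.Stat(p); err != nil {
                fmt.Println("missing", p, "- full cluster init required")
                return
            }
        }
        fmt.Println("found existing configuration files, will attempt cluster restart")
    }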
	I0916 10:41:01.101555   47238 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:41:01.109548   47238 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:41:01.110028   47238 kubeconfig.go:125] found "functional-016570" server: "https://192.168.49.2:8441"
	I0916 10:41:01.110447   47238 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:41:01.110695   47238 kapi.go:59] client config for functional-016570: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
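The rest.Config dump above is what client-go derives from the kubeconfig: the API server host plus the client certificate, key, and CA file paths. A minimal sketch of building such a client with the standard client-go helpers (the kubeconfig path is the one from the log; error handling kept deliberately terse):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the same kubeconfig file the log references and build a typed client.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3687/kubeconfig")
        if err != nil {
            panic(err)
        }
        fmt.Println("server:", cfg.Host) // e.g. https://192.168.49.2:8441
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }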
	I0916 10:41:01.111140   47238 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:41:01.111329   47238 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:41:01.119477   47238 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0916 10:41:01.119528   47238 kubeadm.go:597] duration metric: took 18.007161ms to restartPrimaryControlPlane
	I0916 10:41:01.119540   47238 kubeadm.go:394] duration metric: took 85.821653ms to StartCluster
	I0916 10:41:01.119555   47238 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:01.119637   47238 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:41:01.120636   47238 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:01.120937   47238 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:41:01.121019   47238 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:41:01.121148   47238 addons.go:69] Setting storage-provisioner=true in profile "functional-016570"
	I0916 10:41:01.121170   47238 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:41:01.121190   47238 addons.go:69] Setting default-storageclass=true in profile "functional-016570"
	I0916 10:41:01.121179   47238 addons.go:234] Setting addon storage-provisioner=true in "functional-016570"
	I0916 10:41:01.121219   47238 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-016570"
	W0916 10:41:01.121230   47238 addons.go:243] addon storage-provisioner should already be in state true
	I0916 10:41:01.121259   47238 host.go:66] Checking if "functional-016570" exists ...
	I0916 10:41:01.121531   47238 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
	I0916 10:41:01.121709   47238 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
	I0916 10:41:01.123967   47238 out.go:177] * Verifying Kubernetes components...
	I0916 10:41:01.125424   47238 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:01.142780   47238 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:41:01.143078   47238 kapi.go:59] client config for functional-016570: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:41:01.143485   47238 addons.go:234] Setting addon default-storageclass=true in "functional-016570"
	W0916 10:41:01.143505   47238 addons.go:243] addon default-storageclass should already be in state true
	I0916 10:41:01.143538   47238 host.go:66] Checking if "functional-016570" exists ...
	I0916 10:41:01.144007   47238 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
	I0916 10:41:01.144431   47238 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:41:01.145990   47238 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:41:01.146008   47238 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:41:01.146052   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:41:01.162172   47238 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:41:01.162199   47238 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:41:01.162261   47238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
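The `docker container inspect -f ...` calls above resolve which host port Docker published for the container's SSH port (22/tcp); the resulting port (32783 below) is what the SSH client connects to on 127.0.0.1. A sketch that shells out the same way, using the exact Go template from the log — exec-based and illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same Go template the log uses to find the published host port for 22/tcp.
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-016570").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh port:", strings.TrimSpace(string(out))) // e.g. 32783
    }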
	I0916 10:41:01.171236   47238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:41:01.184202   47238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:41:01.229756   47238 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:41:01.240354   47238 node_ready.go:35] waiting up to 6m0s for node "functional-016570" to be "Ready" ...
	I0916 10:41:01.240484   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:01.240493   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.240502   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.240509   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.247024   47238 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:41:01.247046   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.247055   47238 round_trippers.go:580]     Audit-Id: 573c33ef-95d1-46e9-86b3-8fb629398e97
	I0916 10:41:01.247060   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.247064   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.247068   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.247072   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.247075   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.247201   47238 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd
ate","apiVersion":"v1","time":"2024-09-16T10:40:35Z","fieldsType":"Fiel [truncated 5025 chars]
	I0916 10:41:01.248128   47238 node_ready.go:49] node "functional-016570" has status "Ready":"True"
	I0916 10:41:01.248151   47238 node_ready.go:38] duration metric: took 7.761447ms for node "functional-016570" to be "Ready" ...
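The node_ready wait above resolves by GETting the Node object and reading its Ready condition from `status.conditions`. A hedged client-go sketch of that check (function name is illustrative; assumes a clientset built as in the earlier sketch):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeIsReady reports whether the named node has condition Ready=True —
    // the same state the log records as `has status "Ready":"True"`.
    func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }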
	I0916 10:41:01.248162   47238 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:41:01.248237   47238 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:41:01.248254   47238 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:41:01.248339   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:41:01.248350   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.248359   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.248370   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.250922   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:01.250955   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.250964   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.250970   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.250975   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.250979   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.250984   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.250989   47238 round_trippers.go:580]     Audit-Id: daa1e234-c42a-44df-b468-9e9da5ebea7d
	I0916 10:41:01.251529   47238 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-59qm7","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"370e7aff-70ab-43f7-9770-098c21fd013d","resourceVersion":"412","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"053910d0-0f80-4b44-85c9-939b3882c87c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"053910d0-0f80-4b44-85c9-939b3882c87c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58828 chars]
	I0916 10:41:01.254797   47238 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-59qm7" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.254873   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-59qm7
	I0916 10:41:01.254880   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.254888   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.254891   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.256711   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.256725   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.256731   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.256735   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.256739   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.256743   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.256746   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.256749   47238 round_trippers.go:580]     Audit-Id: c03a4f98-a474-434d-b9f9-43ee485267ba
	I0916 10:41:01.256929   47238 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-59qm7","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"370e7aff-70ab-43f7-9770-098c21fd013d","resourceVersion":"412","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"053910d0-0f80-4b44-85c9-939b3882c87c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"053910d0-0f80-4b44-85c9-939b3882c87c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6481 chars]
	I0916 10:41:01.257355   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:01.257368   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.257375   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.257378   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.258961   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.258974   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.258983   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.258988   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.258992   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.258995   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.259001   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.259009   47238 round_trippers.go:580]     Audit-Id: 0a4b5378-dfba-4947-b468-629203127bee
	I0916 10:41:01.259148   47238 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd
ate","apiVersion":"v1","time":"2024-09-16T10:40:35Z","fieldsType":"Fiel [truncated 5025 chars]
	I0916 10:41:01.259402   47238 pod_ready.go:93] pod "coredns-7c65d6cfc9-59qm7" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:01.259415   47238 pod_ready.go:82] duration metric: took 4.598033ms for pod "coredns-7c65d6cfc9-59qm7" in "kube-system" namespace to be "Ready" ...
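
[Editor's note: the GET-and-inspect cycle above is minikube's pod_ready helper polling the PodReady condition over the API. A minimal client-go sketch of the same check follows; it is standalone illustration, not minikube's actual code, and the kubeconfig path and pod name are placeholders taken from the log.]

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True,
    // mirroring the GET-and-inspect loop visible in the log above.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // Placeholder kubeconfig; the log's client talks to https://192.168.49.2:8441.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ready, err := podReady(context.Background(), cs, "kube-system", "coredns-7c65d6cfc9-59qm7")
        fmt.Println(ready, err)
    }
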
	I0916 10:41:01.259424   47238 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.259474   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/etcd-functional-016570
	I0916 10:41:01.259481   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.259488   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.259493   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.261210   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.261227   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.261234   47238 round_trippers.go:580]     Audit-Id: 1611216d-2481-42cf-9752-1a0d294e5c15
	I0916 10:41:01.261239   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.261244   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.261248   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.261253   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.261258   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.261416   47238 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-016570","namespace":"kube-system","uid":"54625714-0265-4ecf-a4d3-b4ff173d81e0","resourceVersion":"358","creationTimestamp":"2024-09-16T10:40:38Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"9ff8ce834d4b88cb05c2ce6dadcabd95","kubernetes.io/config.mirror":"9ff8ce834d4b88cb05c2ce6dadcabd95","kubernetes.io/config.seen":"2024-09-16T10:40:37.769856809Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6445 chars]
	I0916 10:41:01.261747   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:01.261757   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.261764   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.261768   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.263323   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.263337   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.263344   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.263349   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.263355   47238 round_trippers.go:580]     Audit-Id: 5fa526da-e662-4c94-924e-0a78af0636c3
	I0916 10:41:01.263360   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.263364   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.263370   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.263516   47238 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd
ate","apiVersion":"v1","time":"2024-09-16T10:40:35Z","fieldsType":"Fiel [truncated 5025 chars]
	I0916 10:41:01.263851   47238 pod_ready.go:93] pod "etcd-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:01.263870   47238 pod_ready.go:82] duration metric: took 4.439077ms for pod "etcd-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.263890   47238 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.263943   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-016570
	I0916 10:41:01.263950   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.263958   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.263966   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.265569   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.265583   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.265595   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.265598   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.265601   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.265604   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.265609   47238 round_trippers.go:580]     Audit-Id: d527ce4b-44b5-4656-b997-2b47884049f0
	I0916 10:41:01.265615   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.265788   47238 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-016570","namespace":"kube-system","uid":"03b56925-37e8-4f4c-947d-8798a9b0b1e8","resourceVersion":"400","creationTimestamp":"2024-09-16T10:40:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8441","kubernetes.io/config.hash":"5333b7f22b4ca6fa3369f64c875d053e","kubernetes.io/config.mirror":"5333b7f22b4ca6fa3369f64c875d053e","kubernetes.io/config.seen":"2024-09-16T10:40:32.389487986Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 8521 chars]
	I0916 10:41:01.266213   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:01.266225   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.266232   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.266236   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.267912   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.267931   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.267940   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.267945   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.267949   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.267954   47238 round_trippers.go:580]     Audit-Id: 2ebfb541-85ed-47ba-b361-34f4f7a41c6d
	I0916 10:41:01.267959   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.267965   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.268150   47238 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd
ate","apiVersion":"v1","time":"2024-09-16T10:40:35Z","fieldsType":"Fiel [truncated 5025 chars]
	I0916 10:41:01.268537   47238 pod_ready.go:93] pod "kube-apiserver-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:01.268558   47238 pod_ready.go:82] duration metric: took 4.657492ms for pod "kube-apiserver-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.268574   47238 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.268648   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-016570
	I0916 10:41:01.268665   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.268673   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.268681   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.270284   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.270300   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.270305   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.270310   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.270313   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.270316   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.270321   47238 round_trippers.go:580]     Audit-Id: b341d17e-adf8-4c4e-947e-9221d021c5d2
	I0916 10:41:01.270326   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.270514   47238 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-016570","namespace":"kube-system","uid":"ab12e143-7f68-4f92-b30d-82299e1bf5a0","resourceVersion":"403","creationTimestamp":"2024-09-16T10:40:38Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"05bfea671b4b973ad25665da415eb7d0","kubernetes.io/config.mirror":"05bfea671b4b973ad25665da415eb7d0","kubernetes.io/config.seen":"2024-09-16T10:40:37.769863952Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 8096 chars]
	I0916 10:41:01.271079   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:01.271102   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.271113   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.271122   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.272993   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.273008   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.273014   47238 round_trippers.go:580]     Audit-Id: c6c54786-65a9-405d-8a97-8f3f44e34a44
	I0916 10:41:01.273022   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.273118   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.273143   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.273155   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.273161   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.273300   47238 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd
ate","apiVersion":"v1","time":"2024-09-16T10:40:35Z","fieldsType":"Fiel [truncated 5025 chars]
	I0916 10:41:01.273655   47238 pod_ready.go:93] pod "kube-controller-manager-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:01.273676   47238 pod_ready.go:82] duration metric: took 5.090622ms for pod "kube-controller-manager-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.273691   47238 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w8qkq" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.282800   47238 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:41:01.288620   47238 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:41:01.440913   47238 request.go:632] Waited for 167.153595ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-w8qkq
	I0916 10:41:01.440995   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-proxy-w8qkq
	I0916 10:41:01.441004   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.441014   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.441025   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.443074   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:01.443099   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.443109   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.443116   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.443121   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.443126   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.443132   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.443137   47238 round_trippers.go:580]     Audit-Id: 851c8443-a6c3-4499-8fc2-314db0590a15
	I0916 10:41:01.443297   47238 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w8qkq","generateName":"kube-proxy-","namespace":"kube-system","uid":"b4a00283-1d69-49c4-8c60-264ef3fd7aca","resourceVersion":"384","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"f0cb66a7-d42d-4412-b093-c4474ecbce20","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f0cb66a7-d42d-4412-b093-c4474ecbce20\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6177 chars]
	I0916 10:41:01.595035   47238 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0916 10:41:01.608803   47238 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0916 10:41:01.624127   47238 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0916 10:41:01.638979   47238 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0916 10:41:01.641143   47238 request.go:632] Waited for 197.253294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:01.641200   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:01.641210   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.641238   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.641249   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.643109   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.643131   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.643141   47238 round_trippers.go:580]     Audit-Id: b1604cff-69e6-43ca-adbe-1d28c3526947
	I0916 10:41:01.643146   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.643152   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.643159   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.643163   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.643167   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.643268   47238 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd
ate","apiVersion":"v1","time":"2024-09-16T10:40:35Z","fieldsType":"Fiel [truncated 5025 chars]
	I0916 10:41:01.643567   47238 pod_ready.go:93] pod "kube-proxy-w8qkq" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:01.643594   47238 pod_ready.go:82] duration metric: took 369.893905ms for pod "kube-proxy-w8qkq" in "kube-system" namespace to be "Ready" ...
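
[Editor's note: the "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's default token-bucket rate limiter (QPS 5, burst 10), not from the API server. A sketch of raising those limits on the rest.Config, reusing the setup from the sketch earlier in this log:]

    // cfg is the *rest.Config built via clientcmd in the earlier sketch.
    cfg.QPS = 50    // client-go default: 5 requests/second
    cfg.Burst = 100 // client-go default burst: 10
    cs := kubernetes.NewForConfigOrDie(cfg)
    _ = cs
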
	I0916 10:41:01.643607   47238 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:01.707400   47238 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0916 10:41:01.782135   47238 command_runner.go:130] > pod/storage-provisioner configured
	I0916 10:41:01.785562   47238 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0916 10:41:01.785694   47238 round_trippers.go:463] GET https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses
	I0916 10:41:01.785704   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.785711   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.785715   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.787555   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:01.787572   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.787579   47238 round_trippers.go:580]     Audit-Id: e53fc3ce-0703-47b1-a0e8-0cfee38a1251
	I0916 10:41:01.787582   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.787586   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.787589   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.787592   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.787596   47238 round_trippers.go:580]     Content-Length: 1273
	I0916 10:41:01.787599   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.787637   47238 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"standard","uid":"3d18a656-2072-4784-925d-266b7e1a642f","resourceVersion":"348","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0916 10:41:01.788093   47238 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3d18a656-2072-4784-925d-266b7e1a642f","resourceVersion":"348","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:41:01.788147   47238 round_trippers.go:463] PUT https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:41:01.788160   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.788167   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.788175   47238 round_trippers.go:473]     Content-Type: application/json
	I0916 10:41:01.788179   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.790450   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:01.790471   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.790481   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.790486   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.790490   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.790495   47238 round_trippers.go:580]     Content-Length: 1220
	I0916 10:41:01.790500   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.790510   47238 round_trippers.go:580]     Audit-Id: 7e43b097-e60d-4d17-b590-402ea1f59308
	I0916 10:41:01.790515   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.790597   47238 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"3d18a656-2072-4784-925d-266b7e1a642f","resourceVersion":"348","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:41:01.793395   47238 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 10:41:01.794759   47238 addons.go:510] duration metric: took 673.747105ms for enable addons: enabled=[storage-provisioner default-storageclass]
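
[Editor's note: the storageclass GET/PUT pair above is a read-modify-write update: fetch the "standard" StorageClass with its current resourceVersion, then PUT it back so the server can reject a stale write. A hedged client-go sketch of that pattern, reusing cs/ctx and the metav1 import from the first sketch; the annotation key and value are taken from the log body:]

    func ensureDefaultClass(ctx context.Context, cs *kubernetes.Clientset) error {
        sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        // Update issues the PUT seen above; a stale resourceVersion yields 409 Conflict.
        _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
        return err
    }
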
	I0916 10:41:01.840604   47238 request.go:632] Waited for 196.895921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-016570
	I0916 10:41:01.840704   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-016570
	I0916 10:41:01.840714   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:01.840721   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:01.840725   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:01.842770   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:01.842792   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:01.842798   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:01.842803   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:01 GMT
	I0916 10:41:01.842807   47238 round_trippers.go:580]     Audit-Id: a7467f04-dae9-4af3-841d-2898bbf49041
	I0916 10:41:01.842810   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:01.842812   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:01.842817   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:01.842963   47238 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-016570","namespace":"kube-system","uid":"640affb4-aae3-401b-b06b-fd9e07a9b506","resourceVersion":"394","creationTimestamp":"2024-09-16T10:40:38Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5c4ebe83a62e176d48c858392b494ba5","kubernetes.io/config.mirror":"5c4ebe83a62e176d48c858392b494ba5","kubernetes.io/config.seen":"2024-09-16T10:40:37.769865268Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 4978 chars]
	I0916 10:41:02.040637   47238 request.go:632] Waited for 197.286296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:02.040710   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes/functional-016570
	I0916 10:41:02.040718   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:02.040725   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:02.040731   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:02.042599   47238 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:41:02.042619   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:02.042629   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:02.042634   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:02 GMT
	I0916 10:41:02.042639   47238 round_trippers.go:580]     Audit-Id: f5eee0a1-fb0a-47b9-a62c-cf97733db188
	I0916 10:41:02.042643   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:02.042654   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:02.042660   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:02.042834   47238 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Upd
ate","apiVersion":"v1","time":"2024-09-16T10:40:35Z","fieldsType":"Fiel [truncated 5025 chars]
	I0916 10:41:02.043135   47238 pod_ready.go:93] pod "kube-scheduler-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:02.043150   47238 pod_ready.go:82] duration metric: took 399.536396ms for pod "kube-scheduler-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:02.043161   47238 pod_ready.go:39] duration metric: took 794.989376ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:41:02.043177   47238 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:41:02.043220   47238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:02.053563   47238 command_runner.go:130] > 1522
	I0916 10:41:02.054426   47238 api_server.go:72] duration metric: took 933.446089ms to wait for apiserver process to appear ...
	I0916 10:41:02.054446   47238 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:41:02.054468   47238 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:41:02.058667   47238 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0916 10:41:02.058770   47238 round_trippers.go:463] GET https://192.168.49.2:8441/version
	I0916 10:41:02.058784   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:02.058795   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:02.058802   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:02.059555   47238 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:41:02.059573   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:02.059581   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:02 GMT
	I0916 10:41:02.059587   47238 round_trippers.go:580]     Audit-Id: a04ccc0b-8020-427f-9226-d12e984081a1
	I0916 10:41:02.059591   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:02.059595   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:02.059600   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:02.059604   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:02.059609   47238 round_trippers.go:580]     Content-Length: 263
	I0916 10:41:02.059636   47238 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 10:41:02.059838   47238 api_server.go:141] control plane version: v1.31.1
	I0916 10:41:02.059877   47238 api_server.go:131] duration metric: took 5.42283ms to wait for apiserver health ...
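
[Editor's note: the two probes above, /healthz and /version, can be issued with the same clientset: the raw REST client handles unversioned paths, and the discovery client returns the structured version document. A sketch reusing cs/ctx from the first sketch:]

    func probeAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
        // /healthz is an unversioned path, so it goes through the raw REST client.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        fmt.Println(string(body)) // "ok" on a healthy apiserver, as in the log
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        fmt.Println(info.GitVersion) // "v1.31.1" in the response body above
        return nil
    }
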
	I0916 10:41:02.059889   47238 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:41:02.241301   47238 request.go:632] Waited for 181.324964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:41:02.241384   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:41:02.241395   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:02.241403   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:02.241410   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:02.244245   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:02.244288   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:02.244296   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:02.244319   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:02 GMT
	I0916 10:41:02.244327   47238 round_trippers.go:580]     Audit-Id: 50c2d844-9140-4374-90b4-d0dbb29266f5
	I0916 10:41:02.244332   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:02.244341   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:02.244346   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:02.244973   47238 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-59qm7","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"370e7aff-70ab-43f7-9770-098c21fd013d","resourceVersion":"412","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"053910d0-0f80-4b44-85c9-939b3882c87c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"053910d0-0f80-4b44-85c9-939b3882c87c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58828 chars]
	I0916 10:41:02.247096   47238 system_pods.go:59] 8 kube-system pods found
	I0916 10:41:02.247125   47238 system_pods.go:61] "coredns-7c65d6cfc9-59qm7" [370e7aff-70ab-43f7-9770-098c21fd013d] Running
	I0916 10:41:02.247132   47238 system_pods.go:61] "etcd-functional-016570" [54625714-0265-4ecf-a4d3-b4ff173d81e0] Running
	I0916 10:41:02.247138   47238 system_pods.go:61] "kindnet-5qjpd" [8ee89403-0943-480c-9f48-4b25a0198f6d] Running
	I0916 10:41:02.247144   47238 system_pods.go:61] "kube-apiserver-functional-016570" [03b56925-37e8-4f4c-947d-8798a9b0b1e8] Running
	I0916 10:41:02.247151   47238 system_pods.go:61] "kube-controller-manager-functional-016570" [ab12e143-7f68-4f92-b30d-82299e1bf5a0] Running
	I0916 10:41:02.247159   47238 system_pods.go:61] "kube-proxy-w8qkq" [b4a00283-1d69-49c4-8c60-264ef3fd7aca] Running
	I0916 10:41:02.247165   47238 system_pods.go:61] "kube-scheduler-functional-016570" [640affb4-aae3-401b-b06b-fd9e07a9b506] Running
	I0916 10:41:02.247170   47238 system_pods.go:61] "storage-provisioner" [9924f10d-5beb-43b1-9782-44644a015b56] Running
	I0916 10:41:02.247179   47238 system_pods.go:74] duration metric: took 187.28208ms to wait for pod list to return data ...
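
[Editor's note: the "8 kube-system pods found" summary above comes from a single namespaced List. A minimal sketch, reusing cs/ctx from the first sketch:]

    func listKubeSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, p := range pods.Items {
            // Matches the `"name" [uid] Phase` lines printed above.
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
        }
        return nil
    }
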
	I0916 10:41:02.247192   47238 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:41:02.440577   47238 request.go:632] Waited for 193.257431ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/namespaces/default/serviceaccounts
	I0916 10:41:02.440628   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/default/serviceaccounts
	I0916 10:41:02.440633   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:02.440640   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:02.440643   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:02.442723   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:02.442742   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:02.442751   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:02.442757   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:02.442761   47238 round_trippers.go:580]     Content-Length: 261
	I0916 10:41:02.442765   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:02 GMT
	I0916 10:41:02.442769   47238 round_trippers.go:580]     Audit-Id: f7882051-0b1c-4c4b-aea3-b2fdb85ddfd2
	I0916 10:41:02.442772   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:02.442777   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:02.442800   47238 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"fc2d74f0-4f11-4ddc-8fef-1bf15c992759","resourceVersion":"295","creationTimestamp":"2024-09-16T10:40:42Z"}}]}
	I0916 10:41:02.443025   47238 default_sa.go:45] found service account: "default"
	I0916 10:41:02.443042   47238 default_sa.go:55] duration metric: took 195.841126ms for default service account to be created ...
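
[Editor's note: the default-ServiceAccount probe above is another List, this time in the "default" namespace. A short sketch under the same assumptions:]

    func defaultSAExists(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
        sas, err := cs.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        for _, sa := range sas.Items {
            if sa.Name == "default" {
                return true, nil
            }
        }
        return false, nil
    }
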
	I0916 10:41:02.443052   47238 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:41:02.641511   47238 request.go:632] Waited for 198.386372ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:41:02.641588   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods
	I0916 10:41:02.641595   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:02.641606   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:02.641615   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:02.644165   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:02.644187   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:02.644196   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:02.644201   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:02.644207   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:02.644211   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:02 GMT
	I0916 10:41:02.644215   47238 round_trippers.go:580]     Audit-Id: 71bb75ab-57f2-47ad-8f1d-9bf3da582d3b
	I0916 10:41:02.644218   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:02.644981   47238 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"422"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-59qm7","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"370e7aff-70ab-43f7-9770-098c21fd013d","resourceVersion":"412","creationTimestamp":"2024-09-16T10:40:43Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"053910d0-0f80-4b44-85c9-939b3882c87c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:40:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"053910d0-0f80-4b44-85c9-939b3882c87c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58828 chars]
	I0916 10:41:02.646926   47238 system_pods.go:86] 8 kube-system pods found
	I0916 10:41:02.646951   47238 system_pods.go:89] "coredns-7c65d6cfc9-59qm7" [370e7aff-70ab-43f7-9770-098c21fd013d] Running
	I0916 10:41:02.646956   47238 system_pods.go:89] "etcd-functional-016570" [54625714-0265-4ecf-a4d3-b4ff173d81e0] Running
	I0916 10:41:02.646959   47238 system_pods.go:89] "kindnet-5qjpd" [8ee89403-0943-480c-9f48-4b25a0198f6d] Running
	I0916 10:41:02.646962   47238 system_pods.go:89] "kube-apiserver-functional-016570" [03b56925-37e8-4f4c-947d-8798a9b0b1e8] Running
	I0916 10:41:02.646966   47238 system_pods.go:89] "kube-controller-manager-functional-016570" [ab12e143-7f68-4f92-b30d-82299e1bf5a0] Running
	I0916 10:41:02.646969   47238 system_pods.go:89] "kube-proxy-w8qkq" [b4a00283-1d69-49c4-8c60-264ef3fd7aca] Running
	I0916 10:41:02.646972   47238 system_pods.go:89] "kube-scheduler-functional-016570" [640affb4-aae3-401b-b06b-fd9e07a9b506] Running
	I0916 10:41:02.646975   47238 system_pods.go:89] "storage-provisioner" [9924f10d-5beb-43b1-9782-44644a015b56] Running
	I0916 10:41:02.646981   47238 system_pods.go:126] duration metric: took 203.923632ms to wait for k8s-apps to be running ...
	I0916 10:41:02.646988   47238 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:41:02.647038   47238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:41:02.657958   47238 system_svc.go:56] duration metric: took 10.957701ms WaitForService to wait for kubelet
	I0916 10:41:02.657990   47238 kubeadm.go:582] duration metric: took 1.537018145s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
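
[Editor's note: the kubelet check above shells out to systemctl on the node (over SSH via minikube's ssh_runner). A local os/exec stand-in for the same probe; the sudo/SSH plumbing is deliberately elided:]

    // import "os/exec"
    func kubeletActive() bool {
        // `systemctl is-active --quiet <unit>` exits 0 iff the unit is active.
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }
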
	I0916 10:41:02.658006   47238 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:41:02.841406   47238 request.go:632] Waited for 183.309623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8441/api/v1/nodes
	I0916 10:41:02.841457   47238 round_trippers.go:463] GET https://192.168.49.2:8441/api/v1/nodes
	I0916 10:41:02.841463   47238 round_trippers.go:469] Request Headers:
	I0916 10:41:02.841470   47238 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:41:02.841474   47238 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:41:02.843828   47238 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:41:02.843847   47238 round_trippers.go:577] Response Headers:
	I0916 10:41:02.843856   47238 round_trippers.go:580]     Content-Type: application/json
	I0916 10:41:02.843862   47238 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 3f4128a4-6b8f-4b80-85ff-5e6656a4a617
	I0916 10:41:02.843868   47238 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b91b6396-b705-421e-aae3-76af1da037ed
	I0916 10:41:02.843872   47238 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:41:02 GMT
	I0916 10:41:02.843886   47238 round_trippers.go:580]     Audit-Id: 3b7b97f9-d0fa-455b-b176-2a3a870192bb
	I0916 10:41:02.843894   47238 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:41:02.844020   47238 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"423"},"items":[{"metadata":{"name":"functional-016570","uid":"43403832-3ac5-4a10-b0e5-35f585b23f6d","resourceVersion":"397","creationTimestamp":"2024-09-16T10:40:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-016570","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"functional-016570","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_40_38_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"ma
nagedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v [truncated 5078 chars]
	I0916 10:41:02.844354   47238 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:41:02.844379   47238 node_conditions.go:123] node cpu capacity is 8
	I0916 10:41:02.844388   47238 node_conditions.go:105] duration metric: took 186.378223ms to run NodePressure ...
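
[Editor's note: the NodePressure step reads capacity straight off the Node objects; the log above shows 304681132Ki of ephemeral storage and 8 CPUs. A sketch of the same read, reusing cs/ctx and the corev1 import from the first sketch:]

    func nodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
        }
        return nil
    }
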
	I0916 10:41:02.844399   47238 start.go:241] waiting for startup goroutines ...
	I0916 10:41:02.844408   47238 start.go:246] waiting for cluster config update ...
	I0916 10:41:02.844423   47238 start.go:255] writing updated cluster config ...
	I0916 10:41:02.844666   47238 ssh_runner.go:195] Run: rm -f paused
	I0916 10:41:02.850472   47238 out.go:177] * Done! kubectl is now configured to use "functional-016570" cluster and "default" namespace by default
	E0916 10:41:02.851928   47238 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fd0c81e7a39a2       c69fa2e9cbf5f       10 seconds ago      Running             coredns                   0                   3d9a434f8b6e5       coredns-7c65d6cfc9-59qm7
	03ddfa3f2cafc       6e38f40d628db       20 seconds ago      Running             storage-provisioner       0                   b81ffde02718d       storage-provisioner
	bf96dac81b725       12968670680f4       21 seconds ago      Running             kindnet-cni               0                   c7f56f796b013       kindnet-5qjpd
	80095e847084f       60c005f310ff3       21 seconds ago      Running             kube-proxy                0                   f4ed79f8dffeb       kube-proxy-w8qkq
	0062114d9f75f       175ffd71cce3d       32 seconds ago      Running             kube-controller-manager   0                   8b5d374851050       kube-controller-manager-functional-016570
	0906c5e415b9c       9aa1fad941575       32 seconds ago      Running             kube-scheduler            0                   caa2007696d1b       kube-scheduler-functional-016570
	c1a0361849f33       6bab7719df100       32 seconds ago      Running             kube-apiserver            0                   5b66d77e8e334       kube-apiserver-functional-016570
	b4905826c508e       2e96e5913fc06       32 seconds ago      Running             etcd                      0                   2cdebcb8c7807       etcd-functional-016570
	
	
	==> containerd <==
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.198876249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.198891975Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.198961660Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.198985894Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199001899Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199018793Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199032443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199050407Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199066137Z" level=info msg="NRI interface is disabled by configuration."
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199082239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199478661Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:true IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath:/etc/containerd/certs.d Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress: StreamServerPort:10010 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.10 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:true EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.199966451Z" level=info msg="Connect containerd service"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.200125138Z" level=info msg="using legacy CRI server"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.200198751Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.200469956Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.201275863Z" level=info msg="Start subscribing containerd event"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.201345401Z" level=info msg="Start recovering state"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.201380943Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.201438744Z" level=info msg=serving... address=/run/containerd/containerd.sock
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.259828531Z" level=info msg="Start event monitor"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.259866517Z" level=info msg="Start snapshots syncer"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.259881223Z" level=info msg="Start cni network conf syncer for default"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.259891423Z" level=info msg="Start streaming server"
	Sep 16 10:41:00 functional-016570 containerd[2685]: time="2024-09-16T10:41:00.259985670Z" level=info msg="containerd successfully booted in 0.197059s"
	Sep 16 10:41:00 functional-016570 systemd[1]: Started containerd container runtime.
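
Note: the long "Start cri plugin with config {...}" entry is containerd echoing its effective CRI configuration at startup. A minimal sketch for inspecting that configuration directly, assuming shell access to the node through minikube ssh and that the containerd 1.7.x CLI is on the node's PATH:

    # Print the merged containerd configuration on the node (assumes `minikube ssh` can run a command)
    minikube -p functional-016570 ssh "sudo containerd config dump | head -n 40"
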
	
	
	==> coredns [fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44994 - 49719 "HINFO IN 5811560446017322614.7472127089541594346. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008166038s
	
	
	==> describe nodes <==
	Name:               functional-016570
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-016570
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-016570
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_40_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:40:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-016570
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:40:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:40:48 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:40:48 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:40:48 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:40:48 +0000   Mon, 16 Sep 2024 10:40:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-016570
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e0bbc72eb56431496705823b26dcd4d
	  System UUID:                9e1cbc84-d044-4524-b143-ca93a6dc5aa0
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-59qm7                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     22s
	  kube-system                 etcd-functional-016570                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-5qjpd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-functional-016570             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-functional-016570    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-w8qkq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-functional-016570             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 21s   kube-proxy       
	  Normal   Starting                 28s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 28s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  28s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  27s   kubelet          Node functional-016570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27s   kubelet          Node functional-016570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27s   kubelet          Node functional-016570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           23s   node-controller  Node functional-016570 event: Registered Node functional-016570 in Controller
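
For reference, the node report above is standard kubectl describe output; a sketch for regenerating it against this cluster, assuming a working kubectl binary (the runs recorded below fail before reaching the API server):

    # Regenerate the node report above (requires a functioning kubectl; see the exec format error below)
    kubectl --context functional-016570 describe node functional-016570
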
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25] <==
	{"level":"info","ts":"2024-09-16T10:40:33.262701Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T10:40:33.262867Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:40:33.262937Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:40:33.263038Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:40:33.263071Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:40:33.951012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T10:40:33.951071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T10:40:33.951103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-16T10:40:33.951131Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.951143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.951156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.951169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.952133Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-016570 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:40:33.952149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952174Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.952458Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952538Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952886Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953028Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953056Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953277Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.954142Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.955433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:40:33.956074Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
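
The etcd log shows a clean single-member bootstrap: aec36adc501070cc pre-votes, votes for itself, and becomes leader at term 2, then serves client traffic with the TLS material under /var/lib/minikube/certs/etcd. A health probe against that endpoint, sketched with the same certificate paths the server logged (assumes it is run inside the minikube node, where those paths exist):

    # Probe etcd health using the serving certs from the log above
    ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint health
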
	
	
	==> kernel <==
	 10:41:05 up 23 min,  0 users,  load average: 1.91, 0.92, 0.56
	Linux functional-016570 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75] <==
	I0916 10:40:44.322865       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:40:44.323089       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:40:44.323250       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:40:44.323270       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:40:44.323292       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:40:44.644895       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:40:44.644910       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:40:44.644915       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:40:45.019853       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:40:45.019909       1 metrics.go:61] Registering metrics
	I0916 10:40:45.019969       1 controller.go:374] Syncing nftables rules
	I0916 10:40:54.644919       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:40:54.644955       1 main.go:299] handling current node
	I0916 10:41:04.651830       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:04.651862       1 main.go:299] handling current node
	
	
	==> kube-apiserver [c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171] <==
	I0916 10:40:35.620322       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:40:35.621313       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:40:35.621352       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:40:35.621382       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:40:35.621425       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:40:35.621453       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:40:35.621585       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:40:35.622013       1 controller.go:615] quota admission added evaluator for: namespaces
	E0916 10:40:35.627641       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0916 10:40:35.831014       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:40:36.457455       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 10:40:36.461248       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 10:40:36.461271       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:40:37.010774       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:40:37.051271       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:40:37.130025       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 10:40:37.137482       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0916 10:40:37.138581       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:40:37.143484       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:40:37.469770       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:40:37.933347       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:40:37.944494       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:40:37.955455       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:40:43.026924       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 10:40:43.229838       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86] <==
	I0916 10:40:42.326198       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:40:42.346002       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-016570"
	I0916 10:40:42.370119       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 10:40:42.376598       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:40:42.409847       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 10:40:42.419622       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 10:40:42.420786       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 10:40:42.842030       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:40:42.920000       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:40:42.920038       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:40:43.141796       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-016570"
	I0916 10:40:43.434546       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="197.686254ms"
	I0916 10:40:43.442835       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.220985ms"
	I0916 10:40:43.442928       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="55.644µs"
	I0916 10:40:43.525949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.647µs"
	I0916 10:40:43.923991       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="79.808124ms"
	I0916 10:40:43.931522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.483524ms"
	I0916 10:40:43.931666       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="96.772µs"
	I0916 10:40:44.880006       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="75.649µs"
	I0916 10:40:44.885322       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.511µs"
	I0916 10:40:44.888055       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.771µs"
	I0916 10:40:48.140209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-016570"
	I0916 10:40:55.879590       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.161µs"
	I0916 10:40:55.896034       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.235562ms"
	I0916 10:40:55.896140       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.028µs"
	
	
	==> kube-proxy [80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f] <==
	I0916 10:40:44.050300       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:40:44.179052       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:40:44.179113       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:40:44.196854       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:40:44.196926       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:40:44.198841       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:40:44.199284       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:40:44.199313       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:40:44.200417       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:40:44.200434       1 config.go:199] "Starting service config controller"
	I0916 10:40:44.200460       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:40:44.200461       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:40:44.200579       1 config.go:328] "Starting node config controller"
	I0916 10:40:44.200592       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:40:44.300956       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:40:44.300944       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:40:44.300945       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee] <==
	E0916 10:40:35.625697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:40:35.625947       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.625923       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:35.625982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:40:35.626272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:40:35.627185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.485194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:40:36.485283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.534754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.534802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.553649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:40:36.553693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.739511       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:40:36.739572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.763525       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:40:36.763583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.768041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:40:36.768094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.782504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.782564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.919306       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:40:36.919357       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:40:39.247701       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
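
The scheduler's burst of "forbidden" warnings is the usual bootstrap race: its informers start listing before the RBAC bindings for system:kube-scheduler exist, and the final "Caches are synced" line confirms it recovered. A sketch for verifying one of those permissions after startup, assuming a working kubectl:

    # Check a permission that was transiently denied in the log above
    kubectl --context functional-016570 auth can-i list services --as=system:kube-scheduler
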
	
	
	==> kubelet <==
	Sep 16 10:40:43 functional-016570 kubelet[1610]: I0916 10:40:43.430896    1610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drftz\" (UniqueName: \"kubernetes.io/projected/370e7aff-70ab-43f7-9770-098c21fd013d-kube-api-access-drftz\") pod \"coredns-7c65d6cfc9-59qm7\" (UID: \"370e7aff-70ab-43f7-9770-098c21fd013d\") " pod="kube-system/coredns-7c65d6cfc9-59qm7"
	Sep 16 10:40:43 functional-016570 kubelet[1610]: I0916 10:40:43.430929    1610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9911055-0a8b-4dea-9377-95c0203b4a4f-config-volume\") pod \"coredns-7c65d6cfc9-rwqzc\" (UID: \"c9911055-0a8b-4dea-9377-95c0203b4a4f\") " pod="kube-system/coredns-7c65d6cfc9-rwqzc"
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.830271    1610 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\": failed to find network info for sandbox \"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\""
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.830382    1610 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\": failed to find network info for sandbox \"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\"" pod="kube-system/coredns-7c65d6cfc9-rwqzc"
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.830611    1610 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\": failed to find network info for sandbox \"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\"" pod="kube-system/coredns-7c65d6cfc9-rwqzc"
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.830694    1610 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-rwqzc_kube-system(c9911055-0a8b-4dea-9377-95c0203b4a4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-rwqzc_kube-system(c9911055-0a8b-4dea-9377-95c0203b4a4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\\\": failed to find network info for sandbox \\\"1243ca1996d54c6e5a27f6b3c7af87136f4be90bf44edcc3ff7c11a95e79cecc\\\"\"" pod="kube-system/coredns-7c65d6cfc9-rwqzc" podUID="c9911055-0a8b-4dea-9377-95c0203b4a4f"
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.841916    1610 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\": failed to find network info for sandbox \"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\""
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.841966    1610 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\": failed to find network info for sandbox \"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\"" pod="kube-system/coredns-7c65d6cfc9-59qm7"
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.841987    1610 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\": failed to find network info for sandbox \"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\"" pod="kube-system/coredns-7c65d6cfc9-59qm7"
	Sep 16 10:40:43 functional-016570 kubelet[1610]: E0916 10:40:43.842036    1610 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-59qm7_kube-system(370e7aff-70ab-43f7-9770-098c21fd013d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-59qm7_kube-system(370e7aff-70ab-43f7-9770-098c21fd013d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\\\": failed to find network info for sandbox \\\"42993f922a85423d8507c1624fbc8d585acdbf71f24477870448fb925be6dec9\\\"\"" pod="kube-system/coredns-7c65d6cfc9-59qm7" podUID="370e7aff-70ab-43f7-9770-098c21fd013d"
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.036355    1610 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9911055-0a8b-4dea-9377-95c0203b4a4f-config-volume\") pod \"c9911055-0a8b-4dea-9377-95c0203b4a4f\" (UID: \"c9911055-0a8b-4dea-9377-95c0203b4a4f\") "
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.036413    1610 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8c54\" (UniqueName: \"kubernetes.io/projected/c9911055-0a8b-4dea-9377-95c0203b4a4f-kube-api-access-r8c54\") pod \"c9911055-0a8b-4dea-9377-95c0203b4a4f\" (UID: \"c9911055-0a8b-4dea-9377-95c0203b4a4f\") "
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.036813    1610 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9911055-0a8b-4dea-9377-95c0203b4a4f-config-volume" (OuterVolumeSpecName: "config-volume") pod "c9911055-0a8b-4dea-9377-95c0203b4a4f" (UID: "c9911055-0a8b-4dea-9377-95c0203b4a4f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.039236    1610 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9911055-0a8b-4dea-9377-95c0203b4a4f-kube-api-access-r8c54" (OuterVolumeSpecName: "kube-api-access-r8c54") pod "c9911055-0a8b-4dea-9377-95c0203b4a4f" (UID: "c9911055-0a8b-4dea-9377-95c0203b4a4f"). InnerVolumeSpecName "kube-api-access-r8c54". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.137201    1610 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9911055-0a8b-4dea-9377-95c0203b4a4f-config-volume\") on node \"functional-016570\" DevicePath \"\""
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.137238    1610 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-r8c54\" (UniqueName: \"kubernetes.io/projected/c9911055-0a8b-4dea-9377-95c0203b4a4f-kube-api-access-r8c54\") on node \"functional-016570\" DevicePath \"\""
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.338572    1610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvpkq\" (UniqueName: \"kubernetes.io/projected/9924f10d-5beb-43b1-9782-44644a015b56-kube-api-access-bvpkq\") pod \"storage-provisioner\" (UID: \"9924f10d-5beb-43b1-9782-44644a015b56\") " pod="kube-system/storage-provisioner"
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.338623    1610 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9924f10d-5beb-43b1-9782-44644a015b56-tmp\") pod \"storage-provisioner\" (UID: \"9924f10d-5beb-43b1-9782-44644a015b56\") " pod="kube-system/storage-provisioner"
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.851111    1610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=0.851088151 podStartE2EDuration="851.088151ms" podCreationTimestamp="2024-09-16 10:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:40:44.850855071 +0000 UTC m=+7.148175382" watchObservedRunningTime="2024-09-16 10:40:44.851088151 +0000 UTC m=+7.148408464"
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.859288    1610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w8qkq" podStartSLOduration=1.859265894 podStartE2EDuration="1.859265894s" podCreationTimestamp="2024-09-16 10:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:40:44.858994711 +0000 UTC m=+7.156315022" watchObservedRunningTime="2024-09-16 10:40:44.859265894 +0000 UTC m=+7.156586204"
	Sep 16 10:40:44 functional-016570 kubelet[1610]: I0916 10:40:44.880239    1610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-5qjpd" podStartSLOduration=1.880217383 podStartE2EDuration="1.880217383s" podCreationTimestamp="2024-09-16 10:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:40:44.869563495 +0000 UTC m=+7.166883786" watchObservedRunningTime="2024-09-16 10:40:44.880217383 +0000 UTC m=+7.177537693"
	Sep 16 10:40:45 functional-016570 kubelet[1610]: I0916 10:40:45.780903    1610 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9911055-0a8b-4dea-9377-95c0203b4a4f" path="/var/lib/kubelet/pods/c9911055-0a8b-4dea-9377-95c0203b4a4f/volumes"
	Sep 16 10:40:48 functional-016570 kubelet[1610]: I0916 10:40:48.117509    1610 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:40:48 functional-016570 kubelet[1610]: I0916 10:40:48.118369    1610 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:40:55 functional-016570 kubelet[1610]: I0916 10:40:55.889287    1610 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-59qm7" podStartSLOduration=12.889266902 podStartE2EDuration="12.889266902s" podCreationTimestamp="2024-09-16 10:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:40:55.878958676 +0000 UTC m=+18.176278988" watchObservedRunningTime="2024-09-16 10:40:55.889266902 +0000 UTC m=+18.186587203"
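
The early CreatePodSandbox failures ("failed to find network info for sandbox") are the kubelet trying to start the CoreDNS pods before kindnet had written a CNI config; they stop once the PodCIDR is applied at 10:40:48. If such errors persisted, a rough triage sketch from inside the node (assumes crictl is present, as it is in the kicbase image):

    # Verify a CNI config exists and look for sandboxes stuck without a network
    ls /etc/cni/net.d
    sudo crictl pods --state notready
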
	
	
	==> storage-provisioner [03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267] <==
	I0916 10:40:44.592162       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:40:44.598921       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:40:44.598961       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:40:44.605075       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:40:44.605207       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-016570_34b5a6aa-cad3-4b7c-8e2b-f70c513bb4eb!
	I0916 10:40:44.605217       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e3e2c42-8555-41e5-b1cf-7a6ddf78f6d7", APIVersion:"v1", ResourceVersion:"381", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-016570_34b5a6aa-cad3-4b7c-8e2b-f70c513bb4eb became leader
	I0916 10:40:44.706132       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-016570_34b5a6aa-cad3-4b7c-8e2b-f70c513bb4eb!
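
The provisioner takes its leader-election lease through the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event above; the current holder is recorded on that object and can be read back, assuming a working kubectl:

    # Inspect the election object; the leader identity sits in its annotations
    kubectl --context functional-016570 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
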
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016570 -n functional-016570
helpers_test.go:261: (dbg) Run:  kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (465.972µs)
helpers_test.go:263: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/KubectlGetPods (1.74s)
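
"fork/exec ... exec format error" means the kernel refused to execute /usr/local/bin/kubectl at all, which typically indicates a binary built for a different architecture or a truncated download; every kubectl-based step in this run therefore fails identically before ever reaching the cluster. A quick triage sketch on the test host:

    # An exec format error points at the binary, not the cluster
    file /usr/local/bin/kubectl               # expect "ELF 64-bit ... x86-64" on this amd64 host
    uname -m                                  # x86_64, per the kernel section above
    head -c 4 /usr/local/bin/kubectl | xxd    # a valid ELF begins with 7f 45 4c 46
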

TestFunctional/serial/ComponentHealth (1.98s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-016570 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-016570 get po -l tier=control-plane -n kube-system -o=json: fork/exec /usr/local/bin/kubectl: exec format error (589.271µs)
functional_test.go:812: failed to get components. args "kubectl --context functional-016570 get po -l tier=control-plane -n kube-system -o=json": fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-016570
helpers_test.go:235: (dbg) docker inspect functional-016570:

-- stdout --
	[
	    {
	        "Id": "389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b",
	        "Created": "2024-09-16T10:40:22.437729239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44343,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:40:22.549860744Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hosts",
	        "LogPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b-json.log",
	        "Name": "/functional-016570",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-016570:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-016570",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-016570",
	                "Source": "/var/lib/docker/volumes/functional-016570/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-016570",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-016570",
	                "name.minikube.sigs.k8s.io": "functional-016570",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cea48287f6ef85c384c2d0fd7e890db0e03a4817385b96024c8e0a0688fd7962",
	            "SandboxKey": "/var/run/docker/netns/cea48287f6ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-016570": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0f0536305092a99e49a50f7adb5b668b5d3bb1c3c21d82e43be0c155c4e7cfe5",
	                    "EndpointID": "9201dc1ccce436b0c1f5c3cef087a9b08f43da4627ab9f3717ae9bbdfb5de9f3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-016570",
	                        "389fc216adb5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
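
The Ports map in the inspect dump above is how the harness reaches the node: 22/tcp is published on 127.0.0.1:32783, and the test code resolves it with the Go template visible at cli_runner.go:164 further down in these logs. A minimal sketch of the same lookup, assuming only that the docker CLI is on PATH; the hostPort helper name is hypothetical:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort runs `docker container inspect` with the same Go template
	// minikube uses to map a container port (e.g. "22/tcp") to the host
	// port Docker published for it (e.g. "32783" in the run above).
	func hostPort(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostPort("functional-016570", "22/tcp")
		if err != nil {
			panic(err)
		}
		fmt.Println(port) // "32783" for the container inspected above
	}
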
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-016570 -n functional-016570
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 logs -n 25: (1.342660307s)
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-421019 --log_dir                                                  | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /tmp/nospam-421019 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-421019 --log_dir                                                  | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /tmp/nospam-421019 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-421019 --log_dir                                                  | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /tmp/nospam-421019 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-421019 --log_dir                                                  | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /tmp/nospam-421019 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-421019 --log_dir                                                  | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /tmp/nospam-421019 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-421019 --log_dir                                                  | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | /tmp/nospam-421019 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-421019                                                         | nospam-421019     | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	| start   | -p functional-016570                                                     | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:40 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=docker                                               |                   |         |         |                     |                     |
	|         | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start   | -p functional-016570                                                     | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:40 UTC | 16 Sep 24 10:41 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-016570 cache add                                              | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-016570 cache add                                              | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-016570 cache add                                              | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-016570 cache add                                              | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | minikube-local-cache-test:functional-016570                              |                   |         |         |                     |                     |
	| cache   | functional-016570 cache delete                                           | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | minikube-local-cache-test:functional-016570                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	| ssh     | functional-016570 ssh sudo                                               | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-016570                                                        | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | ssh sudo crictl rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh                                                    | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-016570 cache reload                                           | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	| ssh     | functional-016570 ssh                                                    | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-016570 kubectl --                                             | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:41 UTC |
	|         | --context functional-016570                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-016570                                                     | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:42 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:41:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:41:13.755639   50617 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:41:13.755728   50617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:41:13.755731   50617 out.go:358] Setting ErrFile to fd 2...
	I0916 10:41:13.755764   50617 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:41:13.755954   50617 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:41:13.756487   50617 out.go:352] Setting JSON to false
	I0916 10:41:13.757442   50617 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1418,"bootTime":1726481856,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:41:13.757534   50617 start.go:139] virtualization: kvm guest
	I0916 10:41:13.759788   50617 out.go:177] * [functional-016570] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:41:13.761282   50617 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:41:13.761282   50617 notify.go:220] Checking for updates...
	I0916 10:41:13.762986   50617 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:41:13.764358   50617 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:41:13.765773   50617 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:41:13.767132   50617 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:41:13.770735   50617 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:41:13.772851   50617 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:41:13.772943   50617 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:41:13.795644   50617 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:41:13.795710   50617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:41:13.843029   50617 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:65 SystemTime:2024-09-16 10:41:13.833885438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:41:13.843113   50617 docker.go:318] overlay module found
	I0916 10:41:13.845037   50617 out.go:177] * Using the docker driver based on existing profile
	I0916 10:41:13.846503   50617 start.go:297] selected driver: docker
	I0916 10:41:13.846509   50617 start.go:901] validating driver "docker" against &{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:41:13.846579   50617 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:41:13.846656   50617 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:41:13.898210   50617 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:65 SystemTime:2024-09-16 10:41:13.889174711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:41:13.899057   50617 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:41:13.899087   50617 cni.go:84] Creating CNI manager for ""
	I0916 10:41:13.899142   50617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:41:13.899201   50617 start.go:340] cluster config:
	{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:41:13.901264   50617 out.go:177] * Starting "functional-016570" primary control-plane node in "functional-016570" cluster
	I0916 10:41:13.902699   50617 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:41:13.904006   50617 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:41:13.905186   50617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:41:13.905220   50617 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:41:13.905225   50617 cache.go:56] Caching tarball of preloaded images
	I0916 10:41:13.905284   50617 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:41:13.905307   50617 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:41:13.905314   50617 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:41:13.905424   50617 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/config.json ...
	W0916 10:41:13.923860   50617 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:41:13.923872   50617 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:41:13.923941   50617 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:41:13.923954   50617 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:41:13.923958   50617 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:41:13.923966   50617 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:41:13.923973   50617 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:41:13.974864   50617 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:41:13.974899   50617 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:41:13.974926   50617 start.go:360] acquireMachinesLock for functional-016570: {Name:mkd69bbb7ce10518607df066fca58f5ba9fc9f29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:41:13.974985   50617 start.go:364] duration metric: took 42.729µs to acquireMachinesLock for "functional-016570"
	I0916 10:41:13.974998   50617 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:41:13.975001   50617 fix.go:54] fixHost starting: 
	I0916 10:41:13.975211   50617 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
	I0916 10:41:13.991513   50617 fix.go:112] recreateIfNeeded on functional-016570: state=Running err=<nil>
	W0916 10:41:13.991535   50617 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:41:13.993608   50617 out.go:177] * Updating the running docker "functional-016570" container ...
	I0916 10:41:13.994752   50617 machine.go:93] provisionDockerMachine start ...
	I0916 10:41:13.994821   50617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:41:14.011299   50617 main.go:141] libmachine: Using SSH client type: native
	I0916 10:41:14.011496   50617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:41:14.011502   50617 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:41:14.143228   50617 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016570
	
	I0916 10:41:14.143249   50617 ubuntu.go:169] provisioning hostname "functional-016570"
	I0916 10:41:14.143314   50617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:41:14.161450   50617 main.go:141] libmachine: Using SSH client type: native
	I0916 10:41:14.161632   50617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:41:14.161645   50617 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-016570 && echo "functional-016570" | sudo tee /etc/hostname
	I0916 10:41:14.302523   50617 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-016570
	
	I0916 10:41:14.302587   50617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:41:14.319519   50617 main.go:141] libmachine: Using SSH client type: native
	I0916 10:41:14.319709   50617 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0916 10:41:14.319720   50617 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-016570' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-016570/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-016570' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:41:14.455716   50617 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:41:14.455732   50617 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:41:14.455762   50617 ubuntu.go:177] setting up certificates
	I0916 10:41:14.455770   50617 provision.go:84] configureAuth start
	I0916 10:41:14.455814   50617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-016570
	I0916 10:41:14.473741   50617 provision.go:143] copyHostCerts
	I0916 10:41:14.473800   50617 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:41:14.473807   50617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:41:14.473891   50617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:41:14.473988   50617 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:41:14.473991   50617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:41:14.474013   50617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:41:14.474090   50617 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:41:14.474093   50617 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:41:14.474108   50617 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:41:14.474160   50617 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.functional-016570 san=[127.0.0.1 192.168.49.2 functional-016570 localhost minikube]
	I0916 10:41:14.608587   50617 provision.go:177] copyRemoteCerts
	I0916 10:41:14.608636   50617 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:41:14.608667   50617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:41:14.625005   50617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:41:14.724294   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:41:14.745327   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 10:41:14.768062   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:41:14.790215   50617 provision.go:87] duration metric: took 334.432189ms to configureAuth
	I0916 10:41:14.790236   50617 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:41:14.790514   50617 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:41:14.790527   50617 machine.go:96] duration metric: took 795.76809ms to provisionDockerMachine
	I0916 10:41:14.790545   50617 start.go:293] postStartSetup for "functional-016570" (driver="docker")
	I0916 10:41:14.790553   50617 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:41:14.790599   50617 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:41:14.790635   50617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:41:14.807827   50617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:41:14.900524   50617 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:41:14.903623   50617 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:41:14.903652   50617 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:41:14.903658   50617 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:41:14.903666   50617 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:41:14.903674   50617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:41:14.903717   50617 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:41:14.903847   50617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:41:14.903943   50617 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/test/nested/copy/11189/hosts -> hosts in /etc/test/nested/copy/11189
	I0916 10:41:14.903980   50617 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/11189
	I0916 10:41:14.912176   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:41:14.933649   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/test/nested/copy/11189/hosts --> /etc/test/nested/copy/11189/hosts (40 bytes)
	I0916 10:41:14.956461   50617 start.go:296] duration metric: took 165.903621ms for postStartSetup
	I0916 10:41:14.956529   50617 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:41:14.956559   50617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:41:14.973403   50617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:41:15.064828   50617 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:41:15.069268   50617 fix.go:56] duration metric: took 1.09425679s for fixHost
	I0916 10:41:15.069285   50617 start.go:83] releasing machines lock for "functional-016570", held for 1.094293467s
	I0916 10:41:15.069373   50617 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-016570
	I0916 10:41:15.086650   50617 ssh_runner.go:195] Run: cat /version.json
	I0916 10:41:15.086695   50617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:41:15.086732   50617 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:41:15.086808   50617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:41:15.106311   50617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:41:15.106325   50617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:41:15.271808   50617 ssh_runner.go:195] Run: systemctl --version
	I0916 10:41:15.275906   50617 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:41:15.280036   50617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:41:15.296943   50617 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:41:15.297007   50617 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:41:15.305629   50617 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:41:15.305648   50617 start.go:495] detecting cgroup driver to use...
	I0916 10:41:15.305684   50617 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:41:15.305720   50617 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:41:15.316813   50617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:41:15.327375   50617 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:41:15.327431   50617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:41:15.339572   50617 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:41:15.350387   50617 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:41:15.445895   50617 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:41:15.542115   50617 docker.go:233] disabling docker service ...
	I0916 10:41:15.542176   50617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:41:15.553958   50617 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:41:15.564870   50617 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:41:15.668215   50617 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:41:15.769162   50617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:41:15.779974   50617 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:41:15.794693   50617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:41:15.803986   50617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:41:15.813157   50617 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:41:15.813211   50617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:41:15.822453   50617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:41:15.831494   50617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:41:15.841907   50617 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:41:15.850970   50617 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:41:15.859514   50617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:41:15.868696   50617 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:41:15.878361   50617 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:41:15.887430   50617 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:41:15.895085   50617 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:41:15.902544   50617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:16.002931   50617 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:41:16.256463   50617 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:41:16.256522   50617 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:41:16.259956   50617 start.go:563] Will wait 60s for crictl version
	I0916 10:41:16.259996   50617 ssh_runner.go:195] Run: which crictl
	I0916 10:41:16.262852   50617 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:41:16.294533   50617 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:41:16.294596   50617 ssh_runner.go:195] Run: containerd --version
	I0916 10:41:16.314964   50617 ssh_runner.go:195] Run: containerd --version
	I0916 10:41:16.338810   50617 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:41:16.340225   50617 cli_runner.go:164] Run: docker network inspect functional-016570 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:41:16.356478   50617 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:41:16.361883   50617 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0916 10:41:16.363432   50617 kubeadm.go:883] updating cluster {Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:41:16.363561   50617 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:41:16.363624   50617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:41:16.394985   50617 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:41:16.394995   50617 containerd.go:534] Images already preloaded, skipping extraction
	I0916 10:41:16.395042   50617 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:41:16.425985   50617 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:41:16.425997   50617 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:41:16.426002   50617 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.31.1 containerd true true} ...
	I0916 10:41:16.426099   50617 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-016570 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:41:16.426145   50617 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:41:16.459724   50617 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0916 10:41:16.459761   50617 cni.go:84] Creating CNI manager for ""
	I0916 10:41:16.459771   50617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:41:16.459779   50617 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:41:16.459807   50617 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-016570 NodeName:functional-016570 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:41:16.459990   50617 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "functional-016570"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
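	
	Note that the generated KubeletConfiguration pins cgroupDriver: cgroupfs, which has to agree with the SystemdCgroup = false patch applied to /etc/containerd/config.toml earlier in this log. A minimal sketch of checking that agreement, assuming gopkg.in/yaml.v3 (any YAML decoder would do); the kubeletDoc type is hypothetical:
	
		package main
	
		import (
			"fmt"
			"log"
	
			"gopkg.in/yaml.v3"
		)
	
		// kubeletDoc models only the fields of the KubeletConfiguration
		// above that matter for the cgroup-driver consistency check.
		type kubeletDoc struct {
			Kind         string `yaml:"kind"`
			CgroupDriver string `yaml:"cgroupDriver"`
		}
	
		func main() {
			raw := []byte("kind: KubeletConfiguration\ncgroupDriver: cgroupfs\n")
			var doc kubeletDoc
			if err := yaml.Unmarshal(raw, &doc); err != nil {
				log.Fatal(err)
			}
			// containerd was patched with SystemdCgroup = false, i.e. cgroupfs.
			if doc.Kind == "KubeletConfiguration" && doc.CgroupDriver != "cgroupfs" {
				log.Fatalf("kubelet cgroupDriver %q disagrees with containerd", doc.CgroupDriver)
			}
			fmt.Println("cgroup drivers agree:", doc.CgroupDriver)
		}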
	
	I0916 10:41:16.460050   50617 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:41:16.468823   50617 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:41:16.468896   50617 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:41:16.477097   50617 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0916 10:41:16.493756   50617 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:41:16.509672   50617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2021 bytes)
	I0916 10:41:16.525272   50617 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:41:16.528567   50617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:16.621010   50617 ssh_runner.go:195] Run: sudo systemctl start kubelet
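The three `scp memory` transfers above install the kubelet systemd unit plus its kubeadm drop-in, and the daemon-reload makes systemd re-read them before the kubelet starts. A minimal sketch of the same drop-in pattern follows; the drop-in body here is illustrative only, not the exact 321-byte file minikube writes:

    # Sketch: install a kubelet drop-in and apply it without a reboot.
    # The [Service] content below is a placeholder, not minikube's real file.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --config=/var/lib/kubelet/config.yaml
    EOF
    sudo systemctl daemon-reload && sudo systemctl start kubelet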
	I0916 10:41:16.631703   50617 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570 for IP: 192.168.49.2
	I0916 10:41:16.631714   50617 certs.go:194] generating shared ca certs ...
	I0916 10:41:16.631752   50617 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:16.631890   50617 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:41:16.631923   50617 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:41:16.631928   50617 certs.go:256] generating profile certs ...
	I0916 10:41:16.631995   50617 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.key
	I0916 10:41:16.632033   50617 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/apiserver.key.50ed18d6
	I0916 10:41:16.632090   50617 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/proxy-client.key
	I0916 10:41:16.632187   50617 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:41:16.632210   50617 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:41:16.632215   50617 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:41:16.632236   50617 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:41:16.632254   50617 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:41:16.632273   50617 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:41:16.632305   50617 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:41:16.632837   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:41:16.655714   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:41:16.678341   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:41:16.700135   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:41:16.722754   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:41:16.745260   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:41:16.768255   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:41:16.790258   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:41:16.811851   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:41:16.833433   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:41:16.855307   50617 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:41:16.877464   50617 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:41:16.893789   50617 ssh_runner.go:195] Run: openssl version
	I0916 10:41:16.898913   50617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:41:16.907761   50617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:16.911103   50617 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:16.911146   50617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:41:16.917616   50617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:41:16.925867   50617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:41:16.935144   50617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:41:16.938583   50617 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:41:16.938622   50617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:41:16.945041   50617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 10:41:16.953666   50617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:41:16.963315   50617 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:41:16.966762   50617 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:41:16.966802   50617 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:41:16.973211   50617 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
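The `<8-hex>.0` link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: the trust store is populated c_rehash-style, one symlink per CA file, so TLS clients can look up an issuer by its hash. The same three steps by hand:

    # Sketch: publish one CA into the OpenSSL trust store, as the log does.
    cert=/usr/share/ca-certificates/minikubeCA.pem     # path taken from the log above
    hash=$(openssl x509 -hash -noout -in "$cert")      # prints e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"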
	I0916 10:41:16.981449   50617 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:41:16.984771   50617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:41:16.990871   50617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:41:16.996863   50617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:41:17.002786   50617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:41:17.008928   50617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:41:17.015080   50617 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
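The six `-checkend 86400` runs above are 24-hour expiry probes: openssl exits non-zero if the certificate will expire within the given number of seconds, which is what tells minikube whether a control-plane cert needs regeneration. A standalone sketch of the same probe:

    # Sketch: -checkend 86400 fails if the cert expires within 24h (86400 s).
    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${crt}.crt" \
        || echo "${crt}.crt expires within 24h and would be regenerated"
    done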
	I0916 10:41:17.021163   50617 kubeadm.go:392] StartCluster: {Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:41:17.021234   50617 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:41:17.021286   50617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:41:17.053841   50617 cri.go:89] found id: "fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe"
	I0916 10:41:17.053858   50617 cri.go:89] found id: "03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267"
	I0916 10:41:17.053861   50617 cri.go:89] found id: "bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75"
	I0916 10:41:17.053863   50617 cri.go:89] found id: "80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f"
	I0916 10:41:17.053865   50617 cri.go:89] found id: "0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86"
	I0916 10:41:17.053872   50617 cri.go:89] found id: "0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee"
	I0916 10:41:17.053874   50617 cri.go:89] found id: "c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171"
	I0916 10:41:17.053876   50617 cri.go:89] found id: "b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25"
	I0916 10:41:17.053877   50617 cri.go:89] found id: ""
	I0916 10:41:17.053916   50617 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0916 10:41:17.078235   50617 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86","pid":1514,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86/rootfs","created":"2024-09-16T10:40:33.248379085Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-controller-manager:v1.31.1","io.kubernetes.cri.sandbox-id":"8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"05bfea671b4b973ad25665da415eb7d0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267","pid":2297,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267/rootfs","created":"2024-09-16T10:40:44.571062175Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri.sandbox-id":"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9924f10d-5beb-43b1-9782-44644a015b56"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee","pid":1517,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee/rootfs","created":"2024-09-16T10:40:33.245231601Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-scheduler:v1.31.1","io.kubernetes.cri.sandbox-id":"caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5c4ebe83a62e176d48c858392b494ba5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","pid":1361,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf/rootfs","created":"2024-09-16T10:40:33.026445222Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-functional-016570_9ff8ce834d4b88cb05c2ce6dadcabd95","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9ff8ce834d4b88cb05c2ce6dadcabd95"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","pid":2382,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e/rootfs","created":"2024-09-16T10:40:54.834227465Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-7c65d6cfc9-59qm7_370e7aff-70ab-43f7-9770-098c21fd013d","io.kubernetes.cri.sandbox-memory":"178257920","io.kubernetes.cri.sandbox-name":"coredns-7c65d6cfc9-59qm7","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"370e7aff-70ab-43f7-9770-098c21fd013d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","pid":1360,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961/rootfs","created":"2024-09-16T10:40:33.024281022Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-functional-016570_5333b7f22b4ca6fa3369f64c875d053e","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5333b7f22b4ca6fa3369f64c875d053e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f","pid":2009,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f/rootfs","created":"2024-09-16T10:40:43.922824123Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-proxy:v1.31.1","io.kubernetes.cri.sandbox-id":"f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","io.kubernetes.cri.sandbox-name":"kube-proxy-w8qkq","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b4a00283-1d69-49c4-8c60-264ef3fd7aca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","pid":1383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f/rootfs","created":"2024-09-16T10:40:33.032806907Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-functional-016570_05bfea671b4b973ad25665da415eb7d0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"05bfea671b4b973ad25665da415eb7d0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25","pid":1447,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25/rootfs","created":"2024-09-16T10:40:33.17291957Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.5.15-0","io.kubernetes.cri.sandbox-id":"2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf","io.kubernetes.cri.sandbox-name":"etcd-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9ff8ce834d4b88cb05c2ce6dadcabd95"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","pid":2251,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf/rootfs","created":"2024-09-16T10:40:44.503322931Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_9924f10d-5beb-43b1-9782-44644a015b56","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"storage-provisioner","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"9924f10d-5beb-43b1-9782-44644a015b56"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75","pid":2058,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75/rootfs","created":"2024-09-16T10:40:44.122183432Z","annotations":{"io.kubernetes.cri.container-name":"kindnet-cni","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"docker.io/kindest/kindnetd:v20240813-c6f155d6","io.kubernetes.cri.sandbox-id":"c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","io.kubernetes.cri.sandbox-name":"kindnet-5qjpd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8ee89403-0943-480c-9f48-4b25a0198f6d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171","pid":1522,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171/rootfs","created":"2024-09-16T10:40:33.25104973Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/kube-apiserver:v1.31.1","io.kubernetes.cri.sandbox-id":"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961","io.kubernetes.cri.sandbox-name":"kube-apiserver-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5333b7f22b4ca6fa3369f64c875d053e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","pid":1934,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060/rootfs","created":"2024-09-16T10:40:43.727260795Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"10000","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kindnet-5qjpd_8ee89403-0943-480c-9f48-4b25a0198f6d","io.kubernetes.cri.sandbox-memory":"52428800","io.kubernetes.cri.sandbox-name":"kindnet-5qjpd","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"8ee89403-0943-480c-9f48-4b25a0198f6d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","pid":1381,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3/rootfs","created":"2024-09-16T10:40:33.032509578Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-functional-016570_5c4ebe83a62e176d48c858392b494ba5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-functional-016570","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"5c4ebe83a62e176d48c858392b494ba5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","pid":1927,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5/rootfs","created":"2024-09-16T10:40:43.632882566Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"2","io.kubernetes.cri.sandbox-id":"f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-w8qkq_b4a00283-1d69-49c4-8c60-264ef3fd7aca","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-proxy-w8qkq","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b4a00283-1d69-49c4-8c60-264ef3fd7aca"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe","pid":2413,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe/rootfs","created":"2024-09-16T10:40:54.906002595Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/coredns/coredns:v1.11.3","io.kubernetes.cri.sandbox-id":"3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e","io.kubernetes.cri.sandbox-name":"coredns-7c65d6cfc9-59qm7","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"370e7aff-70ab-43f7-9770-098c21fd013d"},"owner":"root"}]
	I0916 10:41:17.078459   50617 cri.go:126] list returned 16 containers
	I0916 10:41:17.078466   50617 cri.go:129] container: {ID:0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86 Status:running}
	I0916 10:41:17.078481   50617 cri.go:135] skipping {0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86 running}: state = "running", want "paused"
	I0916 10:41:17.078489   50617 cri.go:129] container: {ID:03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267 Status:running}
	I0916 10:41:17.078493   50617 cri.go:135] skipping {03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267 running}: state = "running", want "paused"
	I0916 10:41:17.078496   50617 cri.go:129] container: {ID:0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee Status:running}
	I0916 10:41:17.078499   50617 cri.go:135] skipping {0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee running}: state = "running", want "paused"
	I0916 10:41:17.078502   50617 cri.go:129] container: {ID:2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf Status:running}
	I0916 10:41:17.078507   50617 cri.go:131] skipping 2cdebcb8c7807d8f74249d94f4671d3ed0afef05d2c61c91b14d092a2cca0dbf - not in ps
	I0916 10:41:17.078510   50617 cri.go:129] container: {ID:3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e Status:running}
	I0916 10:41:17.078514   50617 cri.go:131] skipping 3d9a434f8b6e5a2769089045bbe03e5edd8fcdb55e1ebbcbb3906a6f7820100e - not in ps
	I0916 10:41:17.078517   50617 cri.go:129] container: {ID:5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961 Status:running}
	I0916 10:41:17.078520   50617 cri.go:131] skipping 5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961 - not in ps
	I0916 10:41:17.078522   50617 cri.go:129] container: {ID:80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f Status:running}
	I0916 10:41:17.078526   50617 cri.go:135] skipping {80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f running}: state = "running", want "paused"
	I0916 10:41:17.078529   50617 cri.go:129] container: {ID:8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f Status:running}
	I0916 10:41:17.078532   50617 cri.go:131] skipping 8b5d37485105016ad047d2bd7badc2c1ccabbe6c13a29075aec1d82c11b6924f - not in ps
	I0916 10:41:17.078535   50617 cri.go:129] container: {ID:b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25 Status:running}
	I0916 10:41:17.078538   50617 cri.go:135] skipping {b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25 running}: state = "running", want "paused"
	I0916 10:41:17.078541   50617 cri.go:129] container: {ID:b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf Status:running}
	I0916 10:41:17.078546   50617 cri.go:131] skipping b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf - not in ps
	I0916 10:41:17.078548   50617 cri.go:129] container: {ID:bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75 Status:running}
	I0916 10:41:17.078551   50617 cri.go:135] skipping {bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75 running}: state = "running", want "paused"
	I0916 10:41:17.078553   50617 cri.go:129] container: {ID:c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171 Status:running}
	I0916 10:41:17.078556   50617 cri.go:135] skipping {c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171 running}: state = "running", want "paused"
	I0916 10:41:17.078559   50617 cri.go:129] container: {ID:c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060 Status:running}
	I0916 10:41:17.078562   50617 cri.go:131] skipping c7f56f796b013d6e4a9b5ce02ee0358acad8efaf96059c2970b319e87e53c060 - not in ps
	I0916 10:41:17.078566   50617 cri.go:129] container: {ID:caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3 Status:running}
	I0916 10:41:17.078570   50617 cri.go:131] skipping caa2007696d1b222357e825d8e2d17593495ae89592ab8a4d4d3efc1c5faa1d3 - not in ps
	I0916 10:41:17.078573   50617 cri.go:129] container: {ID:f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5 Status:running}
	I0916 10:41:17.078576   50617 cri.go:131] skipping f4ed79f8dffebd8037cefce09832cf889e13430ea89968777ef8fbbf95f977f5 - not in ps
	I0916 10:41:17.078579   50617 cri.go:129] container: {ID:fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe Status:running}
	I0916 10:41:17.078587   50617 cri.go:135] skipping {fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe running}: state = "running", want "paused"
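Every container is skipped here because the listing at cri.go:54 asked for {State:paused}: crictl supplies the candidate IDs, `runc list -f json` supplies their states, and only paused tasks would be acted on; sandbox IDs absent from the crictl output are skipped as "not in ps". The same filter can be reproduced with jq (assuming jq is available on the node, which minikube itself does not require):

    # Sketch: list only paused k8s.io tasks, mirroring the cri.go state filter.
    sudo runc --root /run/containerd/runc/k8s.io list -f json \
      | jq -r '.[] | select(.status == "paused") | .id'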
	I0916 10:41:17.078633   50617 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:41:17.087018   50617 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:41:17.087026   50617 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:41:17.087072   50617 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:41:17.094561   50617 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:41:17.095042   50617 kubeconfig.go:125] found "functional-016570" server: "https://192.168.49.2:8441"
	I0916 10:41:17.096218   50617 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:41:17.104540   50617 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2024-09-16 10:40:29.430483202 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2024-09-16 10:41:16.522635221 +0000
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
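Drift detection is nothing more exotic than the unified diff above: minikube compares the kubeadm.yaml already on disk against the freshly rendered .new file, and any non-empty diff (here, the changed enable-admission-plugins value) forces a reconfiguration instead of a plain restart. Reproduced by hand:

    # Sketch: the drift probe; diff exits 0 only when the two configs match.
    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "no drift: restart the existing control plane as-is"
    else
      echo "drift: stop kube-system containers and replay the init phases"
    fi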
	I0916 10:41:17.104552   50617 kubeadm.go:1160] stopping kube-system containers ...
	I0916 10:41:17.104608   50617 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0916 10:41:17.104660   50617 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:41:17.141386   50617 cri.go:89] found id: "fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe"
	I0916 10:41:17.141397   50617 cri.go:89] found id: "03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267"
	I0916 10:41:17.141400   50617 cri.go:89] found id: "bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75"
	I0916 10:41:17.141403   50617 cri.go:89] found id: "80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f"
	I0916 10:41:17.141405   50617 cri.go:89] found id: "0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86"
	I0916 10:41:17.141407   50617 cri.go:89] found id: "0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee"
	I0916 10:41:17.141409   50617 cri.go:89] found id: "c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171"
	I0916 10:41:17.141410   50617 cri.go:89] found id: "b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25"
	I0916 10:41:17.141412   50617 cri.go:89] found id: ""
	I0916 10:41:17.141416   50617 cri.go:252] Stopping containers: [fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe 03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267 bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75 80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f 0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86 0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171 b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25]
	I0916 10:41:17.141459   50617 ssh_runner.go:195] Run: which crictl
	I0916 10:41:17.144688   50617 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe 03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267 bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75 80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f 0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86 0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171 b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25
	I0916 10:41:32.600545   50617 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe 03ddfa3f2cafc3bb86ca39c37b44d4f722054dbe5b68412154aaff07a0db9267 bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75 80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f 0062114d9f75f428e3b03efa430fa13cefb82159b06c16a18002082c26a43f86 0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171 b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25: (15.455802152s)
	I0916 10:41:32.600599   50617 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0916 10:41:32.711262   50617 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:41:32.719771   50617 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Sep 16 10:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Sep 16 10:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Sep 16 10:40 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Sep 16 10:40 /etc/kubernetes/scheduler.conf
	
	I0916 10:41:32.719825   50617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0916 10:41:32.727900   50617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0916 10:41:32.735704   50617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0916 10:41:32.743064   50617 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:41:32.743102   50617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:41:32.750450   50617 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0916 10:41:32.758187   50617 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:41:32.758227   50617 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:41:32.765796   50617 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:41:32.773861   50617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:41:32.814515   50617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:41:33.718230   50617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:41:33.875660   50617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:41:33.923184   50617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
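Note that the restart path replays individual `kubeadm init` phases rather than a full init, regenerating only the artifacts the config change invalidated, in dependency order: certs, then kubeconfigs, then kubelet bootstrap, then the control-plane static pods and etcd. The same sequence as a loop ($phase is intentionally left unquoted so "certs all" splits into two arguments):

    # Sketch: the selective re-init sequence shown in the log.
    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done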
	I0916 10:41:34.033330   50617 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:41:34.033408   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:34.533898   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:35.034448   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:35.533621   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:36.033457   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:36.534359   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:37.033548   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:37.534388   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:38.033716   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:38.533752   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:39.033745   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:39.533937   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:40.033977   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:40.534397   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:41.034532   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:41:41.046200   50617 api_server.go:72] duration metric: took 7.012858066s to wait for apiserver process to appear ...
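The burst of pgrep probes above is a roughly 500 ms poll: `-f` matches against the full command line, `-x` requires the pattern to match that line exactly, and `-n` returns only the newest match, so the wait ends as soon as a kube-apiserver whose arguments mention minikube is alive. As a standalone loop:

    # Sketch: poll every 0.5 s until the apiserver process exists.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done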
	I0916 10:41:41.046219   50617 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:41:41.046243   50617 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:41:43.421778   50617 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 10:41:43.421818   50617 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[-]autoregister-completion failed: reason withheld
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 10:41:43.421834   50617 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:41:43.426678   50617 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 10:41:43.426705   50617 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 10:41:43.546903   50617 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:41:43.551225   50617 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 10:41:43.551240   50617 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 10:41:44.046360   50617 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:41:44.050109   50617 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0916 10:41:44.050125   50617 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0916 10:41:44.546644   50617 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:41:44.550300   50617 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0916 10:41:44.557220   50617 api_server.go:141] control plane version: v1.31.1
	I0916 10:41:44.557237   50617 api_server.go:131] duration metric: took 3.511013224s to wait for apiserver health ...
	I0916 10:41:44.557244   50617 cni.go:84] Creating CNI manager for ""
	I0916 10:41:44.557248   50617 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:41:44.559199   50617 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:41:44.560570   50617 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:41:44.564309   50617 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:41:44.564319   50617 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:41:44.580691   50617 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:41:44.871633   50617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:41:44.879038   50617 system_pods.go:59] 8 kube-system pods found
	I0916 10:41:44.879058   50617 system_pods.go:61] "coredns-7c65d6cfc9-59qm7" [370e7aff-70ab-43f7-9770-098c21fd013d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:41:44.879064   50617 system_pods.go:61] "etcd-functional-016570" [54625714-0265-4ecf-a4d3-b4ff173d81e0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 10:41:44.879068   50617 system_pods.go:61] "kindnet-5qjpd" [8ee89403-0943-480c-9f48-4b25a0198f6d] Running
	I0916 10:41:44.879073   50617 system_pods.go:61] "kube-apiserver-functional-016570" [c3ecd4da-cdaf-40ff-ba59-093b10687650] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 10:41:44.879078   50617 system_pods.go:61] "kube-controller-manager-functional-016570" [ab12e143-7f68-4f92-b30d-82299e1bf5a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:41:44.879081   50617 system_pods.go:61] "kube-proxy-w8qkq" [b4a00283-1d69-49c4-8c60-264ef3fd7aca] Running
	I0916 10:41:44.879088   50617 system_pods.go:61] "kube-scheduler-functional-016570" [640affb4-aae3-401b-b06b-fd9e07a9b506] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0916 10:41:44.879091   50617 system_pods.go:61] "storage-provisioner" [9924f10d-5beb-43b1-9782-44644a015b56] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:41:44.879096   50617 system_pods.go:74] duration metric: took 7.451347ms to wait for pod list to return data ...
	I0916 10:41:44.879101   50617 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:41:44.882633   50617 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:41:44.882648   50617 node_conditions.go:123] node cpu capacity is 8
	I0916 10:41:44.882656   50617 node_conditions.go:105] duration metric: took 3.552128ms to run NodePressure ...
	I0916 10:41:44.882670   50617 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0916 10:41:45.129698   50617 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0916 10:41:45.133649   50617 kubeadm.go:739] kubelet initialised
	I0916 10:41:45.133659   50617 kubeadm.go:740] duration metric: took 3.948645ms waiting for restarted kubelet to initialise ...
	I0916 10:41:45.133665   50617 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:41:45.138361   50617 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-59qm7" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:45.143026   50617 pod_ready.go:93] pod "coredns-7c65d6cfc9-59qm7" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:45.143037   50617 pod_ready.go:82] duration metric: took 4.661009ms for pod "coredns-7c65d6cfc9-59qm7" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:45.143044   50617 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:47.148954   50617 pod_ready.go:103] pod "etcd-functional-016570" in "kube-system" namespace has status "Ready":"False"
	I0916 10:41:49.649215   50617 pod_ready.go:103] pod "etcd-functional-016570" in "kube-system" namespace has status "Ready":"False"
	I0916 10:41:50.648978   50617 pod_ready.go:93] pod "etcd-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:50.648988   50617 pod_ready.go:82] duration metric: took 5.505939203s for pod "etcd-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:50.648999   50617 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:52.654735   50617 pod_ready.go:103] pod "kube-apiserver-functional-016570" in "kube-system" namespace has status "Ready":"False"
	I0916 10:41:53.155419   50617 pod_ready.go:93] pod "kube-apiserver-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:53.155430   50617 pod_ready.go:82] duration metric: took 2.506426124s for pod "kube-apiserver-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:53.155438   50617 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:55.161971   50617 pod_ready.go:103] pod "kube-controller-manager-functional-016570" in "kube-system" namespace has status "Ready":"False"
	I0916 10:41:57.663884   50617 pod_ready.go:103] pod "kube-controller-manager-functional-016570" in "kube-system" namespace has status "Ready":"False"
	I0916 10:41:59.661282   50617 pod_ready.go:93] pod "kube-controller-manager-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:59.661293   50617 pod_ready.go:82] duration metric: took 6.505850398s for pod "kube-controller-manager-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:59.661302   50617 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-w8qkq" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:59.665971   50617 pod_ready.go:93] pod "kube-proxy-w8qkq" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:59.665982   50617 pod_ready.go:82] duration metric: took 4.675665ms for pod "kube-proxy-w8qkq" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:59.665990   50617 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:59.670315   50617 pod_ready.go:93] pod "kube-scheduler-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:59.670325   50617 pod_ready.go:82] duration metric: took 4.330634ms for pod "kube-scheduler-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:59.670338   50617 pod_ready.go:39] duration metric: took 14.536661575s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:41:59.670352   50617 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:41:59.677778   50617 ops.go:34] apiserver oom_adj: -16
	I0916 10:41:59.677792   50617 kubeadm.go:597] duration metric: took 42.590761059s to restartPrimaryControlPlane
	I0916 10:41:59.677809   50617 kubeadm.go:394] duration metric: took 42.65664242s to StartCluster
	I0916 10:41:59.677824   50617 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:59.677894   50617 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:41:59.678711   50617 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:41:59.678975   50617 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:41:59.679036   50617 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:41:59.679109   50617 addons.go:69] Setting storage-provisioner=true in profile "functional-016570"
	I0916 10:41:59.679122   50617 addons.go:234] Setting addon storage-provisioner=true in "functional-016570"
	W0916 10:41:59.679127   50617 addons.go:243] addon storage-provisioner should already be in state true
	I0916 10:41:59.679154   50617 host.go:66] Checking if "functional-016570" exists ...
	I0916 10:41:59.679150   50617 addons.go:69] Setting default-storageclass=true in profile "functional-016570"
	I0916 10:41:59.679179   50617 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-016570"
	I0916 10:41:59.679204   50617 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:41:59.679482   50617 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
	I0916 10:41:59.679522   50617 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
	I0916 10:41:59.680930   50617 out.go:177] * Verifying Kubernetes components...
	I0916 10:41:59.682455   50617 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:41:59.698056   50617 addons.go:234] Setting addon default-storageclass=true in "functional-016570"
	W0916 10:41:59.698066   50617 addons.go:243] addon default-storageclass should already be in state true
	I0916 10:41:59.698087   50617 host.go:66] Checking if "functional-016570" exists ...
	I0916 10:41:59.698457   50617 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
	I0916 10:41:59.699432   50617 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:41:59.700878   50617 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:41:59.700887   50617 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:41:59.700927   50617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:41:59.721364   50617 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:41:59.721346   50617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:41:59.721378   50617 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:41:59.721432   50617 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
	I0916 10:41:59.744152   50617 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
	I0916 10:41:59.788273   50617 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:41:59.799563   50617 node_ready.go:35] waiting up to 6m0s for node "functional-016570" to be "Ready" ...
	I0916 10:41:59.802088   50617 node_ready.go:49] node "functional-016570" has status "Ready":"True"
	I0916 10:41:59.802098   50617 node_ready.go:38] duration metric: took 2.514537ms for node "functional-016570" to be "Ready" ...
	I0916 10:41:59.802104   50617 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:41:59.806652   50617 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-59qm7" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:59.811901   50617 pod_ready.go:93] pod "coredns-7c65d6cfc9-59qm7" in "kube-system" namespace has status "Ready":"True"
	I0916 10:41:59.811911   50617 pod_ready.go:82] duration metric: took 5.246265ms for pod "coredns-7c65d6cfc9-59qm7" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:59.811919   50617 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:41:59.832720   50617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:41:59.849023   50617 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:42:00.059105   50617 pod_ready.go:93] pod "etcd-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:42:00.059118   50617 pod_ready.go:82] duration metric: took 247.194332ms for pod "etcd-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:00.059129   50617 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:00.336589   50617 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 10:42:00.337833   50617 addons.go:510] duration metric: took 658.806628ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 10:42:00.459156   50617 pod_ready.go:93] pod "kube-apiserver-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:42:00.459166   50617 pod_ready.go:82] duration metric: took 400.031663ms for pod "kube-apiserver-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:00.459174   50617 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:00.859463   50617 pod_ready.go:93] pod "kube-controller-manager-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:42:00.859475   50617 pod_ready.go:82] duration metric: took 400.29533ms for pod "kube-controller-manager-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:00.859495   50617 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w8qkq" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:01.258747   50617 pod_ready.go:93] pod "kube-proxy-w8qkq" in "kube-system" namespace has status "Ready":"True"
	I0916 10:42:01.258758   50617 pod_ready.go:82] duration metric: took 399.25815ms for pod "kube-proxy-w8qkq" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:01.258766   50617 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:01.659729   50617 pod_ready.go:93] pod "kube-scheduler-functional-016570" in "kube-system" namespace has status "Ready":"True"
	I0916 10:42:01.659763   50617 pod_ready.go:82] duration metric: took 400.991723ms for pod "kube-scheduler-functional-016570" in "kube-system" namespace to be "Ready" ...
	I0916 10:42:01.659773   50617 pod_ready.go:39] duration metric: took 1.857660922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:42:01.659786   50617 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:42:01.659833   50617 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:42:01.670643   50617 api_server.go:72] duration metric: took 1.991641119s to wait for apiserver process to appear ...
	I0916 10:42:01.670660   50617 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:42:01.670679   50617 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I0916 10:42:01.674408   50617 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I0916 10:42:01.675347   50617 api_server.go:141] control plane version: v1.31.1
	I0916 10:42:01.675360   50617 api_server.go:131] duration metric: took 4.696433ms to wait for apiserver health ...
	I0916 10:42:01.675367   50617 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:42:01.862052   50617 system_pods.go:59] 8 kube-system pods found
	I0916 10:42:01.862067   50617 system_pods.go:61] "coredns-7c65d6cfc9-59qm7" [370e7aff-70ab-43f7-9770-098c21fd013d] Running
	I0916 10:42:01.862071   50617 system_pods.go:61] "etcd-functional-016570" [54625714-0265-4ecf-a4d3-b4ff173d81e0] Running
	I0916 10:42:01.862073   50617 system_pods.go:61] "kindnet-5qjpd" [8ee89403-0943-480c-9f48-4b25a0198f6d] Running
	I0916 10:42:01.862076   50617 system_pods.go:61] "kube-apiserver-functional-016570" [c3ecd4da-cdaf-40ff-ba59-093b10687650] Running
	I0916 10:42:01.862078   50617 system_pods.go:61] "kube-controller-manager-functional-016570" [ab12e143-7f68-4f92-b30d-82299e1bf5a0] Running
	I0916 10:42:01.862080   50617 system_pods.go:61] "kube-proxy-w8qkq" [b4a00283-1d69-49c4-8c60-264ef3fd7aca] Running
	I0916 10:42:01.862082   50617 system_pods.go:61] "kube-scheduler-functional-016570" [640affb4-aae3-401b-b06b-fd9e07a9b506] Running
	I0916 10:42:01.862084   50617 system_pods.go:61] "storage-provisioner" [9924f10d-5beb-43b1-9782-44644a015b56] Running
	I0916 10:42:01.862090   50617 system_pods.go:74] duration metric: took 186.718566ms to wait for pod list to return data ...
	I0916 10:42:01.862096   50617 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:42:02.059483   50617 default_sa.go:45] found service account: "default"
	I0916 10:42:02.059498   50617 default_sa.go:55] duration metric: took 197.397136ms for default service account to be created ...
	I0916 10:42:02.059505   50617 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:42:02.261707   50617 system_pods.go:86] 8 kube-system pods found
	I0916 10:42:02.261724   50617 system_pods.go:89] "coredns-7c65d6cfc9-59qm7" [370e7aff-70ab-43f7-9770-098c21fd013d] Running
	I0916 10:42:02.261728   50617 system_pods.go:89] "etcd-functional-016570" [54625714-0265-4ecf-a4d3-b4ff173d81e0] Running
	I0916 10:42:02.261731   50617 system_pods.go:89] "kindnet-5qjpd" [8ee89403-0943-480c-9f48-4b25a0198f6d] Running
	I0916 10:42:02.261733   50617 system_pods.go:89] "kube-apiserver-functional-016570" [c3ecd4da-cdaf-40ff-ba59-093b10687650] Running
	I0916 10:42:02.261737   50617 system_pods.go:89] "kube-controller-manager-functional-016570" [ab12e143-7f68-4f92-b30d-82299e1bf5a0] Running
	I0916 10:42:02.261739   50617 system_pods.go:89] "kube-proxy-w8qkq" [b4a00283-1d69-49c4-8c60-264ef3fd7aca] Running
	I0916 10:42:02.261741   50617 system_pods.go:89] "kube-scheduler-functional-016570" [640affb4-aae3-401b-b06b-fd9e07a9b506] Running
	I0916 10:42:02.261743   50617 system_pods.go:89] "storage-provisioner" [9924f10d-5beb-43b1-9782-44644a015b56] Running
	I0916 10:42:02.261750   50617 system_pods.go:126] duration metric: took 202.240144ms to wait for k8s-apps to be running ...
	I0916 10:42:02.261754   50617 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:42:02.261807   50617 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:42:02.272678   50617 system_svc.go:56] duration metric: took 10.910581ms WaitForService to wait for kubelet
	I0916 10:42:02.272698   50617 kubeadm.go:582] duration metric: took 2.593699118s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:42:02.272714   50617 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:42:02.460828   50617 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:42:02.460843   50617 node_conditions.go:123] node cpu capacity is 8
	I0916 10:42:02.460853   50617 node_conditions.go:105] duration metric: took 188.134475ms to run NodePressure ...
	I0916 10:42:02.460866   50617 start.go:241] waiting for startup goroutines ...
	I0916 10:42:02.460875   50617 start.go:246] waiting for cluster config update ...
	I0916 10:42:02.460887   50617 start.go:255] writing updated cluster config ...
	I0916 10:42:02.461230   50617 ssh_runner.go:195] Run: rm -f paused
	I0916 10:42:02.467615   50617 out.go:177] * Done! kubectl is now configured to use "functional-016570" cluster and "default" namespace by default
	E0916 10:42:02.469284   50617 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
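
Editor's note: the api_server.go entries above simply poll the apiserver's /healthz endpoint until the 500 responses (whose bodies list each failing poststarthook) turn into a 200 "ok". A minimal Go sketch of that polling pattern follows; the endpoint URL is taken from the log, but skipping TLS verification is an assumption for illustration only, since minikube's real checker authenticates with the cluster's client certificates.

	// healthzpoll.go: poll a kube-apiserver /healthz endpoint until it returns 200,
	// mirroring the "Checking apiserver healthz ..." loop recorded in the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log above; adjust for your cluster.
		const url = "https://192.168.49.2:8441/healthz"

		client := &http.Client{
			Timeout: 2 * time.Second,
			// Assumption: certificate verification is skipped here for brevity.
			// minikube instead presents the cluster's client certificates.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		deadline := time.Now().Add(1 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("healthz not reachable yet:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz ok:", string(body))
					return
				}
				// A 500 body enumerates each failing poststarthook, as seen above.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for healthz")
	}
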
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4d490c9b7ae90       6e38f40d628db       8 seconds ago        Running             storage-provisioner       2                   b81ffde02718d       storage-provisioner
	d2500e97c949b       6bab7719df100       22 seconds ago       Running             kube-apiserver            0                   9d885083d4265       kube-apiserver-functional-016570
	c8262cd23469c       175ffd71cce3d       22 seconds ago       Running             kube-controller-manager   2                   8b5d374851050       kube-controller-manager-functional-016570
	861dc747735da       2e96e5913fc06       22 seconds ago       Running             etcd                      1                   2cdebcb8c7807       etcd-functional-016570
	f40f6265fc1c6       12968670680f4       40 seconds ago       Running             kindnet-cni               1                   c7f56f796b013       kindnet-5qjpd
	490a48762f629       6e38f40d628db       40 seconds ago       Exited              storage-provisioner       1                   b81ffde02718d       storage-provisioner
	2810e4a546750       60c005f310ff3       40 seconds ago       Running             kube-proxy                1                   f4ed79f8dffeb       kube-proxy-w8qkq
	485f2c5cef235       175ffd71cce3d       40 seconds ago       Exited              kube-controller-manager   1                   8b5d374851050       kube-controller-manager-functional-016570
	9ff9913af2feb       9aa1fad941575       40 seconds ago       Running             kube-scheduler            1                   caa2007696d1b       kube-scheduler-functional-016570
	b8bd1849da6c4       c69fa2e9cbf5f       40 seconds ago       Running             coredns                   1                   3d9a434f8b6e5       coredns-7c65d6cfc9-59qm7
	fd0c81e7a39a2       c69fa2e9cbf5f       About a minute ago   Exited              coredns                   0                   3d9a434f8b6e5       coredns-7c65d6cfc9-59qm7
	bf96dac81b725       12968670680f4       About a minute ago   Exited              kindnet-cni               0                   c7f56f796b013       kindnet-5qjpd
	80095e847084f       60c005f310ff3       About a minute ago   Exited              kube-proxy                0                   f4ed79f8dffeb       kube-proxy-w8qkq
	0906c5e415b9c       9aa1fad941575       About a minute ago   Exited              kube-scheduler            0                   caa2007696d1b       kube-scheduler-functional-016570
	b4905826c508e       2e96e5913fc06       About a minute ago   Exited              etcd                      0                   2cdebcb8c7807       etcd-functional-016570
	
	
	==> containerd <==
	Sep 16 10:41:40 functional-016570 containerd[4401]: time="2024-09-16T10:41:40.589237403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:41:40 functional-016570 containerd[4401]: time="2024-09-16T10:41:40.590350392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:41:40 functional-016570 containerd[4401]: time="2024-09-16T10:41:40.590388925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:41:40 functional-016570 containerd[4401]: time="2024-09-16T10:41:40.590492535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:41:40 functional-016570 containerd[4401]: time="2024-09-16T10:41:40.655365292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-functional-016570,Uid:36dc10fb67abc1f7e7d56f544fb5fbe5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d885083d426573e5474bee7039886054440a56dd67dbf36bef60779b2322297\""
	Sep 16 10:41:40 functional-016570 containerd[4401]: time="2024-09-16T10:41:40.655533825Z" level=info msg="StartContainer for \"c8262cd23469ca086f00137b1fc38c96429b63d514a16f9a905da144ecd2b73c\" returns successfully"
	Sep 16 10:41:40 functional-016570 containerd[4401]: time="2024-09-16T10:41:40.655669756Z" level=info msg="StartContainer for \"861dc747735da2748e3f4b24b824a36d0a52f89bbf91f6d93373e8e94ec47110\" returns successfully"
	Sep 16 10:41:40 functional-016570 containerd[4401]: time="2024-09-16T10:41:40.658053659Z" level=info msg="CreateContainer within sandbox \"9d885083d426573e5474bee7039886054440a56dd67dbf36bef60779b2322297\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
	Sep 16 10:41:40 functional-016570 containerd[4401]: time="2024-09-16T10:41:40.675376405Z" level=info msg="CreateContainer within sandbox \"9d885083d426573e5474bee7039886054440a56dd67dbf36bef60779b2322297\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d2500e97c949b2a2e8330e357558411d2893aaafdb03d1ef0422a5a6fb1cb12f\""
	Sep 16 10:41:40 functional-016570 containerd[4401]: time="2024-09-16T10:41:40.676220406Z" level=info msg="StartContainer for \"d2500e97c949b2a2e8330e357558411d2893aaafdb03d1ef0422a5a6fb1cb12f\""
	Sep 16 10:41:40 functional-016570 containerd[4401]: time="2024-09-16T10:41:40.824060570Z" level=info msg="StartContainer for \"d2500e97c949b2a2e8330e357558411d2893aaafdb03d1ef0422a5a6fb1cb12f\" returns successfully"
	Sep 16 10:41:43 functional-016570 containerd[4401]: time="2024-09-16T10:41:43.532936358Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Sep 16 10:41:43 functional-016570 containerd[4401]: time="2024-09-16T10:41:43.956147430Z" level=info msg="StopPodSandbox for \"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961\""
	Sep 16 10:41:43 functional-016570 containerd[4401]: time="2024-09-16T10:41:43.956202070Z" level=info msg="Container to stop \"c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Sep 16 10:41:43 functional-016570 containerd[4401]: time="2024-09-16T10:41:43.986199611Z" level=info msg="shim disconnected" id=5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961 namespace=k8s.io
	Sep 16 10:41:43 functional-016570 containerd[4401]: time="2024-09-16T10:41:43.986437102Z" level=warning msg="cleaning up after shim disconnected" id=5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961 namespace=k8s.io
	Sep 16 10:41:43 functional-016570 containerd[4401]: time="2024-09-16T10:41:43.986464976Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:41:44 functional-016570 containerd[4401]: time="2024-09-16T10:41:44.000890426Z" level=info msg="TearDown network for sandbox \"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961\" successfully"
	Sep 16 10:41:44 functional-016570 containerd[4401]: time="2024-09-16T10:41:44.000939894Z" level=info msg="StopPodSandbox for \"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961\" returns successfully"
	Sep 16 10:41:44 functional-016570 containerd[4401]: time="2024-09-16T10:41:44.057763587Z" level=info msg="RemoveContainer for \"c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171\""
	Sep 16 10:41:44 functional-016570 containerd[4401]: time="2024-09-16T10:41:44.064042865Z" level=info msg="RemoveContainer for \"c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171\" returns successfully"
	Sep 16 10:41:54 functional-016570 containerd[4401]: time="2024-09-16T10:41:54.955284514Z" level=info msg="CreateContainer within sandbox \"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
	Sep 16 10:41:54 functional-016570 containerd[4401]: time="2024-09-16T10:41:54.968501482Z" level=info msg="CreateContainer within sandbox \"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"4d490c9b7ae90d00a29d06ff834c0ce770ddfef68473b74c781044e7a283344c\""
	Sep 16 10:41:54 functional-016570 containerd[4401]: time="2024-09-16T10:41:54.969110908Z" level=info msg="StartContainer for \"4d490c9b7ae90d00a29d06ff834c0ce770ddfef68473b74c781044e7a283344c\""
	Sep 16 10:41:55 functional-016570 containerd[4401]: time="2024-09-16T10:41:55.011537068Z" level=info msg="StartContainer for \"4d490c9b7ae90d00a29d06ff834c0ce770ddfef68473b74c781044e7a283344c\" returns successfully"
	
	
	==> coredns [b8bd1849da6c464b7fbc64f004ea8f6e93596b309cb23e3f75f0493a6c22ebd5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59975 - 12600 "HINFO IN 4686966597786162674.2744546229077384558. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011873953s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44994 - 49719 "HINFO IN 5811560446017322614.7472127089541594346. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008166038s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-016570
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-016570
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-016570
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_40_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:40:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-016570
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:41:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-016570
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e0bbc72eb56431496705823b26dcd4d
	  System UUID:                9e1cbc84-d044-4524-b143-ca93a6dc5aa0
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-59qm7                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     80s
	  kube-system                 etcd-functional-016570                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         85s
	  kube-system                 kindnet-5qjpd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      80s
	  kube-system                 kube-apiserver-functional-016570             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-controller-manager-functional-016570    200m (2%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-proxy-w8qkq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-functional-016570             100m (1%)     0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 79s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 86s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 86s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  85s                kubelet          Node functional-016570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    85s                kubelet          Node functional-016570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     85s                kubelet          Node functional-016570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           81s                node-controller  Node functional-016570 event: Registered Node functional-016570 in Controller
	  Normal   Starting                 30s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 30s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  23s (x8 over 26s)  kubelet          Node functional-016570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    23s (x7 over 26s)  kubelet          Node functional-016570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     23s (x7 over 26s)  kubelet          Node functional-016570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           17s                node-controller  Node functional-016570 event: Registered Node functional-016570 in Controller
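
Editor's note: the node_conditions.go entries earlier in the log verify the same data that this "describe nodes" output shows: node capacity plus the MemoryPressure/DiskPressure/PIDPressure/Ready conditions. Below is a minimal client-go sketch of an equivalent check, under the assumption that a kubeconfig for this cluster sits at the default path (minikube writes one there on start).

	// nodeconditions.go: print node capacity and pressure conditions,
	// roughly what the node_conditions.go lines in the log verify.
	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		// Assumption: kubeconfig at the default location.
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
				n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			for _, c := range n.Status.Conditions {
				// The pressure conditions should be False and Ready should be True,
				// matching the Conditions table above.
				fmt.Printf("  %-16s %s (%s)\n", c.Type, c.Status, c.Reason)
			}
		}
	}
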
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [861dc747735da2748e3f4b24b824a36d0a52f89bbf91f6d93373e8e94ec47110] <==
	{"level":"info","ts":"2024-09-16T10:41:40.736533Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736587Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736603Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736848Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:40.736868Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:40.737630Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-16T10:41:40.737696Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:41:40.737861Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:41:40.737967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:41:42.324428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.326180Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-016570 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:41:42.326181Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:41:42.326207Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:41:42.326768Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:41:42.326898Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:41:42.328280Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:41:42.328298Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:41:42.329057Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:41:42.329116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> etcd [b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25] <==
	{"level":"info","ts":"2024-09-16T10:40:33.951156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.951169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.952133Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-016570 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:40:33.952149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952174Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.952458Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952538Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952886Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953028Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953056Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953277Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.954142Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.955433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:40:33.956074Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:41:32.555101Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:41:32.555186Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-016570","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-16T10:41:32.555307Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.555359Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.556893Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.556932Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:41:32.558356Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:41:32.560054Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:32.560161Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:32.560190Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-016570","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 10:42:03 up 24 min,  0 users,  load average: 1.07, 0.89, 0.57
	Linux functional-016570 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75] <==
	I0916 10:40:44.322865       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:40:44.323089       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:40:44.323250       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:40:44.323270       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:40:44.323292       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:40:44.644895       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:40:44.644910       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:40:44.644915       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:40:45.019853       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:40:45.019909       1 metrics.go:61] Registering metrics
	I0916 10:40:45.019969       1 controller.go:374] Syncing nftables rules
	I0916 10:40:54.644919       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:40:54.644955       1 main.go:299] handling current node
	I0916 10:41:04.651830       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:04.651862       1 main.go:299] handling current node
	I0916 10:41:14.648734       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:14.648784       1 main.go:299] handling current node
	
	
	==> kindnet [f40f6265fc1c666726cfc4dfc8b0637a32e85401949dfb2edee2619b1765db77] <==
	W0916 10:41:25.379051       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:25.379120       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:27.061180       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.061218       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:27.212636       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.212709       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:27.370312       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.370396       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:27.496085       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.496151       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:31.899869       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:31.899923       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:32.540969       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:32.541033       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:32.592684       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:32.592731       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:33.006411       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:33.006451       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0916 10:41:43.445970       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:41:43.446004       1 metrics.go:61] Registering metrics
	I0916 10:41:43.446082       1 controller.go:374] Syncing nftables rules
	I0916 10:41:43.844790       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:43.844836       1 main.go:299] handling current node
	I0916 10:41:53.844621       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:53.844659       1 main.go:299] handling current node
	
	
	==> kube-apiserver [d2500e97c949b2a2e8330e357558411d2893aaafdb03d1ef0422a5a6fb1cb12f] <==
	I0916 10:41:43.420112       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:41:43.420493       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:41:43.420521       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:41:43.422027       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:41:43.422383       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:41:43.422423       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:41:43.422975       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:41:43.423049       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:41:43.423247       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:41:43.424833       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:41:43.424920       1 policy_source.go:224] refreshing policies
	I0916 10:41:43.439068       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:41:43.466711       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:41:43.467901       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:41:43.471775       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:41:43.520505       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:41:44.269900       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:41:44.531189       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0916 10:41:44.532603       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:41:44.536818       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:41:44.865983       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:41:44.962527       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:41:44.972551       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:41:45.027823       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:41:45.034699       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [485f2c5cef235c0182e1a64e3a548bea54de9894193e120b2b717a72b9ef1bff] <==
	I0916 10:41:23.729467       1 serving.go:386] Generated self-signed cert in-memory
	I0916 10:41:24.021064       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 10:41:24.021095       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:41:24.022441       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:41:24.022441       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:41:24.022737       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 10:41:24.022823       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0916 10:41:34.024812       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [c8262cd23469ca086f00137b1fc38c96429b63d514a16f9a905da144ecd2b73c] <==
	I0916 10:41:46.780797       1 shared_informer.go:320] Caches are synced for disruption
	I0916 10:41:46.780846       1 shared_informer.go:320] Caches are synced for ephemeral
	I0916 10:41:46.780848       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 10:41:46.783063       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0916 10:41:46.783327       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0916 10:41:46.784246       1 shared_informer.go:320] Caches are synced for node
	I0916 10:41:46.784273       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0916 10:41:46.784322       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0916 10:41:46.784362       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0916 10:41:46.784364       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0916 10:41:46.784400       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0916 10:41:46.784405       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0916 10:41:46.784444       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-016570"
	I0916 10:41:46.786641       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0916 10:41:46.788698       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 10:41:46.850786       1 shared_informer.go:320] Caches are synced for namespace
	I0916 10:41:46.880950       1 shared_informer.go:320] Caches are synced for service account
	I0916 10:41:46.891102       1 shared_informer.go:320] Caches are synced for endpoint
	I0916 10:41:46.962433       1 shared_informer.go:320] Caches are synced for cronjob
	I0916 10:41:46.980794       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0916 10:41:46.986446       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:41:47.004815       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:41:47.398605       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:41:47.430012       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:41:47.430044       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [2810e4a54675045b91d6e2b6996d5595fca99d1ed910f700a9440b05c934282a] <==
	I0916 10:41:23.344360       1 server_linux.go:66] "Using iptables proxy"
	E0916 10:41:23.466145       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:24.666043       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:26.980270       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:31.272411       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:40.759863       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I0916 10:41:57.573875       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:41:57.573958       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:41:57.593375       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:41:57.593437       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:41:57.595286       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:41:57.595663       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:41:57.595695       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:41:57.596863       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:41:57.596985       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:41:57.597145       1 config.go:199] "Starting service config controller"
	I0916 10:41:57.597451       1 config.go:328] "Starting node config controller"
	I0916 10:41:57.597468       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:41:57.597345       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:41:57.697968       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:41:57.698030       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:41:57.698035       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f] <==
	I0916 10:40:44.050300       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:40:44.179052       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:40:44.179113       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:40:44.196854       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:40:44.196926       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:40:44.198841       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:40:44.199284       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:40:44.199313       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:40:44.200417       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:40:44.200434       1 config.go:199] "Starting service config controller"
	I0916 10:40:44.200460       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:40:44.200461       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:40:44.200579       1 config.go:328] "Starting node config controller"
	I0916 10:40:44.200592       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:40:44.300956       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:40:44.300944       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:40:44.300945       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee] <==
	E0916 10:40:35.625982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:40:35.626272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:40:35.627185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.485194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:40:36.485283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.534754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.534802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.553649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:40:36.553693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.739511       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:40:36.739572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.763525       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:40:36.763583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.768041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:40:36.768094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.782504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.782564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.919306       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:40:36.919357       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:40:39.247701       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:41:22.445138       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 10:41:22.445225       1 run.go:72] "command failed" err="finished without leader elect"
	I0916 10:41:22.445244       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	
	
	==> kube-scheduler [9ff9913af2feb41a804690d65aef168822cd2ac0a456e3642182c64337903889] <==
	W0916 10:41:33.309245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:33.309313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:34.856594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:34.856642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:39.800183       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:39.800261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.256311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.256386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.442961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.443018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.559680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.559763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:43.341030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0916 10:41:43.341153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:41:43.341209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:41:43.341177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341402       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341470       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:41:43.341491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341579       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:41:43.775320       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:41:43 functional-016570 kubelet[5394]: I0916 10:41:43.532467    5394 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:41:43 functional-016570 kubelet[5394]: I0916 10:41:43.533188    5394 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:41:43 functional-016570 kubelet[5394]: I0916 10:41:43.937013    5394 apiserver.go:52] "Watching apiserver"
	Sep 16 10:41:43 functional-016570 kubelet[5394]: time="2024-09-16T10:41:43Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/perf_event/kubepods/burstable/pod5333b7f22b4ca6fa3369f64c875d053e/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961: device or resource busy"
	Sep 16 10:41:43 functional-016570 kubelet[5394]: time="2024-09-16T10:41:43Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/freezer/kubepods/burstable/pod5333b7f22b4ca6fa3369f64c875d053e/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961: device or resource busy"
	Sep 16 10:41:43 functional-016570 kubelet[5394]: time="2024-09-16T10:41:43Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/systemd/kubepods/burstable/pod5333b7f22b4ca6fa3369f64c875d053e/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961: device or resource busy"
	Sep 16 10:41:43 functional-016570 kubelet[5394]: time="2024-09-16T10:41:43Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/unified/kubepods/burstable/pod5333b7f22b4ca6fa3369f64c875d053e/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961: device or resource busy"
	Sep 16 10:41:43 functional-016570 kubelet[5394]: time="2024-09-16T10:41:43Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/devices/kubepods/burstable/pod5333b7f22b4ca6fa3369f64c875d053e/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961: device or resource busy"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.038387    5394 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039162    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9924f10d-5beb-43b1-9782-44644a015b56-tmp\") pod \"storage-provisioner\" (UID: \"9924f10d-5beb-43b1-9782-44644a015b56\") " pod="kube-system/storage-provisioner"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039226    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4a00283-1d69-49c4-8c60-264ef3fd7aca-lib-modules\") pod \"kube-proxy-w8qkq\" (UID: \"b4a00283-1d69-49c4-8c60-264ef3fd7aca\") " pod="kube-system/kube-proxy-w8qkq"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039244    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-cni-cfg\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039258    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-xtables-lock\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039324    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-lib-modules\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039348    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4a00283-1d69-49c4-8c60-264ef3fd7aca-xtables-lock\") pod \"kube-proxy-w8qkq\" (UID: \"b4a00283-1d69-49c4-8c60-264ef3fd7aca\") " pod="kube-system/kube-proxy-w8qkq"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.056492    5394 scope.go:117] "RemoveContainer" containerID="c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.056894    5394 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-016570" podUID="03b56925-37e8-4f4c-947d-8798a9b0b1e8"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.080422    5394 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-016570"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.241180    5394 scope.go:117] "RemoveContainer" containerID="490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: E0916 10:41:44.242240    5394 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvpkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod storage-provisioner_kube-system(9924f10d-5beb-43b1-9782-44644a015b56): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: E0916 10:41:44.243435    5394 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="9924f10d-5beb-43b1-9782-44644a015b56"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.264872    5394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-016570" podStartSLOduration=0.264847622 podStartE2EDuration="264.847622ms" podCreationTimestamp="2024-09-16 10:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:41:44.264526883 +0000 UTC m=+10.389456446" watchObservedRunningTime="2024-09-16 10:41:44.264847622 +0000 UTC m=+10.389777177"
	Sep 16 10:41:45 functional-016570 kubelet[5394]: I0916 10:41:45.059426    5394 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-016570" podUID="03b56925-37e8-4f4c-947d-8798a9b0b1e8"
	Sep 16 10:41:45 functional-016570 kubelet[5394]: I0916 10:41:45.955614    5394 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5333b7f22b4ca6fa3369f64c875d053e" path="/var/lib/kubelet/pods/5333b7f22b4ca6fa3369f64c875d053e/volumes"
	Sep 16 10:41:54 functional-016570 kubelet[5394]: I0916 10:41:54.952651    5394 scope.go:117] "RemoveContainer" containerID="490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b"
	
	
	==> storage-provisioner [490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b] <==
	I0916 10:41:23.246262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 10:41:23.248507       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [4d490c9b7ae90d00a29d06ff834c0ce770ddfef68473b74c781044e7a283344c] <==
	I0916 10:41:55.019536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:41:55.026724       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:41:55.026761       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016570 -n functional-016570
helpers_test.go:261: (dbg) Run:  kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (435.241µs)
helpers_test.go:263: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/ComponentHealth (1.98s)
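
Every kubectl invocation in this test group dies with "fork/exec /usr/local/bin/kubectl: exec format error". That error is returned by execve(2) before kubectl runs a single instruction: the kernel refused to load the binary, which typically means an executable built for a different CPU architecture (or a truncated/corrupt download) is installed at /usr/local/bin/kubectl on this x86_64 host, not that the cluster is unhealthy. A minimal, hypothetical Go diagnostic (not part of the test suite) that compares the binary's ELF machine type against the host:

	package main

	import (
		"debug/elf"
		"fmt"
		"os"
		"runtime"
	)

	func main() {
		path := "/usr/local/bin/kubectl"
		f, err := elf.Open(path)
		if err != nil {
			// Not a loadable ELF at all, e.g. a truncated download or an HTML error page.
			fmt.Fprintf(os.Stderr, "%s: %v\n", path, err)
			os.Exit(1)
		}
		defer f.Close()
		// On this x86_64 host a runnable binary should report EM_X86_64;
		// anything else (EM_AARCH64, for instance) reproduces "exec format error".
		fmt.Printf("%s: ELF machine %v, host GOARCH %s\n", path, f.Machine, runtime.GOARCH)
	}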

                                                
                                    
x
+
TestFunctional/serial/InvalidService (0s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-016570 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-016570 apply -f testdata/invalidsvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (427.899µs)
functional_test.go:2323: kubectl --context functional-016570 apply -f testdata/invalidsvc.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/serial/InvalidService (0.00s)
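
The root cause matches the ComponentHealth failure above: /usr/local/bin/kubectl itself cannot be executed on this host, so testdata/invalidsvc.yaml was never applied and the test aborted in under a millisecond instead of exercising the invalid-service path.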

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (4.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-016570 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-016570 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-016570 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-016570 --alsologtostderr -v=1] stderr:
I0916 10:42:08.155418   55420 out.go:345] Setting OutFile to fd 1 ...
I0916 10:42:08.155662   55420 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:08.155692   55420 out.go:358] Setting ErrFile to fd 2...
I0916 10:42:08.155705   55420 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:08.156049   55420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
I0916 10:42:08.156440   55420 mustload.go:65] Loading cluster: functional-016570
I0916 10:42:08.157052   55420 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 10:42:08.157747   55420 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
I0916 10:42:08.179877   55420 host.go:66] Checking if "functional-016570" exists ...
I0916 10:42:08.180217   55420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0916 10:42:08.245325   55420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:42:08.23110295 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0916 10:42:08.245486   55420 api_server.go:166] Checking apiserver status ...
I0916 10:42:08.245537   55420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0916 10:42:08.245578   55420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
I0916 10:42:08.270382   55420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
I0916 10:42:08.366290   55420 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5628/cgroup
I0916 10:42:08.375958   55420 api_server.go:182] apiserver freezer: "10:freezer:/docker/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/kubepods/burstable/pod36dc10fb67abc1f7e7d56f544fb5fbe5/d2500e97c949b2a2e8330e357558411d2893aaafdb03d1ef0422a5a6fb1cb12f"
I0916 10:42:08.376038   55420 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/kubepods/burstable/pod36dc10fb67abc1f7e7d56f544fb5fbe5/d2500e97c949b2a2e8330e357558411d2893aaafdb03d1ef0422a5a6fb1cb12f/freezer.state
I0916 10:42:08.384657   55420 api_server.go:204] freezer state: "THAWED"
I0916 10:42:08.384690   55420 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0916 10:42:08.388791   55420 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0916 10:42:08.388833   55420 out.go:270] * Enabling dashboard ...
* Enabling dashboard ...
I0916 10:42:08.388976   55420 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 10:42:08.388992   55420 addons.go:69] Setting dashboard=true in profile "functional-016570"
I0916 10:42:08.388999   55420 addons.go:234] Setting addon dashboard=true in "functional-016570"
I0916 10:42:08.389023   55420 host.go:66] Checking if "functional-016570" exists ...
I0916 10:42:08.389365   55420 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
I0916 10:42:08.411959   55420 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0916 10:42:08.413488   55420 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0916 10:42:08.415382   55420 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0916 10:42:08.415402   55420 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0916 10:42:08.415450   55420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
I0916 10:42:08.433197   55420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
I0916 10:42:08.543271   55420 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0916 10:42:08.543323   55420 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0916 10:42:08.562002   55420 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0916 10:42:08.562027   55420 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0916 10:42:08.581178   55420 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0916 10:42:08.581203   55420 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0916 10:42:08.599342   55420 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0916 10:42:08.599365   55420 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0916 10:42:08.620911   55420 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
I0916 10:42:08.620940   55420 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0916 10:42:08.640456   55420 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0916 10:42:08.640481   55420 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0916 10:42:08.663009   55420 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0916 10:42:08.663039   55420 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0916 10:42:08.684127   55420 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0916 10:42:08.684152   55420 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0916 10:42:08.701814   55420 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0916 10:42:08.701838   55420 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0916 10:42:08.719857   55420 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0916 10:42:09.438948   55420 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-016570 addons enable metrics-server

                                                
                                                
I0916 10:42:09.441351   55420 addons.go:197] Writing out "functional-016570" config to set dashboard=true...
W0916 10:42:09.441666   55420 out.go:270] * Verifying dashboard health ...
* Verifying dashboard health ...
I0916 10:42:09.442652   55420 kapi.go:59] client config for functional-016570: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0916 10:42:09.522709   55420 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  0876e78b-0e24-4e9d-a079-ec2e0144ef3c 561 0 2024-09-16 10:42:09 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2024-09-16 10:42:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.107.44.131,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.107.44.131],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0916 10:42:09.522919   55420 out.go:270] * Launching proxy ...
I0916 10:42:09.522995   55420 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-016570 proxy --port 36195]
I0916 10:42:09.525324   55420 out.go:201] 
W0916 10:42:09.526758   55420 out.go:270] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: proxy start: fork/exec /usr/local/bin/kubectl: exec format error
W0916 10:42:09.526776   55420 out.go:270] * 
W0916 10:42:09.529063   55420 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0916 10:42:09.529938   55420 out.go:201] 
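The proxy never starts: "fork/exec /usr/local/bin/kubectl: exec format error" is the kernel refusing to execute the kubectl binary at all, which on this amd64 agent usually means the file is not an x86-64 ELF executable (a wrong-architecture download, or a truncated/corrupt file). A minimal diagnostic sketch, assuming shell access to the CI host; the expected outputs in the comments are assumptions, not captured from this run:

	# Is the binary a valid x86-64 ELF? (the host is x86_64 per the hostinfo line in the logs below)
	file /usr/local/bin/kubectl                      # expect: ELF 64-bit LSB executable, x86-64, ...
	uname -m                                         # expect: x86_64
	# A valid ELF binary starts with the magic bytes 7f 45 4c 46 ("\x7fELF")
	head -c 4 /usr/local/bin/kubectl | od -An -tx1

If the reported architecture does not match uname -m, reinstalling the matching kubectl build is the usual fix.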
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-016570
helpers_test.go:235: (dbg) docker inspect functional-016570:

-- stdout --
	[
	    {
	        "Id": "389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b",
	        "Created": "2024-09-16T10:40:22.437729239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44343,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:40:22.549860744Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hosts",
	        "LogPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b-json.log",
	        "Name": "/functional-016570",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-016570:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-016570",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-016570",
	                "Source": "/var/lib/docker/volumes/functional-016570/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-016570",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-016570",
	                "name.minikube.sigs.k8s.io": "functional-016570",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cea48287f6ef85c384c2d0fd7e890db0e03a4817385b96024c8e0a0688fd7962",
	            "SandboxKey": "/var/run/docker/netns/cea48287f6ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-016570": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0f0536305092a99e49a50f7adb5b668b5d3bb1c3c21d82e43be0c155c4e7cfe5",
	                    "EndpointID": "9201dc1ccce436b0c1f5c3cef087a9b08f43da4627ab9f3717ae9bbdfb5de9f3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-016570",
	                        "389fc216adb5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
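The helper records the full docker inspect dump above; when reading it by hand, a Go template passed via --format extracts just the fields that matter. A short sketch against the same container, with field paths taken from the JSON above:

	# Container state and init PID
	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' functional-016570
	# IP address on the profile network
	docker inspect -f '{{(index .NetworkSettings.Networks "functional-016570").IPAddress}}' functional-016570
	# Host port bound to the apiserver port 8441
	docker inspect -f '{{index .NetworkSettings.Ports "8441/tcp"}}' functional-016570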
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-016570 -n functional-016570
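The same Go-template mechanism drives the status check: --format renders minikube's status struct, so {{.Host}} comes back as a bare string the harness can match on. A sketch pulling the other standard status fields (Host, Kubelet, and APIServer are assumed to be the field names in this minikube version):

	out/minikube-linux-amd64 status -p functional-016570 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'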
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 logs -n 25: (1.744889067s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| start     | -p functional-016570                                                     | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:41 UTC | 16 Sep 24 10:42 UTC |
	|           | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|           | --wait=all                                                               |                   |         |         |                     |                     |
	| cp        | functional-016570 cp                                                     | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | testdata/cp-test.txt                                                     |                   |         |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config    | functional-016570 config unset                                           | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | cpus                                                                     |                   |         |         |                     |                     |
	| config    | functional-016570 config get                                             | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | cpus                                                                     |                   |         |         |                     |                     |
	| config    | functional-016570 config set                                             | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | cpus 2                                                                   |                   |         |         |                     |                     |
	| config    | functional-016570 config get                                             | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | cpus                                                                     |                   |         |         |                     |                     |
	| config    | functional-016570 config unset                                           | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | cpus                                                                     |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh -n                                                 | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | functional-016570 sudo cat                                               |                   |         |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| config    | functional-016570 config get                                             | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | cpus                                                                     |                   |         |         |                     |                     |
	| service   | functional-016570 service list                                           | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | -o json                                                                  |                   |         |         |                     |                     |
	| start     | -p functional-016570                                                     | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|           | --driver=docker                                                          |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| start     | -p functional-016570                                                     | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --dry-run --memory                                                       |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                                  |                   |         |         |                     |                     |
	|           | --driver=docker                                                          |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| cp        | functional-016570 cp                                                     | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | functional-016570:/home/docker/cp-test.txt                               |                   |         |         |                     |                     |
	|           | /tmp/TestFunctionalparallelCpCmd2928196455/001/cp-test.txt               |                   |         |         |                     |                     |
	| service   | functional-016570 service                                                | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --namespace=default --https                                              |                   |         |         |                     |                     |
	|           | --url hello-node                                                         |                   |         |         |                     |                     |
	| start     | -p functional-016570                                                     | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --dry-run --alsologtostderr                                              |                   |         |         |                     |                     |
	|           | -v=1 --driver=docker                                                     |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                                           |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh -n                                                 | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | functional-016570 sudo cat                                               |                   |         |         |                     |                     |
	|           | /home/docker/cp-test.txt                                                 |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | -p functional-016570                                                     |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                                   |                   |         |         |                     |                     |
	| service   | functional-016570                                                        | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | service hello-node --url                                                 |                   |         |         |                     |                     |
	|           | --format={{.IP}}                                                         |                   |         |         |                     |                     |
	| cp        | functional-016570 cp                                                     | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | testdata/cp-test.txt                                                     |                   |         |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                                          |                   |         |         |                     |                     |
	| service   | functional-016570 service                                                | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | hello-node --url                                                         |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh -n                                                 | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | functional-016570 sudo cat                                               |                   |         |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                                          |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh echo                                               | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | hello                                                                    |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh cat                                                | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | /etc/hostname                                                            |                   |         |         |                     |                     |
	| tunnel    | functional-016570 tunnel                                                 | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --alsologtostderr                                                        |                   |         |         |                     |                     |
	| tunnel    | functional-016570 tunnel                                                 | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --alsologtostderr                                                        |                   |         |         |                     |                     |
	|-----------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:42:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:42:07.925802   55153 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:42:07.926076   55153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:07.926087   55153 out.go:358] Setting ErrFile to fd 2...
	I0916 10:42:07.926094   55153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:07.926329   55153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:42:07.926917   55153 out.go:352] Setting JSON to false
	I0916 10:42:07.928083   55153 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1472,"bootTime":1726481856,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:42:07.928201   55153 start.go:139] virtualization: kvm guest
	I0916 10:42:07.930891   55153 out.go:177] * [functional-016570] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:42:07.933206   55153 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:42:07.933437   55153 notify.go:220] Checking for updates...
	I0916 10:42:07.936378   55153 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:42:07.937840   55153 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:42:07.939249   55153 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:42:07.940760   55153 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:42:07.942139   55153 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:42:07.944069   55153 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:42:07.944793   55153 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:42:07.980732   55153 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:42:07.980810   55153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:42:08.038252   55153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:42:08.026410213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:42:08.038402   55153 docker.go:318] overlay module found
	I0916 10:42:08.040535   55153 out.go:177] * Using the docker driver based on existing profile
	I0916 10:42:08.042029   55153 start.go:297] selected driver: docker
	I0916 10:42:08.042043   55153 start.go:901] validating driver "docker" against &{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:42:08.042118   55153 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:42:08.042187   55153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:42:08.096294   55153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:42:08.085371862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:42:08.096876   55153 cni.go:84] Creating CNI manager for ""
	I0916 10:42:08.096923   55153 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:42:08.096974   55153 start.go:340] cluster config:
	{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:42:08.098919   55153 out.go:177] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4d490c9b7ae90       6e38f40d628db       15 seconds ago       Running             storage-provisioner       2                   b81ffde02718d       storage-provisioner
	d2500e97c949b       6bab7719df100       30 seconds ago       Running             kube-apiserver            0                   9d885083d4265       kube-apiserver-functional-016570
	c8262cd23469c       175ffd71cce3d       30 seconds ago       Running             kube-controller-manager   2                   8b5d374851050       kube-controller-manager-functional-016570
	861dc747735da       2e96e5913fc06       30 seconds ago       Running             etcd                      1                   2cdebcb8c7807       etcd-functional-016570
	f40f6265fc1c6       12968670680f4       47 seconds ago       Running             kindnet-cni               1                   c7f56f796b013       kindnet-5qjpd
	490a48762f629       6e38f40d628db       47 seconds ago       Exited              storage-provisioner       1                   b81ffde02718d       storage-provisioner
	2810e4a546750       60c005f310ff3       47 seconds ago       Running             kube-proxy                1                   f4ed79f8dffeb       kube-proxy-w8qkq
	485f2c5cef235       175ffd71cce3d       47 seconds ago       Exited              kube-controller-manager   1                   8b5d374851050       kube-controller-manager-functional-016570
	9ff9913af2feb       9aa1fad941575       47 seconds ago       Running             kube-scheduler            1                   caa2007696d1b       kube-scheduler-functional-016570
	b8bd1849da6c4       c69fa2e9cbf5f       47 seconds ago       Running             coredns                   1                   3d9a434f8b6e5       coredns-7c65d6cfc9-59qm7
	fd0c81e7a39a2       c69fa2e9cbf5f       About a minute ago   Exited              coredns                   0                   3d9a434f8b6e5       coredns-7c65d6cfc9-59qm7
	bf96dac81b725       12968670680f4       About a minute ago   Exited              kindnet-cni               0                   c7f56f796b013       kindnet-5qjpd
	80095e847084f       60c005f310ff3       About a minute ago   Exited              kube-proxy                0                   f4ed79f8dffeb       kube-proxy-w8qkq
	0906c5e415b9c       9aa1fad941575       About a minute ago   Exited              kube-scheduler            0                   caa2007696d1b       kube-scheduler-functional-016570
	b4905826c508e       2e96e5913fc06       About a minute ago   Exited              etcd                      0                   2cdebcb8c7807       etcd-functional-016570
	
	
	==> containerd <==
	Sep 16 10:41:43 functional-016570 containerd[4401]: time="2024-09-16T10:41:43.986437102Z" level=warning msg="cleaning up after shim disconnected" id=5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961 namespace=k8s.io
	Sep 16 10:41:43 functional-016570 containerd[4401]: time="2024-09-16T10:41:43.986464976Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:41:44 functional-016570 containerd[4401]: time="2024-09-16T10:41:44.000890426Z" level=info msg="TearDown network for sandbox \"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961\" successfully"
	Sep 16 10:41:44 functional-016570 containerd[4401]: time="2024-09-16T10:41:44.000939894Z" level=info msg="StopPodSandbox for \"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961\" returns successfully"
	Sep 16 10:41:44 functional-016570 containerd[4401]: time="2024-09-16T10:41:44.057763587Z" level=info msg="RemoveContainer for \"c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171\""
	Sep 16 10:41:44 functional-016570 containerd[4401]: time="2024-09-16T10:41:44.064042865Z" level=info msg="RemoveContainer for \"c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171\" returns successfully"
	Sep 16 10:41:54 functional-016570 containerd[4401]: time="2024-09-16T10:41:54.955284514Z" level=info msg="CreateContainer within sandbox \"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
	Sep 16 10:41:54 functional-016570 containerd[4401]: time="2024-09-16T10:41:54.968501482Z" level=info msg="CreateContainer within sandbox \"b81ffde02718d4aa5d690a7b2df31a8489e3a52138ff033bbcd631c55c48cadf\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"4d490c9b7ae90d00a29d06ff834c0ce770ddfef68473b74c781044e7a283344c\""
	Sep 16 10:41:54 functional-016570 containerd[4401]: time="2024-09-16T10:41:54.969110908Z" level=info msg="StartContainer for \"4d490c9b7ae90d00a29d06ff834c0ce770ddfef68473b74c781044e7a283344c\""
	Sep 16 10:41:55 functional-016570 containerd[4401]: time="2024-09-16T10:41:55.011537068Z" level=info msg="StartContainer for \"4d490c9b7ae90d00a29d06ff834c0ce770ddfef68473b74c781044e7a283344c\" returns successfully"
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.031887436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-c5db448b4-jvhn6,Uid:f6bb1f19-917d-404c-9fab-b966f900a8c6,Namespace:kubernetes-dashboard,Attempt:0,}"
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.038361696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-695b96c756-64tpc,Uid:8930ff3f-4f5f-41f0-94be-b2685f45ca6c,Namespace:kubernetes-dashboard,Attempt:0,}"
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.081290451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.081519917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.081632718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.082467994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.134165522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.134288920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.134627837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.135160649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.239472370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-c5db448b4-jvhn6,Uid:f6bb1f19-917d-404c-9fab-b966f900a8c6,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"395f1e784acbe7fe418e3072ee9988263c9eb72e57f3f6cf96ac518590ba79da\""
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.244086489Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.246483361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-695b96c756-64tpc,Uid:8930ff3f-4f5f-41f0-94be-b2685f45ca6c,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"4817ed22e184b214d46870728ee1653ce49c9055d735e02434c8cea1b7d1de44\""
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.249544155Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.864690766Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	
	
	==> coredns [b8bd1849da6c464b7fbc64f004ea8f6e93596b309cb23e3f75f0493a6c22ebd5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59975 - 12600 "HINFO IN 4686966597786162674.2744546229077384558. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011873953s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44994 - 49719 "HINFO IN 5811560446017322614.7472127089541594346. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008166038s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-016570
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-016570
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-016570
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_40_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:40:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-016570
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:42:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-016570
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e0bbc72eb56431496705823b26dcd4d
	  System UUID:                9e1cbc84-d044-4524-b143-ca93a6dc5aa0
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-59qm7                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     88s
	  kube-system                 etcd-functional-016570                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         93s
	  kube-system                 kindnet-5qjpd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      88s
	  kube-system                 kube-apiserver-functional-016570             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-functional-016570    200m (2%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-w8qkq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-functional-016570             100m (1%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-jvhn6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-64tpc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 86s                kube-proxy       
	  Normal   Starting                 13s                kube-proxy       
	  Normal   NodeAllocatableEnforced  94s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 94s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 94s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  93s                kubelet          Node functional-016570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    93s                kubelet          Node functional-016570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     93s                kubelet          Node functional-016570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           89s                node-controller  Node functional-016570 event: Registered Node functional-016570 in Controller
	  Normal   Starting                 38s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 38s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  31s (x8 over 34s)  kubelet          Node functional-016570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    31s (x7 over 34s)  kubelet          Node functional-016570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     31s (x7 over 34s)  kubelet          Node functional-016570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           25s                node-controller  Node functional-016570 event: Registered Node functional-016570 in Controller
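	Note: the block above is plain "kubectl describe node" output; the Events table is the quickest place to see the two kubelet starts (94s and 38s ago) that bracket the functional-test restart. A sketch of how to re-capture it, or to get just the events in time order:
	
	  # full node description, as above
	  kubectl describe node functional-016570
	  # only this node's events, oldest first (events expire, so run soon after the test)
	  kubectl get events --field-selector involvedObject.name=functional-016570 --sort-by=.lastTimestamp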
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
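	Note: this dmesg section is the node's kernel ring buffer; with the Docker driver these are messages from the shared host kernel, so the MDS/TAA/MMIO lines describe the underlying GCP CPU rather than anything the test did. A sketch of how to re-read it, assuming the profile still exists:
	
	  # run dmesg inside the minikube node container
	  minikube -p functional-016570 ssh "sudo dmesg"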
	
	
	==> etcd [861dc747735da2748e3f4b24b824a36d0a52f89bbf91f6d93373e8e94ec47110] <==
	{"level":"info","ts":"2024-09-16T10:41:40.736533Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736587Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736603Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736848Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:40.736868Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:40.737630Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-16T10:41:40.737696Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:41:40.737861Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:41:40.737967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:41:42.324428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.326180Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-016570 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:41:42.326181Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:41:42.326207Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:41:42.326768Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:41:42.326898Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:41:42.328280Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:41:42.328298Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:41:42.329057Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:41:42.329116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> etcd [b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25] <==
	{"level":"info","ts":"2024-09-16T10:40:33.951156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.951169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.952133Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-016570 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:40:33.952149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952174Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.952458Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952538Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952886Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953028Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953056Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953277Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.954142Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.955433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:40:33.956074Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:41:32.555101Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:41:32.555186Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-016570","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-16T10:41:32.555307Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.555359Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.556893Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.556932Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:41:32.558356Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:41:32.560054Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:32.560161Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:32.560190Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-016570","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 10:42:11 up 24 min,  0 users,  load average: 1.30, 0.94, 0.59
	Linux functional-016570 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75] <==
	I0916 10:40:44.322865       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:40:44.323089       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:40:44.323250       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:40:44.323270       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:40:44.323292       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:40:44.644895       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:40:44.644910       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:40:44.644915       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:40:45.019853       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:40:45.019909       1 metrics.go:61] Registering metrics
	I0916 10:40:45.019969       1 controller.go:374] Syncing nftables rules
	I0916 10:40:54.644919       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:40:54.644955       1 main.go:299] handling current node
	I0916 10:41:04.651830       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:04.651862       1 main.go:299] handling current node
	I0916 10:41:14.648734       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:14.648784       1 main.go:299] handling current node
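	Note: the paired "Handling node ... / handling current node" lines every ten seconds are kindnet's periodic node-reconcile loop and indicate a healthy agent. A sketch for tailing the same log from outside the node (app=kindnet is the label minikube's kindnet DaemonSet usually carries; confirm with kubectl get pods --show-labels if it differs):
	
	  kubectl -n kube-system logs -l app=kindnet --tail=20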
	
	
	==> kindnet [f40f6265fc1c666726cfc4dfc8b0637a32e85401949dfb2edee2619b1765db77] <==
	W0916 10:41:27.061180       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.061218       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:27.212636       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.212709       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:27.370312       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.370396       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:27.496085       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.496151       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:31.899869       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:31.899923       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:32.540969       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:32.541033       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:32.592684       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:32.592731       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:33.006411       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:33.006451       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0916 10:41:43.445970       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:41:43.446004       1 metrics.go:61] Registering metrics
	I0916 10:41:43.446082       1 controller.go:374] Syncing nftables rules
	I0916 10:41:43.844790       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:43.844836       1 main.go:299] handling current node
	I0916 10:41:53.844621       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:53.844659       1 main.go:299] handling current node
	I0916 10:42:03.848081       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:42:03.848124       1 main.go:299] handling current node
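	Note: the connection-refused burst against 10.96.0.1:443 (10:41:27 through 10:41:33) is this kindnet instance retrying its informers while the apiserver is down; everything recovers by 10:41:43 when the caches sync. Once the apiserver is back, its readiness can be confirmed directly:
	
	  # itemized readiness checks from the restarted apiserver
	  kubectl get --raw='/readyz?verbose'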
	
	
	==> kube-apiserver [d2500e97c949b2a2e8330e357558411d2893aaafdb03d1ef0422a5a6fb1cb12f] <==
	I0916 10:41:43.422383       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:41:43.422423       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:41:43.422975       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:41:43.423049       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:41:43.423247       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:41:43.424833       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:41:43.424920       1 policy_source.go:224] refreshing policies
	I0916 10:41:43.439068       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:41:43.466711       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:41:43.467901       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:41:43.471775       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:41:43.520505       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:41:44.269900       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:41:44.531189       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0916 10:41:44.532603       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:41:44.536818       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:41:44.865983       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:41:44.962527       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:41:44.972551       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:41:45.027823       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:41:45.034699       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:42:09.231235       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:42:09.266000       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:42:09.379723       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.44.131"}
	I0916 10:42:09.429057       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.65.100"}
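	Note: the 10:42:09 entries record the kubernetes-dashboard namespace, ReplicaSets, and Services being created for the DashboardCmd test, including the two ClusterIP allocations. They can be cross-checked against the live objects:
	
	  kubectl -n kubernetes-dashboard get svc -o wide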
	
	
	==> kube-controller-manager [485f2c5cef235c0182e1a64e3a548bea54de9894193e120b2b717a72b9ef1bff] <==
	I0916 10:41:23.729467       1 serving.go:386] Generated self-signed cert in-memory
	I0916 10:41:24.021064       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 10:41:24.021095       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:41:24.022441       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:41:24.022441       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:41:24.022737       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 10:41:24.022823       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0916 10:41:34.024812       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
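	Note: this controller-manager instance never finished starting: it polled the apiserver's /healthz on 192.168.49.2:8441 (the non-default port this functional profile uses) until the timeout and exited, which is expected while the apiserver is being restarted. The same probe can be run by hand:
	
	  # -k because the apiserver's serving cert is not in the host trust store
	  curl -k https://192.168.49.2:8441/healthz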
	
	
	==> kube-controller-manager [c8262cd23469ca086f00137b1fc38c96429b63d514a16f9a905da144ecd2b73c] <==
	I0916 10:42:09.302022       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="32.383941ms"
	E0916 10:42:09.302063       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.302132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="21.672713ms"
	E0916 10:42:09.302164       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.322862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="19.394348ms"
	E0916 10:42:09.322901       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.322862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="19.742072ms"
	E0916 10:42:09.322918       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.330631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="5.717139ms"
	E0916 10:42:09.330672       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.330731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.175138ms"
	E0916 10:42:09.330740       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.338767       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.604226ms"
	E0916 10:42:09.338811       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.339183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.001104ms"
	E0916 10:42:09.339225       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.420221       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="39.404744ms"
	I0916 10:42:09.422278       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="41.065043ms"
	I0916 10:42:09.430005       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.676168ms"
	I0916 10:42:09.430194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="129.11µs"
	I0916 10:42:09.445018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="42.392µs"
	I0916 10:42:09.520924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="100.632911ms"
	I0916 10:42:09.521034       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="57.631µs"
	I0916 10:42:09.521082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="25.339µs"
	I0916 10:42:09.531158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="49.005µs"
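	Note: the repeated 'serviceaccount "kubernetes-dashboard" not found' errors are a startup race rather than a failure: the ReplicaSet controller keeps retrying pod creation until the namespace's ServiceAccount exists, and the later error-free "Finished syncing" lines show it resolving within the same second. To verify the objects settled:
	
	  kubectl -n kubernetes-dashboard get sa,rs,pods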
	
	
	==> kube-proxy [2810e4a54675045b91d6e2b6996d5595fca99d1ed910f700a9440b05c934282a] <==
	I0916 10:41:23.344360       1 server_linux.go:66] "Using iptables proxy"
	E0916 10:41:23.466145       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:24.666043       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:26.980270       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:31.272411       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:40.759863       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I0916 10:41:57.573875       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:41:57.573958       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:41:57.593375       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:41:57.593437       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:41:57.595286       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:41:57.595663       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:41:57.595695       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:41:57.596863       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:41:57.596985       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:41:57.597145       1 config.go:199] "Starting service config controller"
	I0916 10:41:57.597451       1 config.go:328] "Starting node config controller"
	I0916 10:41:57.597468       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:41:57.597345       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:41:57.697968       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:41:57.698030       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:41:57.698035       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f] <==
	I0916 10:40:44.050300       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:40:44.179052       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:40:44.179113       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:40:44.196854       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:40:44.196926       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:40:44.198841       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:40:44.199284       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:40:44.199313       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:40:44.200417       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:40:44.200434       1 config.go:199] "Starting service config controller"
	I0916 10:40:44.200460       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:40:44.200461       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:40:44.200579       1 config.go:328] "Starting node config controller"
	I0916 10:40:44.200592       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:40:44.300956       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:40:44.300944       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:40:44.300945       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee] <==
	E0916 10:40:35.625982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:40:35.626272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:40:35.627185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.485194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:40:36.485283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.534754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.534802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.553649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:40:36.553693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.739511       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:40:36.739572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.763525       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:40:36.763583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.768041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:40:36.768094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.782504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.782564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.919306       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:40:36.919357       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:40:39.247701       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:41:22.445138       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 10:41:22.445225       1 run.go:72] "command failed" err="finished without leader elect"
	I0916 10:41:22.445244       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
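	Note: the "forbidden" list/watch warnings before 10:40:39 are ordinary scheduler start-up noise while RBAC bindings propagate (its client-ca informer syncs at 10:40:39); the instance then exits at 10:41:22 with "finished without leader elect" when the cluster restarts. The scheduler's permissions can be spot-checked by impersonation:
	
	  # both should print "yes" once RBAC has propagated
	  kubectl auth can-i list nodes --as=system:kube-scheduler
	  kubectl auth can-i list csidrivers.storage.k8s.io --as=system:kube-scheduler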
	
	
	==> kube-scheduler [9ff9913af2feb41a804690d65aef168822cd2ac0a456e3642182c64337903889] <==
	W0916 10:41:33.309245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:33.309313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:34.856594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:34.856642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:39.800183       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:39.800261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.256311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.256386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.442961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.443018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.559680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.559763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:43.341030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0916 10:41:43.341153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:41:43.341209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:41:43.341177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341402       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341470       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:41:43.341491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341579       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:41:43.775320       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:41:43 functional-016570 kubelet[5394]: time="2024-09-16T10:41:43Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/devices/kubepods/burstable/pod5333b7f22b4ca6fa3369f64c875d053e/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961: device or resource busy"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.038387    5394 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039162    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9924f10d-5beb-43b1-9782-44644a015b56-tmp\") pod \"storage-provisioner\" (UID: \"9924f10d-5beb-43b1-9782-44644a015b56\") " pod="kube-system/storage-provisioner"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039226    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4a00283-1d69-49c4-8c60-264ef3fd7aca-lib-modules\") pod \"kube-proxy-w8qkq\" (UID: \"b4a00283-1d69-49c4-8c60-264ef3fd7aca\") " pod="kube-system/kube-proxy-w8qkq"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039244    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-cni-cfg\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039258    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-xtables-lock\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039324    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-lib-modules\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039348    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4a00283-1d69-49c4-8c60-264ef3fd7aca-xtables-lock\") pod \"kube-proxy-w8qkq\" (UID: \"b4a00283-1d69-49c4-8c60-264ef3fd7aca\") " pod="kube-system/kube-proxy-w8qkq"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.056492    5394 scope.go:117] "RemoveContainer" containerID="c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.056894    5394 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-016570" podUID="03b56925-37e8-4f4c-947d-8798a9b0b1e8"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.080422    5394 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-016570"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.241180    5394 scope.go:117] "RemoveContainer" containerID="490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: E0916 10:41:44.242240    5394 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvpkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod storage-provisioner_kube-system(9924f10d-5beb-43b1-9782-44644a015b56): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: E0916 10:41:44.243435    5394 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="9924f10d-5beb-43b1-9782-44644a015b56"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.264872    5394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-016570" podStartSLOduration=0.264847622 podStartE2EDuration="264.847622ms" podCreationTimestamp="2024-09-16 10:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:41:44.264526883 +0000 UTC m=+10.389456446" watchObservedRunningTime="2024-09-16 10:41:44.264847622 +0000 UTC m=+10.389777177"
	Sep 16 10:41:45 functional-016570 kubelet[5394]: I0916 10:41:45.059426    5394 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-016570" podUID="03b56925-37e8-4f4c-947d-8798a9b0b1e8"
	Sep 16 10:41:45 functional-016570 kubelet[5394]: I0916 10:41:45.955614    5394 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5333b7f22b4ca6fa3369f64c875d053e" path="/var/lib/kubelet/pods/5333b7f22b4ca6fa3369f64c875d053e/volumes"
	Sep 16 10:41:54 functional-016570 kubelet[5394]: I0916 10:41:54.952651    5394 scope.go:117] "RemoveContainer" containerID="490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: E0916 10:42:09.426875    5394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5333b7f22b4ca6fa3369f64c875d053e" containerName="kube-apiserver"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.427533    5394 memory_manager.go:354] "RemoveStaleState removing state" podUID="5333b7f22b4ca6fa3369f64c875d053e" containerName="kube-apiserver"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622483    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f6bb1f19-917d-404c-9fab-b966f900a8c6-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-jvhn6\" (UID: \"f6bb1f19-917d-404c-9fab-b966f900a8c6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-jvhn6"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622547    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8930ff3f-4f5f-41f0-94be-b2685f45ca6c-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-64tpc\" (UID: \"8930ff3f-4f5f-41f0-94be-b2685f45ca6c\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-64tpc"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622580    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcblv\" (UniqueName: \"kubernetes.io/projected/f6bb1f19-917d-404c-9fab-b966f900a8c6-kube-api-access-gcblv\") pod \"dashboard-metrics-scraper-c5db448b4-jvhn6\" (UID: \"f6bb1f19-917d-404c-9fab-b966f900a8c6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-jvhn6"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622611    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfd5x\" (UniqueName: \"kubernetes.io/projected/8930ff3f-4f5f-41f0-94be-b2685f45ca6c-kube-api-access-gfd5x\") pod \"kubernetes-dashboard-695b96c756-64tpc\" (UID: \"8930ff3f-4f5f-41f0-94be-b2685f45ca6c\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-64tpc"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.732338    5394 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	
	
	==> storage-provisioner [490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b] <==
	I0916 10:41:23.246262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 10:41:23.248507       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [4d490c9b7ae90d00a29d06ff834c0ce770ddfef68473b74c781044e7a283344c] <==
	I0916 10:41:55.019536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:41:55.026724       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:41:55.026761       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

-- /stdout --
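
The two storage-provisioner logs in the dump above bracket the kube-apiserver restart: the first instance (490a48762f629...) exited fatally at 10:41:23 because the in-cluster service VIP 10.96.0.1:443 refused connections while the apiserver was being replaced, and the kubelet's retry at 10:41:54 (container 4d490c9b7ae90) initialized cleanly once the new apiserver was serving. A minimal Go sketch of that reachability check, assuming it runs inside the cluster network (illustrative only, not minikube code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the kubernetes.default service VIP from the log above.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP not reachable:", err) // the state at 10:41:23
		return
	}
	conn.Close()
	fmt.Println("apiserver VIP reachable") // the state by 10:41:55
}
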
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016570 -n functional-016570
helpers_test.go:261: (dbg) Run:  kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (467.09µs)
helpers_test.go:263: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/DashboardCmd (4.02s)
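
Note on the failure mode above and in the tests that follow: "fork/exec /usr/local/bin/kubectl: exec format error" is the kernel's ENOEXEC, raised before kubectl ever runs, which is why each kubectl invocation fails in well under a millisecond. It almost always means the file at /usr/local/bin/kubectl is not a binary this host can execute, for example one built for the wrong architecture or truncated during download. A minimal Go sketch of the check one could run on the agent (a hypothetical helper, not part of this test suite):

package main

import (
	"debug/elf"
	"fmt"
	"os"
)

func main() {
	// Path copied from the failing log lines above; the rest is illustrative.
	const path = "/usr/local/bin/kubectl"
	f, err := elf.Open(path)
	if err != nil {
		// A truncated download or a non-ELF file (e.g. a saved HTML error
		// page) lands here, and exec'ing such a file yields ENOEXEC.
		fmt.Fprintf(os.Stderr, "%s: not a readable ELF binary: %v\n", path, err)
		os.Exit(1)
	}
	defer f.Close()
	fmt.Printf("%s: class=%v machine=%v type=%v\n", path, f.Class, f.Machine, f.Type)
	// The agent is linux/amd64 (see the hostinfo line further below), so
	// anything other than EM_X86_64 would explain "exec format error".
	if f.Machine != elf.EM_X86_64 {
		fmt.Println("binary architecture does not match this host")
	}
}

The shell equivalent is "file /usr/local/bin/kubectl", which prints the ELF class and target machine.
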

TestFunctional/parallel/ServiceCmdConnect (3.43s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-016570 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1629: (dbg) Non-zero exit: kubectl --context functional-016570 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8: fork/exec /usr/local/bin/kubectl: exec format error (457.592µs)
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-016570 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8": fork/exec /usr/local/bin/kubectl: exec format error.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-016570 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-016570 describe po hello-node-connect: fork/exec /usr/local/bin/kubectl: exec format error (395.237µs)
functional_test.go:1604: "kubectl --context functional-016570 describe po hello-node-connect" failed: fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-016570 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-016570 logs -l app=hello-node-connect: fork/exec /usr/local/bin/kubectl: exec format error (353.763µs)
functional_test.go:1610: "kubectl --context functional-016570 logs -l app=hello-node-connect" failed: fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-016570 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-016570 describe svc hello-node-connect: fork/exec /usr/local/bin/kubectl: exec format error (360.231µs)
functional_test.go:1616: "kubectl --context functional-016570 describe svc hello-node-connect" failed: fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-016570
helpers_test.go:235: (dbg) docker inspect functional-016570:

-- stdout --
	[
	    {
	        "Id": "389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b",
	        "Created": "2024-09-16T10:40:22.437729239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44343,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:40:22.549860744Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hosts",
	        "LogPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b-json.log",
	        "Name": "/functional-016570",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-016570:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-016570",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-016570",
	                "Source": "/var/lib/docker/volumes/functional-016570/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-016570",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-016570",
	                "name.minikube.sigs.k8s.io": "functional-016570",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cea48287f6ef85c384c2d0fd7e890db0e03a4817385b96024c8e0a0688fd7962",
	            "SandboxKey": "/var/run/docker/netns/cea48287f6ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-016570": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0f0536305092a99e49a50f7adb5b668b5d3bb1c3c21d82e43be0c155c4e7cfe5",
	                    "EndpointID": "9201dc1ccce436b0c1f5c3cef087a9b08f43da4627ab9f3717ae9bbdfb5de9f3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-016570",
	                        "389fc216adb5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-016570 -n functional-016570
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 logs -n 25: (2.711371439s)
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|-----------|------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                            Args                            |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| config    | functional-016570 config get                               | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | cpus                                                       |                   |         |         |                     |                     |
	| service   | functional-016570 service list                             | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | -o json                                                    |                   |         |         |                     |                     |
	| start     | -p functional-016570                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --dry-run --memory                                         |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                    |                   |         |         |                     |                     |
	|           | --driver=docker                                            |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                             |                   |         |         |                     |                     |
	| start     | -p functional-016570                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --dry-run --memory                                         |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                    |                   |         |         |                     |                     |
	|           | --driver=docker                                            |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                             |                   |         |         |                     |                     |
	| cp        | functional-016570 cp                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | functional-016570:/home/docker/cp-test.txt                 |                   |         |         |                     |                     |
	|           | /tmp/TestFunctionalparallelCpCmd2928196455/001/cp-test.txt |                   |         |         |                     |                     |
	| service   | functional-016570 service                                  | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --namespace=default --https                                |                   |         |         |                     |                     |
	|           | --url hello-node                                           |                   |         |         |                     |                     |
	| start     | -p functional-016570                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --dry-run --alsologtostderr                                |                   |         |         |                     |                     |
	|           | -v=1 --driver=docker                                       |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                             |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh -n                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | functional-016570 sudo cat                                 |                   |         |         |                     |                     |
	|           | /home/docker/cp-test.txt                                   |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                         | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | -p functional-016570                                       |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                     |                   |         |         |                     |                     |
	| service   | functional-016570                                          | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | service hello-node --url                                   |                   |         |         |                     |                     |
	|           | --format={{.IP}}                                           |                   |         |         |                     |                     |
	| cp        | functional-016570 cp                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | testdata/cp-test.txt                                       |                   |         |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                            |                   |         |         |                     |                     |
	| service   | functional-016570 service                                  | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | hello-node --url                                           |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh -n                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | functional-016570 sudo cat                                 |                   |         |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                            |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh echo                                 | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | hello                                                      |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh cat                                  | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | /etc/hostname                                              |                   |         |         |                     |                     |
	| tunnel    | functional-016570 tunnel                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --alsologtostderr                                          |                   |         |         |                     |                     |
	| tunnel    | functional-016570 tunnel                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --alsologtostderr                                          |                   |         |         |                     |                     |
	| tunnel    | functional-016570 tunnel                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --alsologtostderr                                          |                   |         |         |                     |                     |
	| license   |                                                            | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	| ssh       | functional-016570 ssh sudo                                 | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | systemctl is-active docker                                 |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh sudo                                 | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | systemctl is-active crio                                   |                   |         |         |                     |                     |
	| image     | functional-016570 image load --daemon                      | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | kicbase/echo-server:functional-016570                      |                   |         |         |                     |                     |
	|           | --alsologtostderr                                          |                   |         |         |                     |                     |
	| addons    | functional-016570 addons list                              | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	| addons    | functional-016570 addons list                              | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | -o json                                                    |                   |         |         |                     |                     |
	| image     | functional-016570 image ls                                 | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|-----------|------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:42:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:42:07.925802   55153 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:42:07.926076   55153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:07.926087   55153 out.go:358] Setting ErrFile to fd 2...
	I0916 10:42:07.926094   55153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:07.926329   55153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:42:07.926917   55153 out.go:352] Setting JSON to false
	I0916 10:42:07.928083   55153 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1472,"bootTime":1726481856,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:42:07.928201   55153 start.go:139] virtualization: kvm guest
	I0916 10:42:07.930891   55153 out.go:177] * [functional-016570] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:42:07.933206   55153 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:42:07.933437   55153 notify.go:220] Checking for updates...
	I0916 10:42:07.936378   55153 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:42:07.937840   55153 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:42:07.939249   55153 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:42:07.940760   55153 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:42:07.942139   55153 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:42:07.944069   55153 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:42:07.944793   55153 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:42:07.980732   55153 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:42:07.980810   55153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:42:08.038252   55153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:42:08.026410213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:42:08.038402   55153 docker.go:318] overlay module found
	I0916 10:42:08.040535   55153 out.go:177] * Using the docker driver based on existing profile
	I0916 10:42:08.042029   55153 start.go:297] selected driver: docker
	I0916 10:42:08.042043   55153 start.go:901] validating driver "docker" against &{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:42:08.042118   55153 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:42:08.042187   55153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:42:08.096294   55153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:42:08.085371862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:42:08.096876   55153 cni.go:84] Creating CNI manager for ""
	I0916 10:42:08.096923   55153 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:42:08.096974   55153 start.go:340] cluster config:
	{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:42:08.098919   55153 out.go:177] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	7f85730bd4d94       115053965e86b       4 seconds ago        Running             dashboard-metrics-scraper   0                   395f1e784acbe       dashboard-metrics-scraper-c5db448b4-jvhn6
	4d490c9b7ae90       6e38f40d628db       21 seconds ago       Running             storage-provisioner         2                   b81ffde02718d       storage-provisioner
	d2500e97c949b       6bab7719df100       36 seconds ago       Running             kube-apiserver              0                   9d885083d4265       kube-apiserver-functional-016570
	c8262cd23469c       175ffd71cce3d       36 seconds ago       Running             kube-controller-manager     2                   8b5d374851050       kube-controller-manager-functional-016570
	861dc747735da       2e96e5913fc06       36 seconds ago       Running             etcd                        1                   2cdebcb8c7807       etcd-functional-016570
	f40f6265fc1c6       12968670680f4       53 seconds ago       Running             kindnet-cni                 1                   c7f56f796b013       kindnet-5qjpd
	490a48762f629       6e38f40d628db       53 seconds ago       Exited              storage-provisioner         1                   b81ffde02718d       storage-provisioner
	2810e4a546750       60c005f310ff3       53 seconds ago       Running             kube-proxy                  1                   f4ed79f8dffeb       kube-proxy-w8qkq
	485f2c5cef235       175ffd71cce3d       53 seconds ago       Exited              kube-controller-manager     1                   8b5d374851050       kube-controller-manager-functional-016570
	9ff9913af2feb       9aa1fad941575       53 seconds ago       Running             kube-scheduler              1                   caa2007696d1b       kube-scheduler-functional-016570
	b8bd1849da6c4       c69fa2e9cbf5f       53 seconds ago       Running             coredns                     1                   3d9a434f8b6e5       coredns-7c65d6cfc9-59qm7
	fd0c81e7a39a2       c69fa2e9cbf5f       About a minute ago   Exited              coredns                     0                   3d9a434f8b6e5       coredns-7c65d6cfc9-59qm7
	bf96dac81b725       12968670680f4       About a minute ago   Exited              kindnet-cni                 0                   c7f56f796b013       kindnet-5qjpd
	80095e847084f       60c005f310ff3       About a minute ago   Exited              kube-proxy                  0                   f4ed79f8dffeb       kube-proxy-w8qkq
	0906c5e415b9c       9aa1fad941575       About a minute ago   Exited              kube-scheduler              0                   caa2007696d1b       kube-scheduler-functional-016570
	b4905826c508e       2e96e5913fc06       About a minute ago   Exited              etcd                        0                   2cdebcb8c7807       etcd-functional-016570
	
	
	==> containerd <==
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.246483361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-695b96c756-64tpc,Uid:8930ff3f-4f5f-41f0-94be-b2685f45ca6c,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"4817ed22e184b214d46870728ee1653ce49c9055d735e02434c8cea1b7d1de44\""
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.249544155Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.864690766Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.802821956Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.803624095Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=19757298"
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.804936014Z" level=info msg="ImageCreate event name:\"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.806015778Z" level=info msg="Pulled image \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" with image id \"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7\", repo tag \"\", repo digest \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\", size \"19746404\" in 2.561692105s"
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.806055104Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" returns image reference \"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7\""
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.807348878Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.808543096Z" level=info msg="CreateContainer within sandbox \"395f1e784acbe7fe418e3072ee9988263c9eb72e57f3f6cf96ac518590ba79da\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.808771404Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.819891593Z" level=info msg="CreateContainer within sandbox \"395f1e784acbe7fe418e3072ee9988263c9eb72e57f3f6cf96ac518590ba79da\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"7f85730bd4d94cf4fd50f2f026d885dc30954aa647a12600ead0766929560615\""
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.820422212Z" level=info msg="StartContainer for \"7f85730bd4d94cf4fd50f2f026d885dc30954aa647a12600ead0766929560615\""
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.860740687Z" level=info msg="StartContainer for \"7f85730bd4d94cf4fd50f2f026d885dc30954aa647a12600ead0766929560615\" returns successfully"
	Sep 16 10:42:13 functional-016570 containerd[4401]: time="2024-09-16T10:42:13.406600747Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 16 10:42:15 functional-016570 containerd[4401]: time="2024-09-16T10:42:15.364367680Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-016570\""
	Sep 16 10:42:15 functional-016570 containerd[4401]: time="2024-09-16T10:42:15.368267160Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:15 functional-016570 containerd[4401]: time="2024-09-16T10:42:15.368636212Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-016570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:16 functional-016570 containerd[4401]: time="2024-09-16T10:42:16.526168544Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-016570\""
	Sep 16 10:42:16 functional-016570 containerd[4401]: time="2024-09-16T10:42:16.527693363Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-016570\""
	Sep 16 10:42:16 functional-016570 containerd[4401]: time="2024-09-16T10:42:16.528855686Z" level=info msg="ImageDelete event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
	Sep 16 10:42:16 functional-016570 containerd[4401]: time="2024-09-16T10:42:16.536011694Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-016570\" returns successfully"
	Sep 16 10:42:16 functional-016570 containerd[4401]: time="2024-09-16T10:42:16.930330952Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-016570\""
	Sep 16 10:42:16 functional-016570 containerd[4401]: time="2024-09-16T10:42:16.935275465Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:16 functional-016570 containerd[4401]: time="2024-09-16T10:42:16.935983899Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-016570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
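
The recurring containerd entries above, failed to decode hosts.toml with error "invalid `host` tree", indicate a registry hosts file (normally under /etc/containerd/certs.d/<registry>/hosts.toml) whose layout containerd's decoder rejects; the "Pulled image" lines earlier in this section show pulls still completing, so the malformed file appears to be noisy rather than fatal in this run. For reference, a hosts.toml in the shape the decoder accepts looks roughly like the following sketch, where the mirror URL is a placeholder and nothing here is taken from this run:

# hypothetical /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://registry-1.docker.io"

[host."https://mirror.example.com"]  # placeholder mirror endpoint
  capabilities = ["pull", "resolve"]
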
	
	
	==> coredns [b8bd1849da6c464b7fbc64f004ea8f6e93596b309cb23e3f75f0493a6c22ebd5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59975 - 12600 "HINFO IN 4686966597786162674.2744546229077384558. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011873953s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
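
The connection-refused errors above all point at 10.96.0.1:443, the in-cluster kubernetes Service VIP, and line up with the apiserver restart recorded further down in this log; CoreDNS eventually starts with an unsynced Kubernetes API rather than keep waiting. A quick way to confirm the VIP was the culprit (hypothetical follow-up commands, not part of the captured run):

	# The kubernetes Service should front the apiserver endpoint
	kubectl get svc kubernetes
	kubectl get endpoints kubernetes
	# Re-check CoreDNS once the apiserver answers again
	kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50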
	
	
	==> coredns [fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44994 - 49719 "HINFO IN 5811560446017322614.7472127089541594346. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008166038s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-016570
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-016570
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-016570
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_40_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:40:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-016570
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:42:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-016570
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e0bbc72eb56431496705823b26dcd4d
	  System UUID:                9e1cbc84-d044-4524-b143-ca93a6dc5aa0
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-59qm7                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     94s
	  kube-system                 etcd-functional-016570                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         99s
	  kube-system                 kindnet-5qjpd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      94s
	  kube-system                 kube-apiserver-functional-016570             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-functional-016570    200m (2%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-proxy-w8qkq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-functional-016570             100m (1%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-jvhn6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-64tpc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 93s                kube-proxy       
	  Normal   Starting                 19s                kube-proxy       
	  Normal   NodeAllocatableEnforced  100s               kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 100s               kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 100s               kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  99s                kubelet          Node functional-016570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    99s                kubelet          Node functional-016570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     99s                kubelet          Node functional-016570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           95s                node-controller  Node functional-016570 event: Registered Node functional-016570 in Controller
	  Normal   Starting                 44s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 44s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  40s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  37s (x8 over 40s)  kubelet          Node functional-016570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    37s (x7 over 40s)  kubelet          Node functional-016570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     37s (x7 over 40s)  kubelet          Node functional-016570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node functional-016570 event: Registered Node functional-016570 in Controller
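
Note the doubled kubelet event sets above: one "Starting kubelet." at ~100s and another at ~44s, each followed by its own CgroupV1 warning and NodeHasSufficient* transitions, which is the kubelet restart this functional test performs. The same view can be reproduced against this profile (assumed commands, not captured here):

	kubectl describe node functional-016570
	# or via minikube's bundled kubectl
	minikube -p functional-016570 kubectl -- describe node functional-016570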
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [861dc747735da2748e3f4b24b824a36d0a52f89bbf91f6d93373e8e94ec47110] <==
	{"level":"info","ts":"2024-09-16T10:41:40.736533Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736587Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736603Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736848Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:40.736868Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:40.737630Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-16T10:41:40.737696Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:41:40.737861Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:41:40.737967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:41:42.324428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.326180Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-016570 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:41:42.326181Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:41:42.326207Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:41:42.326768Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:41:42.326898Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:41:42.328280Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:41:42.328298Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:41:42.329057Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:41:42.329116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> etcd [b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25] <==
	{"level":"info","ts":"2024-09-16T10:40:33.951156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.951169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.952133Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-016570 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:40:33.952149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952174Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.952458Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952538Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952886Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953028Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953056Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953277Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.954142Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.955433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:40:33.956074Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:41:32.555101Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:41:32.555186Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-016570","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-16T10:41:32.555307Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.555359Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.556893Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.556932Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:41:32.558356Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:41:32.560054Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:32.560161Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:32.560190Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-016570","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 10:42:17 up 24 min,  0 users,  load average: 1.28, 0.94, 0.59
	Linux functional-016570 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75] <==
	I0916 10:40:44.322865       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:40:44.323089       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:40:44.323250       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:40:44.323270       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:40:44.323292       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:40:44.644895       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:40:44.644910       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:40:44.644915       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:40:45.019853       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:40:45.019909       1 metrics.go:61] Registering metrics
	I0916 10:40:45.019969       1 controller.go:374] Syncing nftables rules
	I0916 10:40:54.644919       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:40:54.644955       1 main.go:299] handling current node
	I0916 10:41:04.651830       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:04.651862       1 main.go:299] handling current node
	I0916 10:41:14.648734       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:14.648784       1 main.go:299] handling current node
	
	
	==> kindnet [f40f6265fc1c666726cfc4dfc8b0637a32e85401949dfb2edee2619b1765db77] <==
	W0916 10:41:27.212636       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.212709       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:27.370312       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.370396       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:27.496085       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.496151       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:31.899869       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:31.899923       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:32.540969       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:32.541033       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:32.592684       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:32.592731       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:33.006411       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:33.006451       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0916 10:41:43.445970       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:41:43.446004       1 metrics.go:61] Registering metrics
	I0916 10:41:43.446082       1 controller.go:374] Syncing nftables rules
	I0916 10:41:43.844790       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:43.844836       1 main.go:299] handling current node
	I0916 10:41:53.844621       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:53.844659       1 main.go:299] handling current node
	I0916 10:42:03.848081       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:42:03.848124       1 main.go:299] handling current node
	I0916 10:42:13.845336       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:42:13.845386       1 main.go:299] handling current node
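
This is the same startup race seen in the CoreDNS log: kindnet's reflectors cannot list nodes, pods, or network policies while the apiserver is down, then all caches sync at 10:41:43 and the node-handling loop resumes. Had it kept failing, the pod logs would be the place to look (hypothetical command, assuming minikube's usual app=kindnet label):

	kubectl -n kube-system logs -l app=kindnet --tail=20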
	
	
	==> kube-apiserver [d2500e97c949b2a2e8330e357558411d2893aaafdb03d1ef0422a5a6fb1cb12f] <==
	I0916 10:41:43.422383       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:41:43.422423       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:41:43.422975       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:41:43.423049       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:41:43.423247       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:41:43.424833       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:41:43.424920       1 policy_source.go:224] refreshing policies
	I0916 10:41:43.439068       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:41:43.466711       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:41:43.467901       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:41:43.471775       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:41:43.520505       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:41:44.269900       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:41:44.531189       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0916 10:41:44.532603       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:41:44.536818       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:41:44.865983       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:41:44.962527       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:41:44.972551       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:41:45.027823       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:41:45.034699       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:42:09.231235       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:42:09.266000       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:42:09.379723       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.44.131"}
	I0916 10:42:09.429057       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.65.100"}
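
The two "allocated clusterIPs" lines at 10:42:09 mark the dashboard Services being created, which matches the 8s-old kubernetes-dashboard pods in the node description above. A hedged cross-check:

	kubectl -n kubernetes-dashboard get svc -o wide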
	
	
	==> kube-controller-manager [485f2c5cef235c0182e1a64e3a548bea54de9894193e120b2b717a72b9ef1bff] <==
	I0916 10:41:23.729467       1 serving.go:386] Generated self-signed cert in-memory
	I0916 10:41:24.021064       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 10:41:24.021095       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:41:24.022441       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:41:24.022441       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:41:24.022737       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 10:41:24.022823       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0916 10:41:34.024812       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
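
This controller-manager instance never got past startup: it waited the full timeout for a healthy apiserver on 192.168.49.2:8441 (this functional profile runs the apiserver on 8441 rather than minikube's default 8443) and then exited. The probe it gave up on can be replayed by hand (hypothetical, from the host):

	# Same healthz endpoint the controller manager polls; -k skips TLS verification
	curl -k https://192.168.49.2:8441/healthz
	# More granular readiness detail once it responds
	kubectl get --raw '/readyz?verbose'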
	
	
	==> kube-controller-manager [c8262cd23469ca086f00137b1fc38c96429b63d514a16f9a905da144ecd2b73c] <==
	I0916 10:42:09.302132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="21.672713ms"
	E0916 10:42:09.302164       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.322862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="19.394348ms"
	E0916 10:42:09.322901       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.322862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="19.742072ms"
	E0916 10:42:09.322918       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.330631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="5.717139ms"
	E0916 10:42:09.330672       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.330731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.175138ms"
	E0916 10:42:09.330740       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.338767       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.604226ms"
	E0916 10:42:09.338811       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.339183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.001104ms"
	E0916 10:42:09.339225       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.420221       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="39.404744ms"
	I0916 10:42:09.422278       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="41.065043ms"
	I0916 10:42:09.430005       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.676168ms"
	I0916 10:42:09.430194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="129.11µs"
	I0916 10:42:09.445018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="42.392µs"
	I0916 10:42:09.520924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="100.632911ms"
	I0916 10:42:09.521034       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="57.631µs"
	I0916 10:42:09.521082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="25.339µs"
	I0916 10:42:09.531158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="49.005µs"
	I0916 10:42:13.153143       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.141174ms"
	I0916 10:42:13.153260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="55.696µs"
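
The burst of "serviceaccount \"kubernetes-dashboard\" not found" errors is a benign ordering race: the ReplicaSets were created before their ServiceAccount, so pod creation is forbidden until the account exists, and the retries succeed by 10:42:09.42. Whether the account landed can be confirmed with (hypothetical):

	kubectl -n kubernetes-dashboard get serviceaccounts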
	
	
	==> kube-proxy [2810e4a54675045b91d6e2b6996d5595fca99d1ed910f700a9440b05c934282a] <==
	I0916 10:41:23.344360       1 server_linux.go:66] "Using iptables proxy"
	E0916 10:41:23.466145       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:24.666043       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:26.980270       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:31.272411       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:40.759863       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I0916 10:41:57.573875       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:41:57.573958       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:41:57.593375       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:41:57.593437       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:41:57.595286       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:41:57.595663       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:41:57.595695       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:41:57.596863       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:41:57.596985       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:41:57.597145       1 config.go:199] "Starting service config controller"
	I0916 10:41:57.597451       1 config.go:328] "Starting node config controller"
	I0916 10:41:57.597468       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:41:57.597345       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:41:57.697968       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:41:57.698030       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:41:57.698035       1 shared_informer.go:320] Caches are synced for node config
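
Worth noticing in the retry sequence above: the "Failed to retrieve node info" attempts back off roughly exponentially (about 1.2s, 2.3s, 4.3s, then 9.5s apart) until the apiserver answers at 10:41:57, after which kube-proxy configures its iptables proxier and syncs caches normally. The object it was fetching can be requested directly (assumed commands):

	kubectl get node functional-016570 -o wide
	# the raw GET kube-proxy was issuing
	kubectl get --raw /api/v1/nodes/functional-016570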
	
	
	==> kube-proxy [80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f] <==
	I0916 10:40:44.050300       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:40:44.179052       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:40:44.179113       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:40:44.196854       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:40:44.196926       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:40:44.198841       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:40:44.199284       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:40:44.199313       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:40:44.200417       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:40:44.200434       1 config.go:199] "Starting service config controller"
	I0916 10:40:44.200460       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:40:44.200461       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:40:44.200579       1 config.go:328] "Starting node config controller"
	I0916 10:40:44.200592       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:40:44.300956       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:40:44.300944       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:40:44.300945       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee] <==
	E0916 10:40:35.625982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:40:35.626272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:40:35.627185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.485194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:40:36.485283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.534754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.534802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.553649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:40:36.553693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.739511       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:40:36.739572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.763525       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:40:36.763583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.768041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:40:36.768094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.782504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.782564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.919306       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:40:36.919357       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:40:39.247701       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:41:22.445138       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 10:41:22.445225       1 run.go:72] "command failed" err="finished without leader elect"
	I0916 10:41:22.445244       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
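
The "finished without leader elect" exit is the expected way a scheduler goes down when its apiserver, and hence its lease renewal, disappears mid-run; the replacement instance below takes over. The lease itself is visible as an ordinary object (hypothetical check):

	kubectl -n kube-system get lease kube-scheduler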
	
	
	==> kube-scheduler [9ff9913af2feb41a804690d65aef168822cd2ac0a456e3642182c64337903889] <==
	W0916 10:41:33.309245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:33.309313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:34.856594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:34.856642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:39.800183       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:39.800261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.256311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.256386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.442961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.443018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.559680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.559763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:43.341030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0916 10:41:43.341153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:41:43.341209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:41:43.341177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341402       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341470       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:41:43.341491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341579       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:41:43.775320       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:41:43 functional-016570 kubelet[5394]: time="2024-09-16T10:41:43Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/devices/kubepods/burstable/pod5333b7f22b4ca6fa3369f64c875d053e/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961: device or resource busy"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.038387    5394 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039162    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9924f10d-5beb-43b1-9782-44644a015b56-tmp\") pod \"storage-provisioner\" (UID: \"9924f10d-5beb-43b1-9782-44644a015b56\") " pod="kube-system/storage-provisioner"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039226    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4a00283-1d69-49c4-8c60-264ef3fd7aca-lib-modules\") pod \"kube-proxy-w8qkq\" (UID: \"b4a00283-1d69-49c4-8c60-264ef3fd7aca\") " pod="kube-system/kube-proxy-w8qkq"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039244    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-cni-cfg\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039258    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-xtables-lock\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039324    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-lib-modules\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039348    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4a00283-1d69-49c4-8c60-264ef3fd7aca-xtables-lock\") pod \"kube-proxy-w8qkq\" (UID: \"b4a00283-1d69-49c4-8c60-264ef3fd7aca\") " pod="kube-system/kube-proxy-w8qkq"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.056492    5394 scope.go:117] "RemoveContainer" containerID="c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.056894    5394 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-016570" podUID="03b56925-37e8-4f4c-947d-8798a9b0b1e8"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.080422    5394 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-016570"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.241180    5394 scope.go:117] "RemoveContainer" containerID="490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: E0916 10:41:44.242240    5394 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvpkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod storage-provisioner_kube-system(9924f10d-5beb-43b1-9782-44644a015b56): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: E0916 10:41:44.243435    5394 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="9924f10d-5beb-43b1-9782-44644a015b56"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.264872    5394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-016570" podStartSLOduration=0.264847622 podStartE2EDuration="264.847622ms" podCreationTimestamp="2024-09-16 10:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:41:44.264526883 +0000 UTC m=+10.389456446" watchObservedRunningTime="2024-09-16 10:41:44.264847622 +0000 UTC m=+10.389777177"
	Sep 16 10:41:45 functional-016570 kubelet[5394]: I0916 10:41:45.059426    5394 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-016570" podUID="03b56925-37e8-4f4c-947d-8798a9b0b1e8"
	Sep 16 10:41:45 functional-016570 kubelet[5394]: I0916 10:41:45.955614    5394 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5333b7f22b4ca6fa3369f64c875d053e" path="/var/lib/kubelet/pods/5333b7f22b4ca6fa3369f64c875d053e/volumes"
	Sep 16 10:41:54 functional-016570 kubelet[5394]: I0916 10:41:54.952651    5394 scope.go:117] "RemoveContainer" containerID="490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: E0916 10:42:09.426875    5394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5333b7f22b4ca6fa3369f64c875d053e" containerName="kube-apiserver"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.427533    5394 memory_manager.go:354] "RemoveStaleState removing state" podUID="5333b7f22b4ca6fa3369f64c875d053e" containerName="kube-apiserver"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622483    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f6bb1f19-917d-404c-9fab-b966f900a8c6-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-jvhn6\" (UID: \"f6bb1f19-917d-404c-9fab-b966f900a8c6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-jvhn6"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622547    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8930ff3f-4f5f-41f0-94be-b2685f45ca6c-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-64tpc\" (UID: \"8930ff3f-4f5f-41f0-94be-b2685f45ca6c\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-64tpc"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622580    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcblv\" (UniqueName: \"kubernetes.io/projected/f6bb1f19-917d-404c-9fab-b966f900a8c6-kube-api-access-gcblv\") pod \"dashboard-metrics-scraper-c5db448b4-jvhn6\" (UID: \"f6bb1f19-917d-404c-9fab-b966f900a8c6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-jvhn6"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622611    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfd5x\" (UniqueName: \"kubernetes.io/projected/8930ff3f-4f5f-41f0-94be-b2685f45ca6c-kube-api-access-gfd5x\") pod \"kubernetes-dashboard-695b96c756-64tpc\" (UID: \"8930ff3f-4f5f-41f0-94be-b2685f45ca6c\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-64tpc"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.732338    5394 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	
	
	==> storage-provisioner [490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b] <==
	I0916 10:41:23.246262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 10:41:23.248507       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [4d490c9b7ae90d00a29d06ff834c0ce770ddfef68473b74c781044e7a283344c] <==
	I0916 10:41:55.019536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:41:55.026724       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:41:55.026761       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:42:12.454479       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:42:12.454665       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-016570_189be736-a61c-4399-97b3-ea0b09de3894!
	I0916 10:42:12.454679       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e3e2c42-8555-41e5-b1cf-7a6ddf78f6d7", APIVersion:"v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-016570_189be736-a61c-4399-97b3-ea0b09de3894 became leader
	I0916 10:42:12.555037       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-016570_189be736-a61c-4399-97b3-ea0b09de3894!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016570 -n functional-016570
helpers_test.go:261: (dbg) Run:  kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (572.478µs)
helpers_test.go:263: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (3.43s)
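
Note: every kubectl invocation in this run fails with "fork/exec /usr/local/bin/kubectl: exec format error", and each failure takes well under a millisecond. That error comes from the kernel's binary loader, not from the cluster: the kubectl binary on the runner is built for a different CPU architecture, or is truncated/corrupt. A minimal way to confirm this on the agent (a sketch; the download URL, version, and arch below are illustrative, not taken from this run):

	# Compare the binary's target architecture with the host's.
	file /usr/local/bin/kubectl    # prints the ELF target, e.g. "ARM aarch64", or "data" if corrupt
	uname -m                       # host architecture, e.g. x86_64
	# If they disagree, re-fetch a matching build (version/arch shown are examples):
	curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl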

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (90.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9924f10d-5beb-43b1-9782-44644a015b56] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004246038s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-016570 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-016570 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (444.429µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-016570 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-016570 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (528.032µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-016570 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-016570 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (502.711µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-016570 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-016570 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (477.806µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-016570 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-016570 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (496.059µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-016570 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-016570 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (541.077µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-016570 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-016570 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (499.579µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-016570 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-016570 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (610.362µs)
E0916 10:42:34.907978   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:40.029452   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-016570 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-016570 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (615.56µs)
E0916 10:42:50.270857   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-016570 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-016570 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (496.143µs)
E0916 10:43:10.753019   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-016570 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-016570 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (551.264µs)
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-016570 get storageclass -o=json
functional_test_pvc_test.go:49: (dbg) Non-zero exit: kubectl --context functional-016570 get storageclass -o=json: fork/exec /usr/local/bin/kubectl: exec format error (496.432µs)
functional_test_pvc_test.go:65: failed to check for storage class: fork/exec /usr/local/bin/kubectl: exec format error
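
Note: the storage class check at functional_test_pvc_test.go:49 is pure client-side polling, and every attempt above dies on the same exec format error before reaching the API server. A shell equivalent of the loop the harness is attempting (a sketch; the annotation key is the standard default-class marker, the 2s interval is illustrative):

	# Poll until a default StorageClass is visible, as the test's retry loop does.
	until kubectl --context functional-016570 get storageclass -o=json \
	    | grep -q 'storageclass.kubernetes.io/is-default-class'; do
	  sleep 2
	done
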
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-016570 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:69: (dbg) Non-zero exit: kubectl --context functional-016570 apply -f testdata/storage-provisioner/pvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (347.865µs)
functional_test_pvc_test.go:71: kubectl apply pvc.yaml failed: args "kubectl --context functional-016570 apply -f testdata/storage-provisioner/pvc.yaml": fork/exec /usr/local/bin/kubectl: exec format error
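
Note: testdata/storage-provisioner/pvc.yaml is never applied, so the provisioner itself (shown Running and holding its leader lease in the post-mortem below) is never exercised. Once a working kubectl is in place, a claim of the same general shape can be applied by hand (a sketch; the claim name and size are hypothetical, not the contents of the testdata file):

	kubectl --context functional-016570 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim          # hypothetical name
	spec:
	  accessModes:
	    - ReadWriteOnce
	  resources:
	    requests:
	      storage: 500Mi     # hypothetical size
	EOF
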
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-016570
helpers_test.go:235: (dbg) docker inspect functional-016570:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b",
	        "Created": "2024-09-16T10:40:22.437729239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44343,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:40:22.549860744Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hosts",
	        "LogPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b-json.log",
	        "Name": "/functional-016570",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-016570:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-016570",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-016570",
	                "Source": "/var/lib/docker/volumes/functional-016570/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-016570",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-016570",
	                "name.minikube.sigs.k8s.io": "functional-016570",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cea48287f6ef85c384c2d0fd7e890db0e03a4817385b96024c8e0a0688fd7962",
	            "SandboxKey": "/var/run/docker/netns/cea48287f6ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-016570": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0f0536305092a99e49a50f7adb5b668b5d3bb1c3c21d82e43be0c155c4e7cfe5",
	                    "EndpointID": "9201dc1ccce436b0c1f5c3cef087a9b08f43da4627ab9f3717ae9bbdfb5de9f3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-016570",
	                        "389fc216adb5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-016570 -n functional-016570
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 logs -n 25: (1.400241299s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-016570 ssh sudo cat                                         | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | /etc/ssl/certs/51391683.0                                              |                   |         |         |                     |                     |
	| ssh            | functional-016570 ssh -- ls                                            | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | -la /mount-9p                                                          |                   |         |         |                     |                     |
	| ssh            | functional-016570 ssh sudo cat                                         | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | /etc/ssl/certs/111892.pem                                              |                   |         |         |                     |                     |
	| ssh            | functional-016570 ssh sudo                                             | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|                | umount -f /mount-9p                                                    |                   |         |         |                     |                     |
	| ssh            | functional-016570 ssh sudo cat                                         | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | /usr/share/ca-certificates/111892.pem                                  |                   |         |         |                     |                     |
	| ssh            | functional-016570 ssh sudo cat                                         | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                              |                   |         |         |                     |                     |
	| mount          | -p functional-016570                                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1743652396/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| mount          | -p functional-016570                                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1743652396/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| ssh            | functional-016570 ssh findmnt                                          | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|                | -T /mount1                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-016570                                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1743652396/001:/mount1 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                   |         |         |                     |                     |
	| ssh            | functional-016570 ssh sudo cat                                         | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | /etc/test/nested/copy/11189/hosts                                      |                   |         |         |                     |                     |
	| ssh            | functional-016570 ssh findmnt                                          | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | -T /mount1                                                             |                   |         |         |                     |                     |
	| ssh            | functional-016570 ssh findmnt                                          | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | -T /mount2                                                             |                   |         |         |                     |                     |
	| ssh            | functional-016570 ssh findmnt                                          | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | -T /mount3                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-016570                                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|                | --kill=true                                                            |                   |         |         |                     |                     |
	| image          | functional-016570                                                      | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | image ls --format short                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-016570                                                      | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | image ls --format yaml                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| ssh            | functional-016570 ssh pgrep                                            | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|                | buildkitd                                                              |                   |         |         |                     |                     |
	| image          | functional-016570                                                      | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | image ls --format json                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| image          | functional-016570 image build -t                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | localhost/my-image:functional-016570                                   |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                   |         |         |                     |                     |
	| image          | functional-016570                                                      | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | image ls --format table                                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                   |         |         |                     |                     |
	| update-context | functional-016570                                                      | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| update-context | functional-016570                                                      | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| update-context | functional-016570                                                      | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | update-context                                                         |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                   |         |         |                     |                     |
	| image          | functional-016570 image ls                                             | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|----------------|------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:42:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:42:07.925802   55153 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:42:07.926076   55153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:07.926087   55153 out.go:358] Setting ErrFile to fd 2...
	I0916 10:42:07.926094   55153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:07.926329   55153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:42:07.926917   55153 out.go:352] Setting JSON to false
	I0916 10:42:07.928083   55153 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1472,"bootTime":1726481856,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:42:07.928201   55153 start.go:139] virtualization: kvm guest
	I0916 10:42:07.930891   55153 out.go:177] * [functional-016570] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:42:07.933206   55153 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:42:07.933437   55153 notify.go:220] Checking for updates...
	I0916 10:42:07.936378   55153 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:42:07.937840   55153 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:42:07.939249   55153 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:42:07.940760   55153 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:42:07.942139   55153 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:42:07.944069   55153 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:42:07.944793   55153 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:42:07.980732   55153 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:42:07.980810   55153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:42:08.038252   55153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:42:08.026410213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:42:08.038402   55153 docker.go:318] overlay module found
	I0916 10:42:08.040535   55153 out.go:177] * Using the docker driver based on existing profile
	I0916 10:42:08.042029   55153 start.go:297] selected driver: docker
	I0916 10:42:08.042043   55153 start.go:901] validating driver "docker" against &{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:42:08.042118   55153 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:42:08.042187   55153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:42:08.096294   55153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:42:08.085371862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:42:08.096876   55153 cni.go:84] Creating CNI manager for ""
	I0916 10:42:08.096923   55153 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:42:08.096974   55153 start.go:340] cluster config:
	{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:42:08.098919   55153 out.go:177] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	c4fcde4fb7e45       07655ddf2eebe       About a minute ago   Running             kubernetes-dashboard        0                   4817ed22e184b       kubernetes-dashboard-695b96c756-64tpc
	7f85730bd4d94       115053965e86b       About a minute ago   Running             dashboard-metrics-scraper   0                   395f1e784acbe       dashboard-metrics-scraper-c5db448b4-jvhn6
	4d490c9b7ae90       6e38f40d628db       About a minute ago   Running             storage-provisioner         2                   b81ffde02718d       storage-provisioner
	d2500e97c949b       6bab7719df100       About a minute ago   Running             kube-apiserver              0                   9d885083d4265       kube-apiserver-functional-016570
	c8262cd23469c       175ffd71cce3d       About a minute ago   Running             kube-controller-manager     2                   8b5d374851050       kube-controller-manager-functional-016570
	861dc747735da       2e96e5913fc06       About a minute ago   Running             etcd                        1                   2cdebcb8c7807       etcd-functional-016570
	f40f6265fc1c6       12968670680f4       2 minutes ago        Running             kindnet-cni                 1                   c7f56f796b013       kindnet-5qjpd
	490a48762f629       6e38f40d628db       2 minutes ago        Exited              storage-provisioner         1                   b81ffde02718d       storage-provisioner
	2810e4a546750       60c005f310ff3       2 minutes ago        Running             kube-proxy                  1                   f4ed79f8dffeb       kube-proxy-w8qkq
	485f2c5cef235       175ffd71cce3d       2 minutes ago        Exited              kube-controller-manager     1                   8b5d374851050       kube-controller-manager-functional-016570
	9ff9913af2feb       9aa1fad941575       2 minutes ago        Running             kube-scheduler              1                   caa2007696d1b       kube-scheduler-functional-016570
	b8bd1849da6c4       c69fa2e9cbf5f       2 minutes ago        Running             coredns                     1                   3d9a434f8b6e5       coredns-7c65d6cfc9-59qm7
	fd0c81e7a39a2       c69fa2e9cbf5f       2 minutes ago        Exited              coredns                     0                   3d9a434f8b6e5       coredns-7c65d6cfc9-59qm7
	bf96dac81b725       12968670680f4       2 minutes ago        Exited              kindnet-cni                 0                   c7f56f796b013       kindnet-5qjpd
	80095e847084f       60c005f310ff3       2 minutes ago        Exited              kube-proxy                  0                   f4ed79f8dffeb       kube-proxy-w8qkq
	0906c5e415b9c       9aa1fad941575       3 minutes ago        Exited              kube-scheduler              0                   caa2007696d1b       kube-scheduler-functional-016570
	b4905826c508e       2e96e5913fc06       3 minutes ago        Exited              etcd                        0                   2cdebcb8c7807       etcd-functional-016570
	
	
	==> containerd <==
	Sep 16 10:42:20 functional-016570 containerd[4401]: time="2024-09-16T10:42:20.078251600Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-016570\""
	Sep 16 10:42:20 functional-016570 containerd[4401]: time="2024-09-16T10:42:20.080115910Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-016570\""
	Sep 16 10:42:20 functional-016570 containerd[4401]: time="2024-09-16T10:42:20.081280051Z" level=info msg="ImageDelete event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
	Sep 16 10:42:20 functional-016570 containerd[4401]: time="2024-09-16T10:42:20.088503497Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-016570\" returns successfully"
	Sep 16 10:42:20 functional-016570 containerd[4401]: time="2024-09-16T10:42:20.643952210Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-016570\""
	Sep 16 10:42:20 functional-016570 containerd[4401]: time="2024-09-16T10:42:20.647727926Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:20 functional-016570 containerd[4401]: time="2024-09-16T10:42:20.648138801Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-016570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:28 functional-016570 containerd[4401]: time="2024-09-16T10:42:28.187882158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:42:28 functional-016570 containerd[4401]: time="2024-09-16T10:42:28.187958628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:42:28 functional-016570 containerd[4401]: time="2024-09-16T10:42:28.187970077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:42:28 functional-016570 containerd[4401]: time="2024-09-16T10:42:28.188054147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:42:28 functional-016570 containerd[4401]: time="2024-09-16T10:42:28.253688924Z" level=info msg="shim disconnected" id=wgjhzs4w3eryozynvqjnyl3v2 namespace=k8s.io
	Sep 16 10:42:28 functional-016570 containerd[4401]: time="2024-09-16T10:42:28.253761781Z" level=warning msg="cleaning up after shim disconnected" id=wgjhzs4w3eryozynvqjnyl3v2 namespace=k8s.io
	Sep 16 10:42:28 functional-016570 containerd[4401]: time="2024-09-16T10:42:28.253777174Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:42:29 functional-016570 containerd[4401]: time="2024-09-16T10:42:29.198740849Z" level=info msg="ImageCreate event name:\"localhost/my-image:functional-016570\""
	Sep 16 10:42:29 functional-016570 containerd[4401]: time="2024-09-16T10:42:29.205047234Z" level=info msg="ImageCreate event name:\"sha256:9d0b23f97fa55a94d55a6e3c8e6fb368b23e3cf63ca877fc70708a413ac19b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:29 functional-016570 containerd[4401]: time="2024-09-16T10:42:29.205565756Z" level=info msg="ImageUpdate event name:\"localhost/my-image:functional-016570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:33 functional-016570 containerd[4401]: time="2024-09-16T10:42:33.940234546Z" level=info msg="StopPodSandbox for \"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961\""
	Sep 16 10:42:33 functional-016570 containerd[4401]: time="2024-09-16T10:42:33.940985619Z" level=info msg="TearDown network for sandbox \"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961\" successfully"
	Sep 16 10:42:33 functional-016570 containerd[4401]: time="2024-09-16T10:42:33.941119037Z" level=info msg="StopPodSandbox for \"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961\" returns successfully"
	Sep 16 10:42:33 functional-016570 containerd[4401]: time="2024-09-16T10:42:33.942693045Z" level=info msg="RemovePodSandbox for \"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961\""
	Sep 16 10:42:33 functional-016570 containerd[4401]: time="2024-09-16T10:42:33.942731750Z" level=info msg="Forcibly stopping sandbox \"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961\""
	Sep 16 10:42:33 functional-016570 containerd[4401]: time="2024-09-16T10:42:33.942805830Z" level=info msg="TearDown network for sandbox \"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961\" successfully"
	Sep 16 10:42:33 functional-016570 containerd[4401]: time="2024-09-16T10:42:33.947520190Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 16 10:42:33 functional-016570 containerd[4401]: time="2024-09-16T10:42:33.947611038Z" level=info msg="RemovePodSandbox \"5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961\" returns successfully"
	
	
	==> coredns [b8bd1849da6c464b7fbc64f004ea8f6e93596b309cb23e3f75f0493a6c22ebd5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59975 - 12600 "HINFO IN 4686966597786162674.2744546229077384558. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011873953s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44994 - 49719 "HINFO IN 5811560446017322614.7472127089541594346. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008166038s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-016570
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-016570
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-016570
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_40_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:40:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-016570
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:43:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:42:34 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:42:34 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:42:34 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:42:34 +0000   Mon, 16 Sep 2024 10:40:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-016570
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e0bbc72eb56431496705823b26dcd4d
	  System UUID:                9e1cbc84-d044-4524-b143-ca93a6dc5aa0
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-59qm7                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m55s
	  kube-system                 etcd-functional-016570                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m
	  kube-system                 kindnet-5qjpd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m55s
	  kube-system                 kube-apiserver-functional-016570             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-functional-016570    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 kube-proxy-w8qkq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kube-scheduler-functional-016570             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m54s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-jvhn6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-64tpc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 2m53s                kube-proxy       
	  Normal   Starting                 100s                 kube-proxy       
	  Normal   NodeAllocatableEnforced  3m1s                 kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 3m1s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 3m1s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m                   kubelet          Node functional-016570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m                   kubelet          Node functional-016570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m                   kubelet          Node functional-016570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m56s                node-controller  Node functional-016570 event: Registered Node functional-016570 in Controller
	  Normal   Starting                 2m5s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m5s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  2m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  118s (x8 over 2m1s)  kubelet          Node functional-016570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    118s (x7 over 2m1s)  kubelet          Node functional-016570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     118s (x7 over 2m1s)  kubelet          Node functional-016570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           112s                 node-controller  Node functional-016570 event: Registered Node functional-016570 in Controller
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [861dc747735da2748e3f4b24b824a36d0a52f89bbf91f6d93373e8e94ec47110] <==
	{"level":"info","ts":"2024-09-16T10:41:40.736533Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736587Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736603Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736848Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:40.736868Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:40.737630Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-16T10:41:40.737696Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:41:40.737861Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:41:40.737967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:41:42.324428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.326180Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-016570 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:41:42.326181Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:41:42.326207Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:41:42.326768Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:41:42.326898Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:41:42.328280Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:41:42.328298Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:41:42.329057Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:41:42.329116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> etcd [b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25] <==
	{"level":"info","ts":"2024-09-16T10:40:33.951156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.951169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.952133Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-016570 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:40:33.952149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952174Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.952458Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952538Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952886Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953028Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953056Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953277Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.954142Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.955433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:40:33.956074Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:41:32.555101Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:41:32.555186Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-016570","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-16T10:41:32.555307Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.555359Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.556893Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.556932Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:41:32.558356Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:41:32.560054Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:32.560161Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:32.560190Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-016570","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 10:43:38 up 26 min,  0 users,  load average: 0.48, 0.77, 0.56
	Linux functional-016570 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75] <==
	I0916 10:40:44.322865       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:40:44.323089       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:40:44.323250       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:40:44.323270       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:40:44.323292       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:40:44.644895       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:40:44.644910       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:40:44.644915       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:40:45.019853       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:40:45.019909       1 metrics.go:61] Registering metrics
	I0916 10:40:45.019969       1 controller.go:374] Syncing nftables rules
	I0916 10:40:54.644919       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:40:54.644955       1 main.go:299] handling current node
	I0916 10:41:04.651830       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:04.651862       1 main.go:299] handling current node
	I0916 10:41:14.648734       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:14.648784       1 main.go:299] handling current node
	
	
	==> kindnet [f40f6265fc1c666726cfc4dfc8b0637a32e85401949dfb2edee2619b1765db77] <==
	I0916 10:41:43.446082       1 controller.go:374] Syncing nftables rules
	I0916 10:41:43.844790       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:43.844836       1 main.go:299] handling current node
	I0916 10:41:53.844621       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:53.844659       1 main.go:299] handling current node
	I0916 10:42:03.848081       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:42:03.848124       1 main.go:299] handling current node
	I0916 10:42:13.845336       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:42:13.845386       1 main.go:299] handling current node
	I0916 10:42:23.845243       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:42:23.845316       1 main.go:299] handling current node
	I0916 10:42:33.854345       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:42:33.854415       1 main.go:299] handling current node
	I0916 10:42:43.845161       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:42:43.845200       1 main.go:299] handling current node
	I0916 10:42:53.854236       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:42:53.854273       1 main.go:299] handling current node
	I0916 10:43:03.854360       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:43:03.854397       1 main.go:299] handling current node
	I0916 10:43:13.845283       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:43:13.845319       1 main.go:299] handling current node
	I0916 10:43:23.844563       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:43:23.844627       1 main.go:299] handling current node
	I0916 10:43:33.850489       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:43:33.850527       1 main.go:299] handling current node
	
	
	==> kube-apiserver [d2500e97c949b2a2e8330e357558411d2893aaafdb03d1ef0422a5a6fb1cb12f] <==
	I0916 10:41:43.422383       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:41:43.422423       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:41:43.422975       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:41:43.423049       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:41:43.423247       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:41:43.424833       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:41:43.424920       1 policy_source.go:224] refreshing policies
	I0916 10:41:43.439068       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:41:43.466711       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:41:43.467901       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:41:43.471775       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:41:43.520505       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:41:44.269900       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:41:44.531189       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0916 10:41:44.532603       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:41:44.536818       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:41:44.865983       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:41:44.962527       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:41:44.972551       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:41:45.027823       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:41:45.034699       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:42:09.231235       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:42:09.266000       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:42:09.379723       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.44.131"}
	I0916 10:42:09.429057       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.65.100"}
	
	
	==> kube-controller-manager [485f2c5cef235c0182e1a64e3a548bea54de9894193e120b2b717a72b9ef1bff] <==
	I0916 10:41:23.729467       1 serving.go:386] Generated self-signed cert in-memory
	I0916 10:41:24.021064       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 10:41:24.021095       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:41:24.022441       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:41:24.022441       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:41:24.022737       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 10:41:24.022823       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0916 10:41:34.024812       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [c8262cd23469ca086f00137b1fc38c96429b63d514a16f9a905da144ecd2b73c] <==
	E0916 10:42:09.322901       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.322862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="19.742072ms"
	E0916 10:42:09.322918       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.330631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="5.717139ms"
	E0916 10:42:09.330672       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.330731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.175138ms"
	E0916 10:42:09.330740       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.338767       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.604226ms"
	E0916 10:42:09.338811       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.339183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.001104ms"
	E0916 10:42:09.339225       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.420221       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="39.404744ms"
	I0916 10:42:09.422278       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="41.065043ms"
	I0916 10:42:09.430005       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.676168ms"
	I0916 10:42:09.430194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="129.11µs"
	I0916 10:42:09.445018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="42.392µs"
	I0916 10:42:09.520924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="100.632911ms"
	I0916 10:42:09.521034       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="57.631µs"
	I0916 10:42:09.521082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="25.339µs"
	I0916 10:42:09.531158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="49.005µs"
	I0916 10:42:13.153143       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.141174ms"
	I0916 10:42:13.153260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="55.696µs"
	I0916 10:42:19.221057       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="50.188807ms"
	I0916 10:42:19.221165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="62.698µs"
	I0916 10:42:34.976946       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-016570"
	
	
	==> kube-proxy [2810e4a54675045b91d6e2b6996d5595fca99d1ed910f700a9440b05c934282a] <==
	I0916 10:41:23.344360       1 server_linux.go:66] "Using iptables proxy"
	E0916 10:41:23.466145       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:24.666043       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:26.980270       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:31.272411       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:40.759863       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I0916 10:41:57.573875       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:41:57.573958       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:41:57.593375       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:41:57.593437       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:41:57.595286       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:41:57.595663       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:41:57.595695       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:41:57.596863       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:41:57.596985       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:41:57.597145       1 config.go:199] "Starting service config controller"
	I0916 10:41:57.597451       1 config.go:328] "Starting node config controller"
	I0916 10:41:57.597468       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:41:57.597345       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:41:57.697968       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:41:57.698030       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:41:57.698035       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f] <==
	I0916 10:40:44.050300       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:40:44.179052       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:40:44.179113       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:40:44.196854       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:40:44.196926       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:40:44.198841       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:40:44.199284       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:40:44.199313       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:40:44.200417       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:40:44.200434       1 config.go:199] "Starting service config controller"
	I0916 10:40:44.200460       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:40:44.200461       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:40:44.200579       1 config.go:328] "Starting node config controller"
	I0916 10:40:44.200592       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:40:44.300956       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:40:44.300944       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:40:44.300945       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee] <==
	E0916 10:40:35.625982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:40:35.626272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:40:35.627185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.485194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:40:36.485283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.534754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.534802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.553649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:40:36.553693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.739511       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:40:36.739572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.763525       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:40:36.763583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.768041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:40:36.768094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.782504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.782564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.919306       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:40:36.919357       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:40:39.247701       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:41:22.445138       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 10:41:22.445225       1 run.go:72] "command failed" err="finished without leader elect"
	I0916 10:41:22.445244       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	
	
	==> kube-scheduler [9ff9913af2feb41a804690d65aef168822cd2ac0a456e3642182c64337903889] <==
	W0916 10:41:33.309245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:33.309313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:34.856594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:34.856642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:39.800183       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:39.800261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.256311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.256386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.442961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.443018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.559680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.559763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:43.341030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0916 10:41:43.341153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:41:43.341209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:41:43.341177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341402       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341470       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:41:43.341491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341579       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:41:43.775320       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.038387    5394 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039162    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9924f10d-5beb-43b1-9782-44644a015b56-tmp\") pod \"storage-provisioner\" (UID: \"9924f10d-5beb-43b1-9782-44644a015b56\") " pod="kube-system/storage-provisioner"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039226    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4a00283-1d69-49c4-8c60-264ef3fd7aca-lib-modules\") pod \"kube-proxy-w8qkq\" (UID: \"b4a00283-1d69-49c4-8c60-264ef3fd7aca\") " pod="kube-system/kube-proxy-w8qkq"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039244    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-cni-cfg\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039258    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-xtables-lock\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039324    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-lib-modules\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039348    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4a00283-1d69-49c4-8c60-264ef3fd7aca-xtables-lock\") pod \"kube-proxy-w8qkq\" (UID: \"b4a00283-1d69-49c4-8c60-264ef3fd7aca\") " pod="kube-system/kube-proxy-w8qkq"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.056492    5394 scope.go:117] "RemoveContainer" containerID="c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.056894    5394 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-016570" podUID="03b56925-37e8-4f4c-947d-8798a9b0b1e8"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.080422    5394 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-016570"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.241180    5394 scope.go:117] "RemoveContainer" containerID="490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: E0916 10:41:44.242240    5394 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvpkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod storage-provisioner_kube-system(9924f10d-5beb-43b1-9782-44644a015b56): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: E0916 10:41:44.243435    5394 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="9924f10d-5beb-43b1-9782-44644a015b56"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.264872    5394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-016570" podStartSLOduration=0.264847622 podStartE2EDuration="264.847622ms" podCreationTimestamp="2024-09-16 10:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:41:44.264526883 +0000 UTC m=+10.389456446" watchObservedRunningTime="2024-09-16 10:41:44.264847622 +0000 UTC m=+10.389777177"
	Sep 16 10:41:45 functional-016570 kubelet[5394]: I0916 10:41:45.059426    5394 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-016570" podUID="03b56925-37e8-4f4c-947d-8798a9b0b1e8"
	Sep 16 10:41:45 functional-016570 kubelet[5394]: I0916 10:41:45.955614    5394 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5333b7f22b4ca6fa3369f64c875d053e" path="/var/lib/kubelet/pods/5333b7f22b4ca6fa3369f64c875d053e/volumes"
	Sep 16 10:41:54 functional-016570 kubelet[5394]: I0916 10:41:54.952651    5394 scope.go:117] "RemoveContainer" containerID="490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: E0916 10:42:09.426875    5394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5333b7f22b4ca6fa3369f64c875d053e" containerName="kube-apiserver"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.427533    5394 memory_manager.go:354] "RemoveStaleState removing state" podUID="5333b7f22b4ca6fa3369f64c875d053e" containerName="kube-apiserver"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622483    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f6bb1f19-917d-404c-9fab-b966f900a8c6-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-jvhn6\" (UID: \"f6bb1f19-917d-404c-9fab-b966f900a8c6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-jvhn6"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622547    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8930ff3f-4f5f-41f0-94be-b2685f45ca6c-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-64tpc\" (UID: \"8930ff3f-4f5f-41f0-94be-b2685f45ca6c\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-64tpc"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622580    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcblv\" (UniqueName: \"kubernetes.io/projected/f6bb1f19-917d-404c-9fab-b966f900a8c6-kube-api-access-gcblv\") pod \"dashboard-metrics-scraper-c5db448b4-jvhn6\" (UID: \"f6bb1f19-917d-404c-9fab-b966f900a8c6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-jvhn6"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622611    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfd5x\" (UniqueName: \"kubernetes.io/projected/8930ff3f-4f5f-41f0-94be-b2685f45ca6c-kube-api-access-gfd5x\") pod \"kubernetes-dashboard-695b96c756-64tpc\" (UID: \"8930ff3f-4f5f-41f0-94be-b2685f45ca6c\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-64tpc"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.732338    5394 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:42:19 functional-016570 kubelet[5394]: I0916 10:42:19.171891    5394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-jvhn6" podStartSLOduration=7.606827724 podStartE2EDuration="10.171864194s" podCreationTimestamp="2024-09-16 10:42:09 +0000 UTC" firstStartedPulling="2024-09-16 10:42:10.242082896 +0000 UTC m=+36.367012449" lastFinishedPulling="2024-09-16 10:42:12.807119369 +0000 UTC m=+38.932048919" observedRunningTime="2024-09-16 10:42:13.143610827 +0000 UTC m=+39.268540390" watchObservedRunningTime="2024-09-16 10:42:19.171864194 +0000 UTC m=+45.296793791"
	
	
	==> kubernetes-dashboard [c4fcde4fb7e4558929a10d0dec11db9887811e378006e6e73e32d54112fa03d7] <==
	2024/09/16 10:42:18 Starting overwatch
	2024/09/16 10:42:18 Using namespace: kubernetes-dashboard
	2024/09/16 10:42:18 Using in-cluster config to connect to apiserver
	2024/09/16 10:42:18 Using secret token for csrf signing
	2024/09/16 10:42:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 10:42:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 10:42:18 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 10:42:18 Generating JWE encryption key
	2024/09/16 10:42:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 10:42:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 10:42:19 Initializing JWE encryption key from synchronized object
	2024/09/16 10:42:19 Creating in-cluster Sidecar client
	2024/09/16 10:42:19 Serving insecurely on HTTP port: 9090
	2024/09/16 10:42:19 Successful request to sidecar
	
	
	==> storage-provisioner [490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b] <==
	I0916 10:41:23.246262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 10:41:23.248507       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [4d490c9b7ae90d00a29d06ff834c0ce770ddfef68473b74c781044e7a283344c] <==
	I0916 10:41:55.019536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:41:55.026724       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:41:55.026761       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:42:12.454479       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:42:12.454665       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-016570_189be736-a61c-4399-97b3-ea0b09de3894!
	I0916 10:42:12.454679       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e3e2c42-8555-41e5-b1cf-7a6ddf78f6d7", APIVersion:"v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-016570_189be736-a61c-4399-97b3-ea0b09de3894 became leader
	I0916 10:42:12.555037       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-016570_189be736-a61c-4399-97b3-ea0b09de3894!
	

                                                
                                                
-- /stdout --
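The storage-provisioner output above records an ordinary client-go leader-election handshake: the first container exits while the apiserver is still refusing connections, and its replacement acquires the kube-system/k8s.io-minikube-hostpath lease about 17 seconds after it starts asking. The log shows the legacy Endpoints-based lock; a minimal sketch of the same acquisition pattern using the current Lease lock (a hypothetical standalone program, not the provisioner's actual source) could look like this:

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		// In-cluster config, the same path the provisioner pod uses.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname() // lease holder identity, e.g. the pod name

		// Lease name/namespace copied from the log above; the real
		// provisioner uses the older Endpoints lock instead.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() { log.Println("lost lease; shutting down") },
			},
		})
	}

The retry loop in RunOrDie is why the "attempting to acquire" and "successfully acquired" lines in the log can be 17 seconds apart without any error in between.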
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016570 -n functional-016570
helpers_test.go:261: (dbg) Run:  kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (452.404µs)
helpers_test.go:263: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
E0916 10:43:51.714830   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (90.67s)
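Most of the kubectl-driven failures in this report reduce to the same message: "fork/exec /usr/local/bin/kubectl: exec format error". That error comes from the kernel (ENOEXEC) when the file at that path is not something the host can execute, most commonly a binary built for a different architecture or a truncated download. A quick way to confirm that diagnosis, sketched in Go with the standard debug/elf package (the path is taken from the log above; this check is not part of the test harness):

	package main

	import (
		"debug/elf"
		"fmt"
		"log"
		"runtime"
	)

	func main() {
		// Path copied from the failing log line above.
		const path = "/usr/local/bin/kubectl"

		f, err := elf.Open(path)
		if err != nil {
			// Not a valid ELF at all (truncated download, script, Mach-O, ...).
			log.Fatalf("%s is not a readable ELF binary: %v", path, err)
		}
		defer f.Close()

		// On this amd64 agent the machine field should be EM_X86_64;
		// anything else reproduces "exec format error" on fork/exec.
		fmt.Printf("binary: %s (class %s), host: %s/%s\n",
			f.Machine, f.Class, runtime.GOOS, runtime.GOARCH)
	}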

                                                
                                    
x
+
TestFunctional/parallel/MySQL (2.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-016570 replace --force -f testdata/mysql.yaml
functional_test.go:1793: (dbg) Non-zero exit: kubectl --context functional-016570 replace --force -f testdata/mysql.yaml: fork/exec /usr/local/bin/kubectl: exec format error (432.981µs)
functional_test.go:1795: failed to kubectl replace mysql: args "kubectl --context functional-016570 replace --force -f testdata/mysql.yaml" failed: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-016570
helpers_test.go:235: (dbg) docker inspect functional-016570:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b",
	        "Created": "2024-09-16T10:40:22.437729239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44343,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:40:22.549860744Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hosts",
	        "LogPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b-json.log",
	        "Name": "/functional-016570",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-016570:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-016570",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-016570",
	                "Source": "/var/lib/docker/volumes/functional-016570/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-016570",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-016570",
	                "name.minikube.sigs.k8s.io": "functional-016570",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cea48287f6ef85c384c2d0fd7e890db0e03a4817385b96024c8e0a0688fd7962",
	            "SandboxKey": "/var/run/docker/netns/cea48287f6ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-016570": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0f0536305092a99e49a50f7adb5b668b5d3bb1c3c21d82e43be0c155c4e7cfe5",
	                    "EndpointID": "9201dc1ccce436b0c1f5c3cef087a9b08f43da4627ab9f3717ae9bbdfb5de9f3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-016570",
	                        "389fc216adb5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
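The inspect output above shows how minikube publishes the guest API server: container port 8441/tcp is bound on 127.0.0.1 to an ephemeral host port (32786 in this run). A small sketch of reading that mapping back programmatically through the Docker CLI's template support (the container name comes from this report; the program itself is illustrative, not part of the suite):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// The template indexes NetworkSettings.Ports["8441/tcp"][0].HostPort,
		// i.e. the "32786" visible in the inspect output above.
		out, err := exec.Command("docker", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
			"functional-016570").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("apiserver reachable at 127.0.0.1:" + strings.TrimSpace(string(out)))
	}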
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-016570 -n functional-016570
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 logs -n 25: (1.532375405s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-016570 image rm                                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | kicbase/echo-server:functional-016570                                            |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh -- ls                                                      | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | -la /mount-9p                                                                    |                   |         |         |                     |                     |
	| image   | functional-016570 image ls                                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	| ssh     | functional-016570 ssh cat                                                        | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /mount-9p/test-1726483338750366627                                               |                   |         |         |                     |                     |
	| image   | functional-016570 image load                                                     | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh mount |                                                    | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|         | grep 9p; ls -la /mount-9p; cat                                                   |                   |         |         |                     |                     |
	|         | /mount-9p/pod-dates                                                              |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh sudo                                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | umount -f /mount-9p                                                              |                   |         |         |                     |                     |
	| image   | functional-016570 image ls                                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	| image   | functional-016570 image save --daemon                                            | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | kicbase/echo-server:functional-016570                                            |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh findmnt                                                    | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                           |                   |         |         |                     |                     |
	| mount   | -p functional-016570                                                             | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdspecific-port2011867580/001:/mount-9p         |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                              |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh sudo cat                                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /etc/ssl/certs/11189.pem                                                         |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh sudo cat                                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /usr/share/ca-certificates/11189.pem                                             |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh findmnt                                                    | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | -T /mount-9p | grep 9p                                                           |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh sudo cat                                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /etc/ssl/certs/51391683.0                                                        |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh -- ls                                                      | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | -la /mount-9p                                                                    |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh sudo cat                                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /etc/ssl/certs/111892.pem                                                        |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh sudo                                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|         | umount -f /mount-9p                                                              |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh sudo cat                                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /usr/share/ca-certificates/111892.pem                                            |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh sudo cat                                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /etc/ssl/certs/3ec20f2e.0                                                        |                   |         |         |                     |                     |
	| mount   | -p functional-016570                                                             | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1743652396/001:/mount3           |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                           |                   |         |         |                     |                     |
	| mount   | -p functional-016570                                                             | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1743652396/001:/mount2           |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                           |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh findmnt                                                    | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|         | -T /mount1                                                                       |                   |         |         |                     |                     |
	| mount   | -p functional-016570                                                             | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup1743652396/001:/mount1           |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                           |                   |         |         |                     |                     |
	| ssh     | functional-016570 ssh sudo cat                                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|         | /etc/test/nested/copy/11189/hosts                                                |                   |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:42:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:42:07.925802   55153 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:42:07.926076   55153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:07.926087   55153 out.go:358] Setting ErrFile to fd 2...
	I0916 10:42:07.926094   55153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:07.926329   55153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:42:07.926917   55153 out.go:352] Setting JSON to false
	I0916 10:42:07.928083   55153 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1472,"bootTime":1726481856,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:42:07.928201   55153 start.go:139] virtualization: kvm guest
	I0916 10:42:07.930891   55153 out.go:177] * [functional-016570] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:42:07.933206   55153 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:42:07.933437   55153 notify.go:220] Checking for updates...
	I0916 10:42:07.936378   55153 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:42:07.937840   55153 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:42:07.939249   55153 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:42:07.940760   55153 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:42:07.942139   55153 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:42:07.944069   55153 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:42:07.944793   55153 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:42:07.980732   55153 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:42:07.980810   55153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:42:08.038252   55153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:42:08.026410213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:42:08.038402   55153 docker.go:318] overlay module found
	I0916 10:42:08.040535   55153 out.go:177] * Using the docker driver based on existing profile
	I0916 10:42:08.042029   55153 start.go:297] selected driver: docker
	I0916 10:42:08.042043   55153 start.go:901] validating driver "docker" against &{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:42:08.042118   55153 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:42:08.042187   55153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:42:08.096294   55153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:42:08.085371862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:42:08.096876   55153 cni.go:84] Creating CNI manager for ""
	I0916 10:42:08.096923   55153 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:42:08.096974   55153 start.go:340] cluster config:
	{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:42:08.098919   55153 out.go:177] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	c4fcde4fb7e45       07655ddf2eebe       5 seconds ago        Running             kubernetes-dashboard        0                   4817ed22e184b       kubernetes-dashboard-695b96c756-64tpc
	7f85730bd4d94       115053965e86b       11 seconds ago       Running             dashboard-metrics-scraper   0                   395f1e784acbe       dashboard-metrics-scraper-c5db448b4-jvhn6
	4d490c9b7ae90       6e38f40d628db       29 seconds ago       Running             storage-provisioner         2                   b81ffde02718d       storage-provisioner
	d2500e97c949b       6bab7719df100       43 seconds ago       Running             kube-apiserver              0                   9d885083d4265       kube-apiserver-functional-016570
	c8262cd23469c       175ffd71cce3d       43 seconds ago       Running             kube-controller-manager     2                   8b5d374851050       kube-controller-manager-functional-016570
	861dc747735da       2e96e5913fc06       43 seconds ago       Running             etcd                        1                   2cdebcb8c7807       etcd-functional-016570
	f40f6265fc1c6       12968670680f4       About a minute ago   Running             kindnet-cni                 1                   c7f56f796b013       kindnet-5qjpd
	490a48762f629       6e38f40d628db       About a minute ago   Exited              storage-provisioner         1                   b81ffde02718d       storage-provisioner
	2810e4a546750       60c005f310ff3       About a minute ago   Running             kube-proxy                  1                   f4ed79f8dffeb       kube-proxy-w8qkq
	485f2c5cef235       175ffd71cce3d       About a minute ago   Exited              kube-controller-manager     1                   8b5d374851050       kube-controller-manager-functional-016570
	9ff9913af2feb       9aa1fad941575       About a minute ago   Running             kube-scheduler              1                   caa2007696d1b       kube-scheduler-functional-016570
	b8bd1849da6c4       c69fa2e9cbf5f       About a minute ago   Running             coredns                     1                   3d9a434f8b6e5       coredns-7c65d6cfc9-59qm7
	fd0c81e7a39a2       c69fa2e9cbf5f       About a minute ago   Exited              coredns                     0                   3d9a434f8b6e5       coredns-7c65d6cfc9-59qm7
	bf96dac81b725       12968670680f4       About a minute ago   Exited              kindnet-cni                 0                   c7f56f796b013       kindnet-5qjpd
	80095e847084f       60c005f310ff3       About a minute ago   Exited              kube-proxy                  0                   f4ed79f8dffeb       kube-proxy-w8qkq
	0906c5e415b9c       9aa1fad941575       About a minute ago   Exited              kube-scheduler              0                   caa2007696d1b       kube-scheduler-functional-016570
	b4905826c508e       2e96e5913fc06       About a minute ago   Exited              etcd                        0                   2cdebcb8c7807       etcd-functional-016570
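
The table above is the CRI container state captured by the log collector; with the containerd runtime it corresponds to what crictl reports on the node. A sketch for inspecting the same state by hand, assuming the functional-016570 profile from this run is still up:

  # List all CRI containers inside the minikube node, including exited ones
  minikube ssh -p functional-016570 -- sudo crictl ps -a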
	
	
	==> containerd <==
	Sep 16 10:42:16 functional-016570 containerd[4401]: time="2024-09-16T10:42:16.935275465Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:16 functional-016570 containerd[4401]: time="2024-09-16T10:42:16.935983899Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-016570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:18 functional-016570 containerd[4401]: time="2024-09-16T10:42:18.819763422Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:18 functional-016570 containerd[4401]: time="2024-09-16T10:42:18.820779768Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: active requests=0, bytes read=75799822"
	Sep 16 10:42:18 functional-016570 containerd[4401]: time="2024-09-16T10:42:18.822358623Z" level=info msg="ImageCreate event name:\"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:18 functional-016570 containerd[4401]: time="2024-09-16T10:42:18.824264999Z" level=info msg="Pulled image \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" with image id \"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558\", repo tag \"\", repo digest \"docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\", size \"75788960\" in 6.016875863s"
	Sep 16 10:42:18 functional-016570 containerd[4401]: time="2024-09-16T10:42:18.824311531Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\" returns image reference \"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558\""
	Sep 16 10:42:18 functional-016570 containerd[4401]: time="2024-09-16T10:42:18.826798779Z" level=info msg="CreateContainer within sandbox \"4817ed22e184b214d46870728ee1653ce49c9055d735e02434c8cea1b7d1de44\" for container &ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,}"
	Sep 16 10:42:18 functional-016570 containerd[4401]: time="2024-09-16T10:42:18.842373486Z" level=info msg="CreateContainer within sandbox \"4817ed22e184b214d46870728ee1653ce49c9055d735e02434c8cea1b7d1de44\" for &ContainerMetadata{Name:kubernetes-dashboard,Attempt:0,} returns container id \"c4fcde4fb7e4558929a10d0dec11db9887811e378006e6e73e32d54112fa03d7\""
	Sep 16 10:42:18 functional-016570 containerd[4401]: time="2024-09-16T10:42:18.844319395Z" level=info msg="StartContainer for \"c4fcde4fb7e4558929a10d0dec11db9887811e378006e6e73e32d54112fa03d7\""
	Sep 16 10:42:18 functional-016570 containerd[4401]: time="2024-09-16T10:42:18.903825270Z" level=info msg="StartContainer for \"c4fcde4fb7e4558929a10d0dec11db9887811e378006e6e73e32d54112fa03d7\" returns successfully"
	Sep 16 10:42:18 functional-016570 containerd[4401]: time="2024-09-16T10:42:18.954316347Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-016570\""
	Sep 16 10:42:18 functional-016570 containerd[4401]: time="2024-09-16T10:42:18.956395266Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-016570\""
	Sep 16 10:42:18 functional-016570 containerd[4401]: time="2024-09-16T10:42:18.959399542Z" level=info msg="ImageDelete event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
	Sep 16 10:42:18 functional-016570 containerd[4401]: time="2024-09-16T10:42:18.965591644Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-016570\" returns successfully"
	Sep 16 10:42:19 functional-016570 containerd[4401]: time="2024-09-16T10:42:19.245859298Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-016570\""
	Sep 16 10:42:19 functional-016570 containerd[4401]: time="2024-09-16T10:42:19.249325832Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:19 functional-016570 containerd[4401]: time="2024-09-16T10:42:19.249660198Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-016570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:20 functional-016570 containerd[4401]: time="2024-09-16T10:42:20.078251600Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-016570\""
	Sep 16 10:42:20 functional-016570 containerd[4401]: time="2024-09-16T10:42:20.080115910Z" level=info msg="ImageDelete event name:\"docker.io/kicbase/echo-server:functional-016570\""
	Sep 16 10:42:20 functional-016570 containerd[4401]: time="2024-09-16T10:42:20.081280051Z" level=info msg="ImageDelete event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\""
	Sep 16 10:42:20 functional-016570 containerd[4401]: time="2024-09-16T10:42:20.088503497Z" level=info msg="RemoveImage \"kicbase/echo-server:functional-016570\" returns successfully"
	Sep 16 10:42:20 functional-016570 containerd[4401]: time="2024-09-16T10:42:20.643952210Z" level=info msg="ImageCreate event name:\"docker.io/kicbase/echo-server:functional-016570\""
	Sep 16 10:42:20 functional-016570 containerd[4401]: time="2024-09-16T10:42:20.647727926Z" level=info msg="ImageCreate event name:\"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:20 functional-016570 containerd[4401]: time="2024-09-16T10:42:20.648138801Z" level=info msg="ImageUpdate event name:\"docker.io/kicbase/echo-server:functional-016570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	
	
	==> coredns [b8bd1849da6c464b7fbc64f004ea8f6e93596b309cb23e3f75f0493a6c22ebd5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59975 - 12600 "HINFO IN 4686966597786162674.2744546229077384558. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011873953s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
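
The repeated "connection refused" errors against 10.96.0.1:443 are coredns retrying the in-cluster apiserver service while kube-apiserver was down during the restart; once it returned, coredns began serving with an initially unsynced cache (the WARNING above). A hedged sketch for pulling the same logs from a live cluster; k8s-app=kube-dns is the standard coredns pod label in kubeadm-style clusters such as minikube:

  # Tail coredns logs across all replicas in kube-system
  kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50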
	
	
	==> coredns [fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44994 - 49719 "HINFO IN 5811560446017322614.7472127089541594346. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008166038s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-016570
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-016570
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-016570
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_40_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:40:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-016570
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:42:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-016570
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e0bbc72eb56431496705823b26dcd4d
	  System UUID:                9e1cbc84-d044-4524-b143-ca93a6dc5aa0
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-59qm7                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-functional-016570                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         106s
	  kube-system                 kindnet-5qjpd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-functional-016570             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-functional-016570    200m (2%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-w8qkq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-functional-016570             100m (1%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-jvhn6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-64tpc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 100s               kube-proxy       
	  Normal   Starting                 26s                kube-proxy       
	  Normal   NodeAllocatableEnforced  107s               kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 107s               kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 107s               kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  106s               kubelet          Node functional-016570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    106s               kubelet          Node functional-016570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     106s               kubelet          Node functional-016570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           102s               node-controller  Node functional-016570 event: Registered Node functional-016570 in Controller
	  Normal   Starting                 51s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 51s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  47s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  44s (x8 over 47s)  kubelet          Node functional-016570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    44s (x7 over 47s)  kubelet          Node functional-016570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x7 over 47s)  kubelet          Node functional-016570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           38s                node-controller  Node functional-016570 event: Registered Node functional-016570 in Controller
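
This node dump matches kubectl's describe output for the single control-plane node; note the two sets of kubelet startup events, one from the initial boot (~107s ago) and one from the in-test restart (~51s ago). It can be regenerated directly, using the node name from this report:

  # Show node labels, capacity, conditions, and recent events
  kubectl describe node functional-016570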
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [861dc747735da2748e3f4b24b824a36d0a52f89bbf91f6d93373e8e94ec47110] <==
	{"level":"info","ts":"2024-09-16T10:41:40.736533Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736587Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736603Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736848Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:40.736868Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:40.737630Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-16T10:41:40.737696Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:41:40.737861Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:41:40.737967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:41:42.324428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.326180Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-016570 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:41:42.326181Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:41:42.326207Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:41:42.326768Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:41:42.326898Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:41:42.328280Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:41:42.328298Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:41:42.329057Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:41:42.329116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> etcd [b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25] <==
	{"level":"info","ts":"2024-09-16T10:40:33.951156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.951169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.952133Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-016570 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:40:33.952149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952174Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.952458Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952538Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952886Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953028Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953056Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953277Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.954142Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.955433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:40:33.956074Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:41:32.555101Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:41:32.555186Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-016570","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-16T10:41:32.555307Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.555359Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.556893Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.556932Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:41:32.558356Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:41:32.560054Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:32.560161Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:32.560190Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-016570","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 10:42:24 up 24 min,  0 users,  load average: 1.39, 0.97, 0.61
	Linux functional-016570 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75] <==
	I0916 10:40:44.322865       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:40:44.323089       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:40:44.323250       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:40:44.323270       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:40:44.323292       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:40:44.644895       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:40:44.644910       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:40:44.644915       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:40:45.019853       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:40:45.019909       1 metrics.go:61] Registering metrics
	I0916 10:40:45.019969       1 controller.go:374] Syncing nftables rules
	I0916 10:40:54.644919       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:40:54.644955       1 main.go:299] handling current node
	I0916 10:41:04.651830       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:04.651862       1 main.go:299] handling current node
	I0916 10:41:14.648734       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:14.648784       1 main.go:299] handling current node
	
	
	==> kindnet [f40f6265fc1c666726cfc4dfc8b0637a32e85401949dfb2edee2619b1765db77] <==
	W0916 10:41:27.370312       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.370396       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:27.496085       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.496151       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:31.899869       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:31.899923       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:32.540969       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:32.541033       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:32.592684       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:32.592731       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:33.006411       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:33.006451       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0916 10:41:43.445970       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:41:43.446004       1 metrics.go:61] Registering metrics
	I0916 10:41:43.446082       1 controller.go:374] Syncing nftables rules
	I0916 10:41:43.844790       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:43.844836       1 main.go:299] handling current node
	I0916 10:41:53.844621       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:53.844659       1 main.go:299] handling current node
	I0916 10:42:03.848081       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:42:03.848124       1 main.go:299] handling current node
	I0916 10:42:13.845336       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:42:13.845386       1 main.go:299] handling current node
	I0916 10:42:23.845243       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:42:23.845316       1 main.go:299] handling current node
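
Like coredns, the restarted kindnet pod looped on "connection refused" until the apiserver came back at 10:41:43, then resynced its informer caches and resumed its ten-second reconcile loop over the single node. A sketch for tailing these logs on a live cluster; app=kindnet is an assumed pod label for this daemonset:

  # Tail kindnet CNI logs (label selector is an assumption)
  kubectl -n kube-system logs -l app=kindnet --tail=20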
	
	
	==> kube-apiserver [d2500e97c949b2a2e8330e357558411d2893aaafdb03d1ef0422a5a6fb1cb12f] <==
	I0916 10:41:43.422383       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:41:43.422423       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:41:43.422975       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:41:43.423049       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:41:43.423247       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:41:43.424833       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:41:43.424920       1 policy_source.go:224] refreshing policies
	I0916 10:41:43.439068       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:41:43.466711       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:41:43.467901       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:41:43.471775       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:41:43.520505       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:41:44.269900       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:41:44.531189       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0916 10:41:44.532603       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:41:44.536818       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:41:44.865983       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:41:44.962527       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:41:44.972551       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:41:45.027823       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:41:45.034699       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:42:09.231235       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:42:09.266000       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:42:09.379723       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.44.131"}
	I0916 10:42:09.429057       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.65.100"}
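
The apiserver log shows its caches syncing at 10:41:43, which lines up with the moment the other components' retries start succeeding, and then ClusterIP allocation at 10:42:09 for the two dashboard services created by the DashboardCmd test. A sketch for confirming those services, with the namespace taken from the log:

  # List the dashboard services and their allocated ClusterIPs
  kubectl -n kubernetes-dashboard get svc -o wide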
	
	
	==> kube-controller-manager [485f2c5cef235c0182e1a64e3a548bea54de9894193e120b2b717a72b9ef1bff] <==
	I0916 10:41:23.729467       1 serving.go:386] Generated self-signed cert in-memory
	I0916 10:41:24.021064       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 10:41:24.021095       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:41:24.022441       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:41:24.022441       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:41:24.022737       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 10:41:24.022823       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0916 10:41:34.024812       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [c8262cd23469ca086f00137b1fc38c96429b63d514a16f9a905da144ecd2b73c] <==
	I0916 10:42:09.322862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="19.394348ms"
	E0916 10:42:09.322901       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.322862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="19.742072ms"
	E0916 10:42:09.322918       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.330631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="5.717139ms"
	E0916 10:42:09.330672       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.330731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.175138ms"
	E0916 10:42:09.330740       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.338767       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.604226ms"
	E0916 10:42:09.338811       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.339183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.001104ms"
	E0916 10:42:09.339225       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.420221       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="39.404744ms"
	I0916 10:42:09.422278       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="41.065043ms"
	I0916 10:42:09.430005       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.676168ms"
	I0916 10:42:09.430194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="129.11µs"
	I0916 10:42:09.445018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="42.392µs"
	I0916 10:42:09.520924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="100.632911ms"
	I0916 10:42:09.521034       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="57.631µs"
	I0916 10:42:09.521082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="25.339µs"
	I0916 10:42:09.531158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="49.005µs"
	I0916 10:42:13.153143       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.141174ms"
	I0916 10:42:13.153260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="55.696µs"
	I0916 10:42:19.221057       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="50.188807ms"
	I0916 10:42:19.221165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="62.698µs"
	
	
	==> kube-proxy [2810e4a54675045b91d6e2b6996d5595fca99d1ed910f700a9440b05c934282a] <==
	I0916 10:41:23.344360       1 server_linux.go:66] "Using iptables proxy"
	E0916 10:41:23.466145       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:24.666043       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:26.980270       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:31.272411       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:40.759863       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I0916 10:41:57.573875       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:41:57.573958       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:41:57.593375       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:41:57.593437       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:41:57.595286       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:41:57.595663       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:41:57.595695       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:41:57.596863       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:41:57.596985       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:41:57.597145       1 config.go:199] "Starting service config controller"
	I0916 10:41:57.597451       1 config.go:328] "Starting node config controller"
	I0916 10:41:57.597468       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:41:57.597345       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:41:57.697968       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:41:57.698030       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:41:57.698035       1 shared_informer.go:320] Caches are synced for node config
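
The restarted kube-proxy retried its node lookup with backoff (roughly 1s, 2s, 4s, then 9s between attempts) until the apiserver answered at 10:41:57, then set up the iptables proxier and synced its config caches. A sketch for checking the daemonset and recent logs; k8s-app=kube-proxy is the standard kubeadm label:

  # Verify the kube-proxy rollout and tail its logs
  kubectl -n kube-system get ds kube-proxy
  kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=20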
	
	
	==> kube-proxy [80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f] <==
	I0916 10:40:44.050300       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:40:44.179052       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:40:44.179113       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:40:44.196854       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:40:44.196926       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:40:44.198841       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:40:44.199284       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:40:44.199313       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:40:44.200417       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:40:44.200434       1 config.go:199] "Starting service config controller"
	I0916 10:40:44.200460       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:40:44.200461       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:40:44.200579       1 config.go:328] "Starting node config controller"
	I0916 10:40:44.200592       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:40:44.300956       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:40:44.300944       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:40:44.300945       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee] <==
	E0916 10:40:35.625982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:40:35.626272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:40:35.627185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.485194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:40:36.485283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.534754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.534802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.553649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:40:36.553693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.739511       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:40:36.739572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.763525       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:40:36.763583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.768041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:40:36.768094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.782504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.782564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.919306       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:40:36.919357       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:40:39.247701       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:41:22.445138       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 10:41:22.445225       1 run.go:72] "command failed" err="finished without leader elect"
	I0916 10:41:22.445244       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	
	
	==> kube-scheduler [9ff9913af2feb41a804690d65aef168822cd2ac0a456e3642182c64337903889] <==
	W0916 10:41:33.309245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:33.309313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:34.856594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:34.856642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:39.800183       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:39.800261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.256311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.256386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.442961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.443018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.559680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.559763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:43.341030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0916 10:41:43.341153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:41:43.341209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:41:43.341177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341402       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341470       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:41:43.341491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341579       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:41:43.775320       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.038387    5394 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039162    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9924f10d-5beb-43b1-9782-44644a015b56-tmp\") pod \"storage-provisioner\" (UID: \"9924f10d-5beb-43b1-9782-44644a015b56\") " pod="kube-system/storage-provisioner"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039226    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4a00283-1d69-49c4-8c60-264ef3fd7aca-lib-modules\") pod \"kube-proxy-w8qkq\" (UID: \"b4a00283-1d69-49c4-8c60-264ef3fd7aca\") " pod="kube-system/kube-proxy-w8qkq"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039244    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-cni-cfg\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039258    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-xtables-lock\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039324    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-lib-modules\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039348    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4a00283-1d69-49c4-8c60-264ef3fd7aca-xtables-lock\") pod \"kube-proxy-w8qkq\" (UID: \"b4a00283-1d69-49c4-8c60-264ef3fd7aca\") " pod="kube-system/kube-proxy-w8qkq"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.056492    5394 scope.go:117] "RemoveContainer" containerID="c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.056894    5394 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-016570" podUID="03b56925-37e8-4f4c-947d-8798a9b0b1e8"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.080422    5394 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-016570"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.241180    5394 scope.go:117] "RemoveContainer" containerID="490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: E0916 10:41:44.242240    5394 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvpkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod storage-provisioner_kube-system(9924f10d-5beb-43b1-9782-44644a015b56): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: E0916 10:41:44.243435    5394 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="9924f10d-5beb-43b1-9782-44644a015b56"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.264872    5394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-016570" podStartSLOduration=0.264847622 podStartE2EDuration="264.847622ms" podCreationTimestamp="2024-09-16 10:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:41:44.264526883 +0000 UTC m=+10.389456446" watchObservedRunningTime="2024-09-16 10:41:44.264847622 +0000 UTC m=+10.389777177"
	Sep 16 10:41:45 functional-016570 kubelet[5394]: I0916 10:41:45.059426    5394 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-016570" podUID="03b56925-37e8-4f4c-947d-8798a9b0b1e8"
	Sep 16 10:41:45 functional-016570 kubelet[5394]: I0916 10:41:45.955614    5394 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5333b7f22b4ca6fa3369f64c875d053e" path="/var/lib/kubelet/pods/5333b7f22b4ca6fa3369f64c875d053e/volumes"
	Sep 16 10:41:54 functional-016570 kubelet[5394]: I0916 10:41:54.952651    5394 scope.go:117] "RemoveContainer" containerID="490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: E0916 10:42:09.426875    5394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5333b7f22b4ca6fa3369f64c875d053e" containerName="kube-apiserver"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.427533    5394 memory_manager.go:354] "RemoveStaleState removing state" podUID="5333b7f22b4ca6fa3369f64c875d053e" containerName="kube-apiserver"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622483    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f6bb1f19-917d-404c-9fab-b966f900a8c6-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-jvhn6\" (UID: \"f6bb1f19-917d-404c-9fab-b966f900a8c6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-jvhn6"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622547    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8930ff3f-4f5f-41f0-94be-b2685f45ca6c-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-64tpc\" (UID: \"8930ff3f-4f5f-41f0-94be-b2685f45ca6c\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-64tpc"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622580    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcblv\" (UniqueName: \"kubernetes.io/projected/f6bb1f19-917d-404c-9fab-b966f900a8c6-kube-api-access-gcblv\") pod \"dashboard-metrics-scraper-c5db448b4-jvhn6\" (UID: \"f6bb1f19-917d-404c-9fab-b966f900a8c6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-jvhn6"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622611    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfd5x\" (UniqueName: \"kubernetes.io/projected/8930ff3f-4f5f-41f0-94be-b2685f45ca6c-kube-api-access-gfd5x\") pod \"kubernetes-dashboard-695b96c756-64tpc\" (UID: \"8930ff3f-4f5f-41f0-94be-b2685f45ca6c\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-64tpc"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.732338    5394 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:42:19 functional-016570 kubelet[5394]: I0916 10:42:19.171891    5394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-jvhn6" podStartSLOduration=7.606827724 podStartE2EDuration="10.171864194s" podCreationTimestamp="2024-09-16 10:42:09 +0000 UTC" firstStartedPulling="2024-09-16 10:42:10.242082896 +0000 UTC m=+36.367012449" lastFinishedPulling="2024-09-16 10:42:12.807119369 +0000 UTC m=+38.932048919" observedRunningTime="2024-09-16 10:42:13.143610827 +0000 UTC m=+39.268540390" watchObservedRunningTime="2024-09-16 10:42:19.171864194 +0000 UTC m=+45.296793791"
	
	
	==> kubernetes-dashboard [c4fcde4fb7e4558929a10d0dec11db9887811e378006e6e73e32d54112fa03d7] <==
	2024/09/16 10:42:18 Using namespace: kubernetes-dashboard
	2024/09/16 10:42:18 Using in-cluster config to connect to apiserver
	2024/09/16 10:42:18 Using secret token for csrf signing
	2024/09/16 10:42:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 10:42:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 10:42:18 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 10:42:18 Generating JWE encryption key
	2024/09/16 10:42:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 10:42:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 10:42:19 Initializing JWE encryption key from synchronized object
	2024/09/16 10:42:19 Creating in-cluster Sidecar client
	2024/09/16 10:42:19 Serving insecurely on HTTP port: 9090
	2024/09/16 10:42:19 Successful request to sidecar
	2024/09/16 10:42:18 Starting overwatch
	
	
	==> storage-provisioner [490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b] <==
	I0916 10:41:23.246262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 10:41:23.248507       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [4d490c9b7ae90d00a29d06ff834c0ce770ddfef68473b74c781044e7a283344c] <==
	I0916 10:41:55.019536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:41:55.026724       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:41:55.026761       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:42:12.454479       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:42:12.454665       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-016570_189be736-a61c-4399-97b3-ea0b09de3894!
	I0916 10:42:12.454679       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e3e2c42-8555-41e5-b1cf-7a6ddf78f6d7", APIVersion:"v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-016570_189be736-a61c-4399-97b3-ea0b09de3894 became leader
	I0916 10:42:12.555037       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-016570_189be736-a61c-4399-97b3-ea0b09de3894!
	

-- /stdout --
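The scheduler "forbidden" warnings in the dump above are informer list/watch retries racing the apiserver restart; they stop once RBAC is served again (the "Caches are synced" line at 10:41:43). A quick spot-check, assuming a working kubectl against this cluster, would be:

	kubectl auth can-i list services --as=system:kube-scheduler
	kubectl auth can-i list csidrivers.storage.k8s.io --as=system:kube-scheduler

Both should answer "yes" on a healthy cluster; a persistent "no" would point at broken RBAC rather than a transient restart.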
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016570 -n functional-016570
helpers_test.go:261: (dbg) Run:  kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (488.193µs)
helpers_test.go:263: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
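An "exec format error" from fork/exec means the kubectl binary itself cannot be executed on this host, typically an architecture mismatch or a truncated download, not a cluster fault. A minimal sketch of the diagnosis, assuming shell access to the runner:

	file /usr/local/bin/kubectl    # should report an ELF x86-64 executable on this amd64 agent
	uname -m                       # host architecture to compare against

The NodeLabels failure below reports the same error for every kubectl invocation, which points at the binary rather than the cluster.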
--- FAIL: TestFunctional/parallel/MySQL (2.16s)

x
+
TestFunctional/parallel/NodeLabels (2.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-016570 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-016570 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": fork/exec /usr/local/bin/kubectl: exec format error (466.285µs)
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-016570 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": fork/exec /usr/local/bin/kubectl: exec format error
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
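With a working kubectl in place, the same assertion can be reproduced by hand; a minimal equivalent of the test's go-template query, assuming the functional-016570 context, is:

	kubectl --context functional-016570 get nodes --show-labels | tr ',' '\n' | grep minikube.k8s.io/

minikube stamps the minikube.k8s.io/commit, version, updated_at, name, and primary labels onto the node at creation, which is exactly what functional_test.go:227 checks for.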
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-016570
helpers_test.go:235: (dbg) docker inspect functional-016570:

-- stdout --
	[
	    {
	        "Id": "389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b",
	        "Created": "2024-09-16T10:40:22.437729239Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44343,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:40:22.549860744Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hostname",
	        "HostsPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/hosts",
	        "LogPath": "/var/lib/docker/containers/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b/389fc216adb500e0982cda1eb10ce408803b0922cf8ed35732acec23ca975a4b-json.log",
	        "Name": "/functional-016570",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-016570:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-016570",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/merged",
	                "UpperDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/diff",
	                "WorkDir": "/var/lib/docker/overlay2/89f1441ae4b7c160cc54fa69d84178c0aef1412348cbc8b5731e08fc84a1cd48/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-016570",
	                "Source": "/var/lib/docker/volumes/functional-016570/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-016570",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-016570",
	                "name.minikube.sigs.k8s.io": "functional-016570",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cea48287f6ef85c384c2d0fd7e890db0e03a4817385b96024c8e0a0688fd7962",
	            "SandboxKey": "/var/run/docker/netns/cea48287f6ef",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-016570": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "0f0536305092a99e49a50f7adb5b668b5d3bb1c3c21d82e43be0c155c4e7cfe5",
	                    "EndpointID": "9201dc1ccce436b0c1f5c3cef087a9b08f43da4627ab9f3717ae9bbdfb5de9f3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-016570",
	                        "389fc216adb5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
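The full docker inspect dump above is what the post-mortem helper captures; when only a single field is of interest, the same data can be pulled with an inspect format string, for example (using the container name from this run):

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' functional-016570
	docker inspect -f '{{(index .NetworkSettings.Networks "functional-016570").IPAddress}}' functional-016570

which would return the running state and the 192.168.49.2 address shown in the JSON.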
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-016570 -n functional-016570
helpers_test.go:244: <<< TestFunctional/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 logs -n 25: (1.441562418s)
helpers_test.go:252: TestFunctional/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|-----------|------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|  Command  |                            Args                            |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|-----------|------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| config    | functional-016570 config set                               | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | cpus 2                                                     |                   |         |         |                     |                     |
	| config    | functional-016570 config get                               | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | cpus                                                       |                   |         |         |                     |                     |
	| config    | functional-016570 config unset                             | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | cpus                                                       |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh -n                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | functional-016570 sudo cat                                 |                   |         |         |                     |                     |
	|           | /home/docker/cp-test.txt                                   |                   |         |         |                     |                     |
	| config    | functional-016570 config get                               | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | cpus                                                       |                   |         |         |                     |                     |
	| service   | functional-016570 service list                             | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | -o json                                                    |                   |         |         |                     |                     |
	| start     | -p functional-016570                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --dry-run --memory                                         |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                    |                   |         |         |                     |                     |
	|           | --driver=docker                                            |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                             |                   |         |         |                     |                     |
	| start     | -p functional-016570                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --dry-run --memory                                         |                   |         |         |                     |                     |
	|           | 250MB --alsologtostderr                                    |                   |         |         |                     |                     |
	|           | --driver=docker                                            |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                             |                   |         |         |                     |                     |
	| cp        | functional-016570 cp                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | functional-016570:/home/docker/cp-test.txt                 |                   |         |         |                     |                     |
	|           | /tmp/TestFunctionalparallelCpCmd2928196455/001/cp-test.txt |                   |         |         |                     |                     |
	| service   | functional-016570 service                                  | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --namespace=default --https                                |                   |         |         |                     |                     |
	|           | --url hello-node                                           |                   |         |         |                     |                     |
	| start     | -p functional-016570                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --dry-run --alsologtostderr                                |                   |         |         |                     |                     |
	|           | -v=1 --driver=docker                                       |                   |         |         |                     |                     |
	|           | --container-runtime=containerd                             |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh -n                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | functional-016570 sudo cat                                 |                   |         |         |                     |                     |
	|           | /home/docker/cp-test.txt                                   |                   |         |         |                     |                     |
	| dashboard | --url --port 36195                                         | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | -p functional-016570                                       |                   |         |         |                     |                     |
	|           | --alsologtostderr -v=1                                     |                   |         |         |                     |                     |
	| service   | functional-016570                                          | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | service hello-node --url                                   |                   |         |         |                     |                     |
	|           | --format={{.IP}}                                           |                   |         |         |                     |                     |
	| cp        | functional-016570 cp                                       | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | testdata/cp-test.txt                                       |                   |         |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                            |                   |         |         |                     |                     |
	| service   | functional-016570 service                                  | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | hello-node --url                                           |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh -n                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | functional-016570 sudo cat                                 |                   |         |         |                     |                     |
	|           | /tmp/does/not/exist/cp-test.txt                            |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh echo                                 | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | hello                                                      |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh cat                                  | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|           | /etc/hostname                                              |                   |         |         |                     |                     |
	| tunnel    | functional-016570 tunnel                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --alsologtostderr                                          |                   |         |         |                     |                     |
	| tunnel    | functional-016570 tunnel                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --alsologtostderr                                          |                   |         |         |                     |                     |
	| tunnel    | functional-016570 tunnel                                   | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | --alsologtostderr                                          |                   |         |         |                     |                     |
	| license   |                                                            | minikube          | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	| ssh       | functional-016570 ssh sudo                                 | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | systemctl is-active docker                                 |                   |         |         |                     |                     |
	| ssh       | functional-016570 ssh sudo                                 | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC |                     |
	|           | systemctl is-active crio                                   |                   |         |         |                     |                     |
	|-----------|------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:42:07
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:42:07.925802   55153 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:42:07.926076   55153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:07.926087   55153 out.go:358] Setting ErrFile to fd 2...
	I0916 10:42:07.926094   55153 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:07.926329   55153 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:42:07.926917   55153 out.go:352] Setting JSON to false
	I0916 10:42:07.928083   55153 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1472,"bootTime":1726481856,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:42:07.928201   55153 start.go:139] virtualization: kvm guest
	I0916 10:42:07.930891   55153 out.go:177] * [functional-016570] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:42:07.933206   55153 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:42:07.933437   55153 notify.go:220] Checking for updates...
	I0916 10:42:07.936378   55153 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:42:07.937840   55153 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:42:07.939249   55153 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:42:07.940760   55153 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:42:07.942139   55153 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:42:07.944069   55153 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:42:07.944793   55153 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:42:07.980732   55153 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:42:07.980810   55153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:42:08.038252   55153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:42:08.026410213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:42:08.038402   55153 docker.go:318] overlay module found
	I0916 10:42:08.040535   55153 out.go:177] * Using the docker driver based on existing profile
	I0916 10:42:08.042029   55153 start.go:297] selected driver: docker
	I0916 10:42:08.042043   55153 start.go:901] validating driver "docker" against &{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:42:08.042118   55153 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:42:08.042187   55153 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:42:08.096294   55153 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:42:08.085371862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:42:08.096876   55153 cni.go:84] Creating CNI manager for ""
	I0916 10:42:08.096923   55153 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:42:08.096974   55153 start.go:340] cluster config:
	{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:42:08.098919   55153 out.go:177] * dry-run validation complete!
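
The two cluster-config dumps above show the restart validating the existing functional-016570 profile unchanged, including the apiserver ExtraOption enable-admission-plugins=NamespaceAutoProvision. As a sketch (not the exact invocation the test harness used), a profile with this shape comes from a start command along these lines:

    minikube start -p functional-016570 --driver=docker \
      --container-runtime=containerd --memory=4000 --cpus=2 \
      --apiserver-port=8441 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision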
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	7f85730bd4d94       115053965e86b       1 second ago         Running             dashboard-metrics-scraper   0                   395f1e784acbe       dashboard-metrics-scraper-c5db448b4-jvhn6
	4d490c9b7ae90       6e38f40d628db       19 seconds ago       Running             storage-provisioner         2                   b81ffde02718d       storage-provisioner
	d2500e97c949b       6bab7719df100       33 seconds ago       Running             kube-apiserver              0                   9d885083d4265       kube-apiserver-functional-016570
	c8262cd23469c       175ffd71cce3d       33 seconds ago       Running             kube-controller-manager     2                   8b5d374851050       kube-controller-manager-functional-016570
	861dc747735da       2e96e5913fc06       33 seconds ago       Running             etcd                        1                   2cdebcb8c7807       etcd-functional-016570
	f40f6265fc1c6       12968670680f4       51 seconds ago       Running             kindnet-cni                 1                   c7f56f796b013       kindnet-5qjpd
	490a48762f629       6e38f40d628db       51 seconds ago       Exited              storage-provisioner         1                   b81ffde02718d       storage-provisioner
	2810e4a546750       60c005f310ff3       51 seconds ago       Running             kube-proxy                  1                   f4ed79f8dffeb       kube-proxy-w8qkq
	485f2c5cef235       175ffd71cce3d       51 seconds ago       Exited              kube-controller-manager     1                   8b5d374851050       kube-controller-manager-functional-016570
	9ff9913af2feb       9aa1fad941575       51 seconds ago       Running             kube-scheduler              1                   caa2007696d1b       kube-scheduler-functional-016570
	b8bd1849da6c4       c69fa2e9cbf5f       51 seconds ago       Running             coredns                     1                   3d9a434f8b6e5       coredns-7c65d6cfc9-59qm7
	fd0c81e7a39a2       c69fa2e9cbf5f       About a minute ago   Exited              coredns                     0                   3d9a434f8b6e5       coredns-7c65d6cfc9-59qm7
	bf96dac81b725       12968670680f4       About a minute ago   Exited              kindnet-cni                 0                   c7f56f796b013       kindnet-5qjpd
	80095e847084f       60c005f310ff3       About a minute ago   Exited              kube-proxy                  0                   f4ed79f8dffeb       kube-proxy-w8qkq
	0906c5e415b9c       9aa1fad941575       About a minute ago   Exited              kube-scheduler              0                   caa2007696d1b       kube-scheduler-functional-016570
	b4905826c508e       2e96e5913fc06       About a minute ago   Exited              etcd                        0                   2cdebcb8c7807       etcd-functional-016570
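
The table above is CRI-level container state from the node; assuming crictl is available inside the minikube container (it is in the default kicbase image), the same view can be reproduced with:

    minikube -p functional-016570 ssh -- sudo crictl ps -a

Each Exited entry has a newer Running replacement in the same pod sandbox; these are the pre-restart generation of the control plane and are expected after the functional restart test.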
	
	
	==> containerd <==
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.081290451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.081519917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.081632718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.082467994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.134165522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.134288920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.134627837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.135160649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.239472370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:dashboard-metrics-scraper-c5db448b4-jvhn6,Uid:f6bb1f19-917d-404c-9fab-b966f900a8c6,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"395f1e784acbe7fe418e3072ee9988263c9eb72e57f3f6cf96ac518590ba79da\""
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.244086489Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\""
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.246483361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kubernetes-dashboard-695b96c756-64tpc,Uid:8930ff3f-4f5f-41f0-94be-b2685f45ca6c,Namespace:kubernetes-dashboard,Attempt:0,} returns sandbox id \"4817ed22e184b214d46870728ee1653ce49c9055d735e02434c8cea1b7d1de44\""
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.249544155Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 16 10:42:10 functional-016570 containerd[4401]: time="2024-09-16T10:42:10.864690766Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.802821956Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.803624095Z" level=info msg="stop pulling image docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: active requests=0, bytes read=19757298"
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.804936014Z" level=info msg="ImageCreate event name:\"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.806015778Z" level=info msg="Pulled image \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" with image id \"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7\", repo tag \"\", repo digest \"docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\", size \"19746404\" in 2.561692105s"
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.806055104Z" level=info msg="PullImage \"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\" returns image reference \"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7\""
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.807348878Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\""
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.808543096Z" level=info msg="CreateContainer within sandbox \"395f1e784acbe7fe418e3072ee9988263c9eb72e57f3f6cf96ac518590ba79da\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.808771404Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.819891593Z" level=info msg="CreateContainer within sandbox \"395f1e784acbe7fe418e3072ee9988263c9eb72e57f3f6cf96ac518590ba79da\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"7f85730bd4d94cf4fd50f2f026d885dc30954aa647a12600ead0766929560615\""
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.820422212Z" level=info msg="StartContainer for \"7f85730bd4d94cf4fd50f2f026d885dc30954aa647a12600ead0766929560615\""
	Sep 16 10:42:12 functional-016570 containerd[4401]: time="2024-09-16T10:42:12.860740687Z" level=info msg="StartContainer for \"7f85730bd4d94cf4fd50f2f026d885dc30954aa647a12600ead0766929560615\" returns successfully"
	Sep 16 10:42:13 functional-016570 containerd[4401]: time="2024-09-16T10:42:13.406600747Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
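
The recurring "failed to decode hosts.toml" / "invalid `host` tree" errors mean containerd found a registry hosts file it could not parse as TOML; the image pulls above still complete, so this is noise rather than the failure cause. For reference, assuming the default certs.d layout (/etc/containerd/certs.d/<registry>/hosts.toml), a minimal well-formed file looks like:

    server = "https://registry-1.docker.io"

    [host."https://registry-1.docker.io"]
      capabilities = ["pull", "resolve"]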
	
	
	==> coredns [b8bd1849da6c464b7fbc64f004ea8f6e93596b309cb23e3f75f0493a6c22ebd5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59975 - 12600 "HINFO IN 4686966597786162674.2744546229077384558. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011873953s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
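
This CoreDNS instance came up while the apiserver was still restarting, hence the connection-refused list/watch retries against 10.96.0.1:443 and the warning about serving with an unsynced API; the answered HINFO self-check shows DNS itself was serving throughout. Recovery can be confirmed with ordinary kubectl:

    kubectl -n kube-system get pods -l k8s-app=kube-dns
    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20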
	
	
	==> coredns [fd0c81e7a39a2566405ad2950426958ab0d7abfe073ce6517f67e87f2cc2dabe] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44994 - 49719 "HINFO IN 5811560446017322614.7472127089541594346. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008166038s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-016570
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-016570
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=functional-016570
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_40_38_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:40:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-016570
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:42:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:41:43 +0000   Mon, 16 Sep 2024 10:40:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-016570
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e0bbc72eb56431496705823b26dcd4d
	  System UUID:                9e1cbc84-d044-4524-b143-ca93a6dc5aa0
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-59qm7                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     91s
	  kube-system                 etcd-functional-016570                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         96s
	  kube-system                 kindnet-5qjpd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-functional-016570             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-functional-016570    200m (2%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-w8qkq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-functional-016570             100m (1%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-jvhn6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-64tpc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 90s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   NodeAllocatableEnforced  97s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 97s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 97s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  96s                kubelet          Node functional-016570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    96s                kubelet          Node functional-016570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     96s                kubelet          Node functional-016570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           92s                node-controller  Node functional-016570 event: Registered Node functional-016570 in Controller
	  Normal   Starting                 41s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 41s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  37s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  34s (x8 over 37s)  kubelet          Node functional-016570 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s (x7 over 37s)  kubelet          Node functional-016570 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s (x7 over 37s)  kubelet          Node functional-016570 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           28s                node-controller  Node functional-016570 event: Registered Node functional-016570 in Controller
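
The node description above is healthy: Ready since 10:40:35, no taints, and the doubled kubelet Starting/CgroupV1/NodeAllocatableEnforced events simply record the mid-test restart. It is effectively the output of:

    kubectl describe node functional-016570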
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [861dc747735da2748e3f4b24b824a36d0a52f89bbf91f6d93373e8e94ec47110] <==
	{"level":"info","ts":"2024-09-16T10:41:40.736533Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736587Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736603Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:41:40.736848Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:40.736868Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:40.737630Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-16T10:41:40.737696Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:41:40.737861Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:41:40.737967Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:41:42.324428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:41:42.324536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.324592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:41:42.326180Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-016570 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:41:42.326181Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:41:42.326207Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:41:42.326768Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:41:42.326898Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:41:42.328280Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:41:42.328298Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:41:42.329057Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:41:42.329116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> etcd [b4905826c508e92d29a39ef565f1c838d026ed2e1af8256da846ca2ca7e33d25] <==
	{"level":"info","ts":"2024-09-16T10:40:33.951156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.951169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-16T10:40:33.952133Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-016570 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:40:33.952149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952176Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:40:33.952174Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.952458Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952538Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:40:33.952886Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953028Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953056Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:40:33.953277Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.954142Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:40:33.955433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:40:33.956074Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:41:32.555101Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-09-16T10:41:32.555186Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-016570","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-09-16T10:41:32.555307Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.555359Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.556893Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-09-16T10:41:32.556932Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-09-16T10:41:32.558356Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:41:32.560054Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:32.560161Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-16T10:41:32.560190Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-016570","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 10:42:14 up 24 min,  0 users,  load average: 1.28, 0.94, 0.59
	Linux functional-016570 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [bf96dac81b725b0cdd05c80d46fccb31fba58eb314cbefaf4fa45648dd564d75] <==
	I0916 10:40:44.322865       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 10:40:44.323089       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0916 10:40:44.323250       1 main.go:148] setting mtu 1500 for CNI 
	I0916 10:40:44.323270       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 10:40:44.323292       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 10:40:44.644895       1 controller.go:334] Starting controller kube-network-policies
	I0916 10:40:44.644910       1 controller.go:338] Waiting for informer caches to sync
	I0916 10:40:44.644915       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 10:40:45.019853       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:40:45.019909       1 metrics.go:61] Registering metrics
	I0916 10:40:45.019969       1 controller.go:374] Syncing nftables rules
	I0916 10:40:54.644919       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:40:54.644955       1 main.go:299] handling current node
	I0916 10:41:04.651830       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:04.651862       1 main.go:299] handling current node
	I0916 10:41:14.648734       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:14.648784       1 main.go:299] handling current node
	
	
	==> kindnet [f40f6265fc1c666726cfc4dfc8b0637a32e85401949dfb2edee2619b1765db77] <==
	W0916 10:41:27.212636       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.212709       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:27.370312       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.370396       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:27.496085       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:27.496151       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:31.899869       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:31.899923       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:32.540969       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:32.541033       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:32.592684       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:32.592731       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: Get "https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	W0916 10:41:33.006411       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	E0916 10:41:33.006451       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	I0916 10:41:43.445970       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 10:41:43.446004       1 metrics.go:61] Registering metrics
	I0916 10:41:43.446082       1 controller.go:374] Syncing nftables rules
	I0916 10:41:43.844790       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:43.844836       1 main.go:299] handling current node
	I0916 10:41:53.844621       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:41:53.844659       1 main.go:299] handling current node
	I0916 10:42:03.848081       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:42:03.848124       1 main.go:299] handling current node
	I0916 10:42:13.845336       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:42:13.845386       1 main.go:299] handling current node
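
Once the apiserver returned at 10:41:43 the kindnet reflector errors stop and the pod resumes its ten-second single-node handling loop. Assuming minikube's default kindnet DaemonSet name and labels (app=kindnet; adjust if your deployment differs), its state is visible with:

    kubectl -n kube-system get daemonset kindnet
    kubectl -n kube-system logs -l app=kindnet --tail=20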
	
	
	==> kube-apiserver [d2500e97c949b2a2e8330e357558411d2893aaafdb03d1ef0422a5a6fb1cb12f] <==
	I0916 10:41:43.422383       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:41:43.422423       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:41:43.422975       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:41:43.423049       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:41:43.423247       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:41:43.424833       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:41:43.424920       1 policy_source.go:224] refreshing policies
	I0916 10:41:43.439068       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:41:43.466711       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:41:43.467901       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:41:43.471775       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:41:43.520505       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:41:44.269900       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:41:44.531189       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0916 10:41:44.532603       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:41:44.536818       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:41:44.865983       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:41:44.962527       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:41:44.972551       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:41:45.027823       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:41:45.034699       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:42:09.231235       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 10:42:09.266000       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:42:09.379723       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.44.131"}
	I0916 10:42:09.429057       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.65.100"}
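
The tail of this log is the apiserver-side half of the dashboard rollout: admission evaluators registered for the new namespace and ReplicaSets, then ClusterIPs allocated for the two dashboard Services. Quick verification:

    kubectl -n kubernetes-dashboard get svc,deploy,pods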
	
	
	==> kube-controller-manager [485f2c5cef235c0182e1a64e3a548bea54de9894193e120b2b717a72b9ef1bff] <==
	I0916 10:41:23.729467       1 serving.go:386] Generated self-signed cert in-memory
	I0916 10:41:24.021064       1 controllermanager.go:197] "Starting" version="v1.31.1"
	I0916 10:41:24.021095       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:41:24.022441       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 10:41:24.022441       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 10:41:24.022737       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0916 10:41:24.022823       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0916 10:41:34.024812       1 controllermanager.go:242] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-controller-manager [c8262cd23469ca086f00137b1fc38c96429b63d514a16f9a905da144ecd2b73c] <==
	I0916 10:42:09.302132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="21.672713ms"
	E0916 10:42:09.302164       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.322862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="19.394348ms"
	E0916 10:42:09.322901       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.322862       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="19.742072ms"
	E0916 10:42:09.322918       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.330631       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="5.717139ms"
	E0916 10:42:09.330672       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.330731       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="6.175138ms"
	E0916 10:42:09.330740       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.338767       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="3.604226ms"
	E0916 10:42:09.338811       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.339183       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="4.001104ms"
	E0916 10:42:09.339225       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I0916 10:42:09.420221       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="39.404744ms"
	I0916 10:42:09.422278       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="41.065043ms"
	I0916 10:42:09.430005       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.676168ms"
	I0916 10:42:09.430194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="129.11µs"
	I0916 10:42:09.445018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="42.392µs"
	I0916 10:42:09.520924       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="100.632911ms"
	I0916 10:42:09.521034       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="57.631µs"
	I0916 10:42:09.521082       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="25.339µs"
	I0916 10:42:09.531158       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="49.005µs"
	I0916 10:42:13.153143       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.141174ms"
	I0916 10:42:13.153260       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="55.696µs"
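
The repeated 'serviceaccount "kubernetes-dashboard" not found' errors are an ordering race during the addon rollout: the ReplicaSet controller tries to create pods before the ServiceAccount object exists and retries; the later Finished-syncing lines without a paired error (10:42:09.42 onward) show it resolving within the same second. Had it persisted, the missing object would be visible directly:

    kubectl -n kubernetes-dashboard get serviceaccounts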
	
	
	==> kube-proxy [2810e4a54675045b91d6e2b6996d5595fca99d1ed910f700a9440b05c934282a] <==
	I0916 10:41:23.344360       1 server_linux.go:66] "Using iptables proxy"
	E0916 10:41:23.466145       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:24.666043       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:26.980270       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:31.272411       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	E0916 10:41:40.759863       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-016570\": dial tcp 192.168.49.2:8441: connect: connection refused"
	I0916 10:41:57.573875       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:41:57.573958       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:41:57.593375       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:41:57.593437       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:41:57.595286       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:41:57.595663       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:41:57.595695       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:41:57.596863       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:41:57.596985       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:41:57.597145       1 config.go:199] "Starting service config controller"
	I0916 10:41:57.597451       1 config.go:328] "Starting node config controller"
	I0916 10:41:57.597468       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:41:57.597345       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:41:57.697968       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:41:57.698030       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:41:57.698035       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [80095e847084fa5775086821e6be1bb6e7bae0fe6a66745b19cb9b75e266bc3f] <==
	I0916 10:40:44.050300       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:40:44.179052       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:40:44.179113       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:40:44.196854       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:40:44.196926       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:40:44.198841       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:40:44.199284       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:40:44.199313       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:40:44.200417       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:40:44.200434       1 config.go:199] "Starting service config controller"
	I0916 10:40:44.200460       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:40:44.200461       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:40:44.200579       1 config.go:328] "Starting node config controller"
	I0916 10:40:44.200592       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:40:44.300956       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:40:44.300944       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:40:44.300945       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [0906c5e415b9c8b7cf0dc6e35daf1caed07aec8e00dc68457a9203cdeaa0fcee] <==
	E0916 10:40:35.625982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:40:35.626272       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:35.626178       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 10:40:35.627185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.485194       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:40:36.485283       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.534754       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.534802       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.553649       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:40:36.553693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.739511       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 10:40:36.739572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.763525       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:40:36.763583       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.768041       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:40:36.768094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.782504       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:40:36.782564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:40:36.919306       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:40:36.919357       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 10:40:39.247701       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:41:22.445138       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0916 10:41:22.445225       1 run.go:72] "command failed" err="finished without leader elect"
	I0916 10:41:22.445244       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	
	
	==> kube-scheduler [9ff9913af2feb41a804690d65aef168822cd2ac0a456e3642182c64337903889] <==
	W0916 10:41:33.309245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:33.309313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:34.856594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:34.856642       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:39.800183       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:39.800261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.256311       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.256386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.442961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.443018       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:40.559680       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8441: connect: connection refused
	E0916 10:41:40.559763       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError"
	W0916 10:41:43.341030       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0916 10:41:43.341153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:41:43.341209       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0916 10:41:43.341177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341402       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341424       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341470       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 10:41:43.341491       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:41:43.341579       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:41:43.341738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0916 10:41:43.775320       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:41:43 functional-016570 kubelet[5394]: time="2024-09-16T10:41:43Z" level=warning msg="Failed to remove cgroup (will retry)" error="rmdir /sys/fs/cgroup/devices/kubepods/burstable/pod5333b7f22b4ca6fa3369f64c875d053e/5b66d77e8e33400b91593c23cc79092e1262597c431c960d97c2f3351c50e961: device or resource busy"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.038387    5394 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039162    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9924f10d-5beb-43b1-9782-44644a015b56-tmp\") pod \"storage-provisioner\" (UID: \"9924f10d-5beb-43b1-9782-44644a015b56\") " pod="kube-system/storage-provisioner"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039226    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4a00283-1d69-49c4-8c60-264ef3fd7aca-lib-modules\") pod \"kube-proxy-w8qkq\" (UID: \"b4a00283-1d69-49c4-8c60-264ef3fd7aca\") " pod="kube-system/kube-proxy-w8qkq"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039244    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-cni-cfg\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039258    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-xtables-lock\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039324    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ee89403-0943-480c-9f48-4b25a0198f6d-lib-modules\") pod \"kindnet-5qjpd\" (UID: \"8ee89403-0943-480c-9f48-4b25a0198f6d\") " pod="kube-system/kindnet-5qjpd"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.039348    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4a00283-1d69-49c4-8c60-264ef3fd7aca-xtables-lock\") pod \"kube-proxy-w8qkq\" (UID: \"b4a00283-1d69-49c4-8c60-264ef3fd7aca\") " pod="kube-system/kube-proxy-w8qkq"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.056492    5394 scope.go:117] "RemoveContainer" containerID="c1a0361849f33223014e152d4eda266616bbc55966122a9cd0716827729e4171"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.056894    5394 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-016570" podUID="03b56925-37e8-4f4c-947d-8798a9b0b1e8"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.080422    5394 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-016570"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.241180    5394 scope.go:117] "RemoveContainer" containerID="490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: E0916 10:41:44.242240    5394 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bvpkq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod storage-provisioner_kube-system(9924f10d-5beb-43b1-9782-44644a015b56): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: E0916 10:41:44.243435    5394 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID="9924f10d-5beb-43b1-9782-44644a015b56"
	Sep 16 10:41:44 functional-016570 kubelet[5394]: I0916 10:41:44.264872    5394 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-016570" podStartSLOduration=0.264847622 podStartE2EDuration="264.847622ms" podCreationTimestamp="2024-09-16 10:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:41:44.264526883 +0000 UTC m=+10.389456446" watchObservedRunningTime="2024-09-16 10:41:44.264847622 +0000 UTC m=+10.389777177"
	Sep 16 10:41:45 functional-016570 kubelet[5394]: I0916 10:41:45.059426    5394 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-016570" podUID="03b56925-37e8-4f4c-947d-8798a9b0b1e8"
	Sep 16 10:41:45 functional-016570 kubelet[5394]: I0916 10:41:45.955614    5394 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5333b7f22b4ca6fa3369f64c875d053e" path="/var/lib/kubelet/pods/5333b7f22b4ca6fa3369f64c875d053e/volumes"
	Sep 16 10:41:54 functional-016570 kubelet[5394]: I0916 10:41:54.952651    5394 scope.go:117] "RemoveContainer" containerID="490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: E0916 10:42:09.426875    5394 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5333b7f22b4ca6fa3369f64c875d053e" containerName="kube-apiserver"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.427533    5394 memory_manager.go:354] "RemoveStaleState removing state" podUID="5333b7f22b4ca6fa3369f64c875d053e" containerName="kube-apiserver"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622483    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f6bb1f19-917d-404c-9fab-b966f900a8c6-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-jvhn6\" (UID: \"f6bb1f19-917d-404c-9fab-b966f900a8c6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-jvhn6"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622547    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8930ff3f-4f5f-41f0-94be-b2685f45ca6c-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-64tpc\" (UID: \"8930ff3f-4f5f-41f0-94be-b2685f45ca6c\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-64tpc"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622580    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcblv\" (UniqueName: \"kubernetes.io/projected/f6bb1f19-917d-404c-9fab-b966f900a8c6-kube-api-access-gcblv\") pod \"dashboard-metrics-scraper-c5db448b4-jvhn6\" (UID: \"f6bb1f19-917d-404c-9fab-b966f900a8c6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-jvhn6"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.622611    5394 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfd5x\" (UniqueName: \"kubernetes.io/projected/8930ff3f-4f5f-41f0-94be-b2685f45ca6c-kube-api-access-gfd5x\") pod \"kubernetes-dashboard-695b96c756-64tpc\" (UID: \"8930ff3f-4f5f-41f0-94be-b2685f45ca6c\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-64tpc"
	Sep 16 10:42:09 functional-016570 kubelet[5394]: I0916 10:42:09.732338    5394 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	
	
	==> storage-provisioner [490a48762f6299610dde37755b7d1a88c8a68ac418dfbdab150f25fa336a052b] <==
	I0916 10:41:23.246262       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 10:41:23.248507       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [4d490c9b7ae90d00a29d06ff834c0ce770ddfef68473b74c781044e7a283344c] <==
	I0916 10:41:55.019536       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 10:41:55.026724       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 10:41:55.026761       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 10:42:12.454479       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 10:42:12.454665       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-016570_189be736-a61c-4399-97b3-ea0b09de3894!
	I0916 10:42:12.454679       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1e3e2c42-8555-41e5-b1cf-7a6ddf78f6d7", APIVersion:"v1", ResourceVersion:"588", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-016570_189be736-a61c-4399-97b3-ea0b09de3894 became leader
	I0916 10:42:12.555037       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-016570_189be736-a61c-4399-97b3-ea0b09de3894!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-016570 -n functional-016570
helpers_test.go:261: (dbg) Run:  kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (493.31µs)
helpers_test.go:263: kubectl --context functional-016570 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/NodeLabels (2.07s)
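Note: "fork/exec /usr/local/bin/kubectl: exec format error" (seen here and in most of the remaining kubectl-dependent failures below) means the kernel refused to execute the binary at all; on a linux/amd64 runner this almost always indicates a kubectl built for a different OS or architecture. A minimal diagnostic sketch in Go, assuming an ELF host and reusing the path from the failure above (the expected-machine comparison is illustrative, not part of the test suite):

package main

import (
	"debug/elf"
	"fmt"
	"log"
	"runtime"
)

func main() {
	// Path taken from the failing test output.
	const path = "/usr/local/bin/kubectl"

	f, err := elf.Open(path)
	if err != nil {
		// A non-ELF file (for example a Mach-O binary copied from macOS)
		// fails here, which by itself would explain an exec format error.
		log.Fatalf("not a readable ELF binary: %v", err)
	}
	defer f.Close()

	fmt.Printf("host: %s/%s\n", runtime.GOOS, runtime.GOARCH)
	fmt.Printf("binary: machine=%v class=%v\n", f.Machine, f.Class)
	// On an amd64 runner the machine should be elf.EM_X86_64; anything
	// else (for example elf.EM_AARCH64) reproduces the exec format error.
}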

TestFunctional/parallel/ServiceCmd/DeployApp (0s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-016570 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1439: (dbg) Non-zero exit: kubectl --context functional-016570 create deployment hello-node --image=registry.k8s.io/echoserver:1.8: fork/exec /usr/local/bin/kubectl: exec format error (528.14µs)
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-016570 create deployment hello-node --image=registry.k8s.io/echoserver:1.8": fork/exec /usr/local/bin/kubectl: exec format error.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.00s)

TestFunctional/parallel/ServiceCmd/List (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 service list
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"|-------------|------------|--------------|-----|\n|  NAMESPACE  |    NAME    | TARGET PORT  | URL |\n|-------------|------------|--------------|-----|\n| default     | kubernetes | No node port |     |\n| kube-system | kube-dns   | No node port |     |\n|-------------|------------|--------------|-----|\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.34s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 service list -o json
functional_test.go:1494: Took "383.460285ms" to run "out/minikube-linux-amd64 -p functional-016570 service list -o json"
functional_test.go:1498: expected the json of 'service list' to include "hello-node" but got *"[{\"Namespace\":\"default\",\"Name\":\"kubernetes\",\"URLs\":[],\"PortNames\":[\"No node port\"]},{\"Namespace\":\"kube-system\",\"Name\":\"kube-dns\",\"URLs\":[],\"PortNames\":[\"No node port\"]}]"*. args: "out/minikube-linux-amd64 -p functional-016570 service list -o json"
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)
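The captured JSON above is well-formed; hello-node is simply absent because the kubectl-based deploy step never ran. A minimal sketch of parsing that output in Go, with struct fields mirroring the keys in the captured message and the sample input copied from the failure above:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Field names mirror the keys seen in the captured "service list -o json" output.
type svcEntry struct {
	Namespace string   `json:"Namespace"`
	Name      string   `json:"Name"`
	URLs      []string `json:"URLs"`
	PortNames []string `json:"PortNames"`
}

func main() {
	// Sample copied from the failure message above.
	raw := `[{"Namespace":"default","Name":"kubernetes","URLs":[],"PortNames":["No node port"]},{"Namespace":"kube-system","Name":"kube-dns","URLs":[],"PortNames":["No node port"]}]`

	var entries []svcEntry
	if err := json.Unmarshal([]byte(raw), &entries); err != nil {
		log.Fatal(err)
	}
	found := false
	for _, e := range entries {
		if e.Name == "hello-node" {
			found = true
			break
		}
	}
	fmt.Println("hello-node present:", found) // false for this run
}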

TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016570 service --namespace=default --https --url hello-node: exit status 115 (363.893964ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

** /stderr **
functional_test.go:1511: failed to get service url. args "out/minikube-linux-amd64 -p functional-016570 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

TestFunctional/parallel/ServiceCmd/Format (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016570 service hello-node --url --format={{.IP}}: exit status 115 (317.401049ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

** /stderr **
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-016570 service hello-node --url --format={{.IP}}": exit status 115
functional_test.go:1548: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.32s)

TestFunctional/parallel/ServiceCmd/URL (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016570 service hello-node --url: exit status 115 (321.147639ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_NOT_FOUND: Service 'hello-node' was not found in 'default' namespace.
	You may select another namespace by using 'minikube service hello-node -n <namespace>'. Or list out all the services using 'minikube service list'

** /stderr **
functional_test.go:1561: failed to get service url. args: "out/minikube-linux-amd64 -p functional-016570 service hello-node --url": exit status 115
functional_test.go:1565: found endpoint for hello-node: 
functional_test.go:1573: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.32s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-016570 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-016570 apply -f testdata/testsvc.yaml: fork/exec /usr/local/bin/kubectl: exec format error (445.297µs)
functional_test_tunnel_test.go:214: kubectl --context functional-016570 apply -f testdata/testsvc.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (104.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-016570 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-016570 get svc nginx-svc: fork/exec /usr/local/bin/kubectl: exec format error (552.114µs)
functional_test_tunnel_test.go:292: kubectl --context functional-016570 get svc nginx-svc failed: fork/exec /usr/local/bin/kubectl: exec format error
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (104.48s)
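The empty URL in `failed to hit nginx at "http://"` follows directly from the kubectl failure: the service's IP could never be read, so the request had no host. A rough sketch of this kind of polling check (a simplification under stated assumptions, not the suite's actual implementation); passing the empty URL reproduces the "no Host in request URL" error verbatim:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// Poll the given URL until the body contains the nginx welcome page or the
// timeout expires. The string being matched is the one the test expects.
func hitNginx(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if strings.Contains(string(body), "Welcome to nginx!") {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("never saw nginx welcome page at %q", url)
}

func main() {
	// With no service IP the URL collapses to "http://", which is exactly
	// the failure mode recorded above.
	if err := hitNginx("http://", 10*time.Second); err != nil {
		fmt.Println(err)
	}
}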

TestFunctional/parallel/MountCmd/any-port (2.3s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-016570 /tmp/TestFunctionalparallelMountCmdany-port3245019756/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726483338750366627" to /tmp/TestFunctionalparallelMountCmdany-port3245019756/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726483338750366627" to /tmp/TestFunctionalparallelMountCmdany-port3245019756/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726483338750366627" to /tmp/TestFunctionalparallelMountCmdany-port3245019756/001/test-1726483338750366627
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016570 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (379.596264ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 16 10:42 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 16 10:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 16 10:42 test-1726483338750366627
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh cat /mount-9p/test-1726483338750366627
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-016570 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-016570 replace --force -f testdata/busybox-mount-test.yaml: fork/exec /usr/local/bin/kubectl: exec format error (436.933µs)
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-016570 replace --force -f testdata/busybox-mount-test.yaml" : fork/exec /usr/local/bin/kubectl: exec format error
functional_test_mount_test.go:80: "TestFunctional/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016570 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (263.847191ms)

-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=999,access=any,msize=262144,trans=tcp,noextend,port=40535)
	total 2
	-rw-r--r-- 1 docker docker 24 Sep 16 10:42 created-by-test
	-rw-r--r-- 1 docker docker 24 Sep 16 10:42 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Sep 16 10:42 test-1726483338750366627
	cat: /mount-9p/pod-dates: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-amd64 -p functional-016570 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
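Note that the debug output above shows the 9p mount itself was healthy (the mount line and all three written files are present); only the kubectl-driven busybox step failed. A sketch of an in-guest equivalent of the `findmnt -T /mount-9p | grep 9p` probe, written against Linux's /proc mount table (an assumed environment, not code from the suite):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// has9pMount scans the mount table for a 9p filesystem mounted at target.
func has9pMount(target string) (bool, string, error) {
	f, err := os.Open("/proc/self/mounts")
	if err != nil {
		return false, "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Each line: <source> <mountpoint> <fstype> <options> <dump> <pass>
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == target && fields[2] == "9p" {
			return true, sc.Text(), nil
		}
	}
	return false, "", sc.Err()
}

func main() {
	ok, line, err := has9pMount("/mount-9p")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("9p mounted:", ok)
	if ok {
		fmt.Println(line)
	}
}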
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-016570 /tmp/TestFunctionalparallelMountCmdany-port3245019756/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-016570 /tmp/TestFunctionalparallelMountCmdany-port3245019756/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdany-port3245019756/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:40535
* Userspace file server: ufs starting
* Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port3245019756/001 to /mount-9p

* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

functional_test_mount_test.go:94: (dbg) [out/minikube-linux-amd64 mount -p functional-016570 /tmp/TestFunctionalparallelMountCmdany-port3245019756/001:/mount-9p --alsologtostderr -v=1] stderr:
I0916 10:42:18.802974   60192 out.go:345] Setting OutFile to fd 1 ...
I0916 10:42:18.803249   60192 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:18.803258   60192 out.go:358] Setting ErrFile to fd 2...
I0916 10:42:18.803262   60192 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:18.803500   60192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
I0916 10:42:18.803913   60192 mustload.go:65] Loading cluster: functional-016570
I0916 10:42:18.804382   60192 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 10:42:18.804946   60192 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
I0916 10:42:18.823189   60192 host.go:66] Checking if "functional-016570" exists ...
I0916 10:42:18.823516   60192 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0916 10:42:18.892667   60192 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:57 SystemTime:2024-09-16 10:42:18.881491015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0916 10:42:18.892845   60192 cli_runner.go:164] Run: docker network inspect functional-016570 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0916 10:42:18.921663   60192 out.go:177] * Mounting host path /tmp/TestFunctionalparallelMountCmdany-port3245019756/001 into VM as /mount-9p ...
I0916 10:42:18.923794   60192 out.go:177]   - Mount type:   9p
I0916 10:42:18.925459   60192 out.go:177]   - User ID:      docker
I0916 10:42:18.926770   60192 out.go:177]   - Group ID:     docker
I0916 10:42:18.928026   60192 out.go:177]   - Version:      9p2000.L
I0916 10:42:18.929509   60192 out.go:177]   - Message Size: 262144
I0916 10:42:18.930922   60192 out.go:177]   - Options:      map[]
I0916 10:42:18.932341   60192 out.go:177]   - Bind Address: 192.168.49.1:40535
I0916 10:42:18.933728   60192 out.go:177] * Userspace file server: 
I0916 10:42:18.933828   60192 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0916 10:42:18.933905   60192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
I0916 10:42:18.953657   60192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
I0916 10:42:19.074748   60192 mount.go:180] unmount for /mount-9p ran successfully
I0916 10:42:19.074810   60192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I0916 10:42:19.086469   60192 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=40535,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I0916 10:42:19.156279   60192 main.go:125] stdlog: ufs.go:141 connected
I0916 10:42:19.156746   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tversion tag 65535 msize 262144 version '9P2000.L'
I0916 10:42:19.156813   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rversion tag 65535 msize 262144 version '9P2000'
I0916 10:42:19.157069   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I0916 10:42:19.157147   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rattach tag 0 aqid (20fa08a fa6d15f8 'd')
I0916 10:42:19.157471   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 0
I0916 10:42:19.157616   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa08a fa6d15f8 'd') m d775 at 0 mt 1726483338 l 4096 t 0 d 0 ext )
I0916 10:42:19.159457   60192 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/.mount-process: {Name:mk04ea8ce5b2a26780b50bf6a4b50a29175109a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0916 10:42:19.159638   60192 mount.go:105] mount successful: ""
I0916 10:42:19.162178   60192 out.go:177] * Successfully mounted /tmp/TestFunctionalparallelMountCmdany-port3245019756/001 to /mount-9p
I0916 10:42:19.163896   60192 out.go:201] 
I0916 10:42:19.165719   60192 out.go:177] * NOTE: This process must stay alive for the mount to be accessible ...
I0916 10:42:20.160019   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 0
I0916 10:42:20.160158   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa08a fa6d15f8 'd') m d775 at 0 mt 1726483338 l 4096 t 0 d 0 ext )
I0916 10:42:20.160557   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Twalk tag 0 fid 0 newfid 1 
I0916 10:42:20.160604   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rwalk tag 0 
I0916 10:42:20.160835   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Topen tag 0 fid 1 mode 0
I0916 10:42:20.160902   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Ropen tag 0 qid (20fa08a fa6d15f8 'd') iounit 0
I0916 10:42:20.161118   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 0
I0916 10:42:20.161250   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa08a fa6d15f8 'd') m d775 at 0 mt 1726483338 l 4096 t 0 d 0 ext )
I0916 10:42:20.161443   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tread tag 0 fid 1 offset 0 count 262120
I0916 10:42:20.161621   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rread tag 0 count 258
I0916 10:42:20.161760   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tread tag 0 fid 1 offset 258 count 261862
I0916 10:42:20.161794   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rread tag 0 count 0
I0916 10:42:20.161910   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tread tag 0 fid 1 offset 258 count 262120
I0916 10:42:20.161937   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rread tag 0 count 0
I0916 10:42:20.162080   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0916 10:42:20.162124   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rwalk tag 0 (20fa08c fa6d15f8 '') 
I0916 10:42:20.162227   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 2
I0916 10:42:20.162319   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa08c fa6d15f8 '') m 644 at 0 mt 1726483338 l 24 t 0 d 0 ext )
I0916 10:42:20.162466   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 2
I0916 10:42:20.162546   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa08c fa6d15f8 '') m 644 at 0 mt 1726483338 l 24 t 0 d 0 ext )
I0916 10:42:20.162663   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tclunk tag 0 fid 2
I0916 10:42:20.162713   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rclunk tag 0
I0916 10:42:20.162841   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0916 10:42:20.162886   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rwalk tag 0 (20fa08b fa6d15f8 '') 
I0916 10:42:20.162997   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 2
I0916 10:42:20.163088   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa08b fa6d15f8 '') m 644 at 0 mt 1726483338 l 24 t 0 d 0 ext )
I0916 10:42:20.163218   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 2
I0916 10:42:20.163297   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa08b fa6d15f8 '') m 644 at 0 mt 1726483338 l 24 t 0 d 0 ext )
I0916 10:42:20.163416   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tclunk tag 0 fid 2
I0916 10:42:20.163443   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rclunk tag 0
I0916 10:42:20.163654   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Twalk tag 0 fid 0 newfid 2 0:'test-1726483338750366627' 
I0916 10:42:20.163713   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rwalk tag 0 (20fa08d fa6d15f8 '') 
I0916 10:42:20.163874   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 2
I0916 10:42:20.163969   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('test-1726483338750366627' 'jenkins' 'balintp' '' q (20fa08d fa6d15f8 '') m 644 at 0 mt 1726483338 l 24 t 0 d 0 ext )
I0916 10:42:20.164110   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 2
I0916 10:42:20.164209   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('test-1726483338750366627' 'jenkins' 'balintp' '' q (20fa08d fa6d15f8 '') m 644 at 0 mt 1726483338 l 24 t 0 d 0 ext )
I0916 10:42:20.164450   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tclunk tag 0 fid 2
I0916 10:42:20.164488   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rclunk tag 0
I0916 10:42:20.164631   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tread tag 0 fid 1 offset 258 count 262120
I0916 10:42:20.164674   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rread tag 0 count 0
I0916 10:42:20.164822   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tclunk tag 0 fid 1
I0916 10:42:20.164854   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rclunk tag 0
I0916 10:42:20.411842   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Twalk tag 0 fid 0 newfid 1 0:'test-1726483338750366627' 
I0916 10:42:20.411907   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rwalk tag 0 (20fa08d fa6d15f8 '') 
I0916 10:42:20.412119   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 1
I0916 10:42:20.412220   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('test-1726483338750366627' 'jenkins' 'balintp' '' q (20fa08d fa6d15f8 '') m 644 at 0 mt 1726483338 l 24 t 0 d 0 ext )
I0916 10:42:20.412396   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Twalk tag 0 fid 1 newfid 2 
I0916 10:42:20.412449   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rwalk tag 0 
I0916 10:42:20.412616   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Topen tag 0 fid 2 mode 0
I0916 10:42:20.412682   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Ropen tag 0 qid (20fa08d fa6d15f8 '') iounit 0
I0916 10:42:20.412804   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 1
I0916 10:42:20.412901   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('test-1726483338750366627' 'jenkins' 'balintp' '' q (20fa08d fa6d15f8 '') m 644 at 0 mt 1726483338 l 24 t 0 d 0 ext )
I0916 10:42:20.413041   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tread tag 0 fid 2 offset 0 count 262120
I0916 10:42:20.413099   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rread tag 0 count 24
I0916 10:42:20.413208   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tread tag 0 fid 2 offset 24 count 262120
I0916 10:42:20.413277   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rread tag 0 count 0
I0916 10:42:20.413408   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tread tag 0 fid 2 offset 24 count 262120
I0916 10:42:20.413440   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rread tag 0 count 0
I0916 10:42:20.413561   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tclunk tag 0 fid 2
I0916 10:42:20.413591   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rclunk tag 0
I0916 10:42:20.413704   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tclunk tag 0 fid 1
I0916 10:42:20.413729   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rclunk tag 0
I0916 10:42:20.675914   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 0
I0916 10:42:20.676055   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa08a fa6d15f8 'd') m d775 at 0 mt 1726483338 l 4096 t 0 d 0 ext )
I0916 10:42:20.676538   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Twalk tag 0 fid 0 newfid 1 
I0916 10:42:20.676601   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rwalk tag 0 
I0916 10:42:20.676782   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Topen tag 0 fid 1 mode 0
I0916 10:42:20.676850   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Ropen tag 0 qid (20fa08a fa6d15f8 'd') iounit 0
I0916 10:42:20.677003   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 0
I0916 10:42:20.677109   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('001' 'jenkins' 'balintp' '' q (20fa08a fa6d15f8 'd') m d775 at 0 mt 1726483338 l 4096 t 0 d 0 ext )
I0916 10:42:20.677319   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tread tag 0 fid 1 offset 0 count 262120
I0916 10:42:20.677489   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rread tag 0 count 258
I0916 10:42:20.677638   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tread tag 0 fid 1 offset 258 count 261862
I0916 10:42:20.677691   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rread tag 0 count 0
I0916 10:42:20.677826   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tread tag 0 fid 1 offset 258 count 262120
I0916 10:42:20.677861   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rread tag 0 count 0
I0916 10:42:20.677987   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I0916 10:42:20.678024   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rwalk tag 0 (20fa08c fa6d15f8 '') 
I0916 10:42:20.678141   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 2
I0916 10:42:20.678225   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa08c fa6d15f8 '') m 644 at 0 mt 1726483338 l 24 t 0 d 0 ext )
I0916 10:42:20.678354   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 2
I0916 10:42:20.678427   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'balintp' '' q (20fa08c fa6d15f8 '') m 644 at 0 mt 1726483338 l 24 t 0 d 0 ext )
I0916 10:42:20.678557   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tclunk tag 0 fid 2
I0916 10:42:20.678598   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rclunk tag 0
I0916 10:42:20.678735   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I0916 10:42:20.678769   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rwalk tag 0 (20fa08b fa6d15f8 '') 
I0916 10:42:20.678876   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 2
I0916 10:42:20.678967   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa08b fa6d15f8 '') m 644 at 0 mt 1726483338 l 24 t 0 d 0 ext )
I0916 10:42:20.679111   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 2
I0916 10:42:20.679209   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('created-by-test' 'jenkins' 'balintp' '' q (20fa08b fa6d15f8 '') m 644 at 0 mt 1726483338 l 24 t 0 d 0 ext )
I0916 10:42:20.679340   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tclunk tag 0 fid 2
I0916 10:42:20.679369   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rclunk tag 0
I0916 10:42:20.679488   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Twalk tag 0 fid 0 newfid 2 0:'test-1726483338750366627' 
I0916 10:42:20.679527   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rwalk tag 0 (20fa08d fa6d15f8 '') 
I0916 10:42:20.679659   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 2
I0916 10:42:20.679799   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('test-1726483338750366627' 'jenkins' 'balintp' '' q (20fa08d fa6d15f8 '') m 644 at 0 mt 1726483338 l 24 t 0 d 0 ext )
I0916 10:42:20.679943   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tstat tag 0 fid 2
I0916 10:42:20.680037   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rstat tag 0 st ('test-1726483338750366627' 'jenkins' 'balintp' '' q (20fa08d fa6d15f8 '') m 644 at 0 mt 1726483338 l 24 t 0 d 0 ext )
I0916 10:42:20.680163   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tclunk tag 0 fid 2
I0916 10:42:20.680194   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rclunk tag 0
I0916 10:42:20.680363   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tread tag 0 fid 1 offset 258 count 262120
I0916 10:42:20.680405   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rread tag 0 count 0
I0916 10:42:20.680557   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tclunk tag 0 fid 1
I0916 10:42:20.680600   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rclunk tag 0
I0916 10:42:20.681961   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I0916 10:42:20.682023   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rerror tag 0 ename 'file not found' ecode 0
I0916 10:42:20.934041   60192 main.go:125] stdlog: srv_conn.go:133 >>> 192.168.49.2:33514 Tclunk tag 0 fid 0
I0916 10:42:20.934108   60192 main.go:125] stdlog: srv_conn.go:190 <<< 192.168.49.2:33514 Rclunk tag 0
I0916 10:42:20.934684   60192 main.go:125] stdlog: ufs.go:147 disconnected
I0916 10:42:20.952761   60192 out.go:177] * Unmounting /mount-9p ...
I0916 10:42:20.954076   60192 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I0916 10:42:20.961068   60192 mount.go:180] unmount for /mount-9p ran successfully
I0916 10:42:20.961180   60192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/.mount-process: {Name:mk04ea8ce5b2a26780b50bf6a4b50a29175109a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0916 10:42:20.962866   60192 out.go:201] 
W0916 10:42:20.964253   60192 out.go:270] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I0916 10:42:20.965759   60192 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/any-port (2.30s)
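
The 9p trace above is the server side of the mount check: each pass re-reads the exported directory '001' (the Topen/Tread pairs on fid 1, 258 bytes of entries), walks to each of the three test files, stats it twice, and clunks the fid; the single Twalk to 'pod-dates' is answered with Rerror 'file not found', and the run then stops with MK_INTERRUPTED rather than any protocol error. For orientation, the Twalk/Topen/Tread/Tclunk cycle maps onto an ordinary directory pass over the mount point; the sketch below is illustrative Go, not minikube's code, and the /mount-9p path and file name are assumptions copied from the trace.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	mount := "/mount-9p" // the mount point unmounted at the end of the log above

	// Reading the directory corresponds to the Topen/Tread pairs on fid 1.
	entries, err := os.ReadDir(mount)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		// Each entry triggers a Twalk, two Tstat calls, and a Tclunk in the trace.
		info, err := e.Info()
		if err != nil {
			panic(err)
		}
		fmt.Printf("%-32s %4d bytes  mode %v\n", info.Name(), info.Size(), info.Mode())
	}

	// Reading one file is the Twalk/Topen/Tread/Tclunk sequence; the trace
	// shows Rread count 24 for this 24-byte test file.
	data, err := os.ReadFile(filepath.Join(mount, "test-1726483338750366627"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("read %d bytes\n", len(data))
}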

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (2.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-770465 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-770465 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": fork/exec /usr/local/bin/kubectl: exec format error (472.634µs)
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-770465 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": fork/exec /usr/local/bin/kubectl: exec format error
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-770465 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
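
The 'fork/exec /usr/local/bin/kubectl: exec format error' means the kernel refused to load the kubectl binary at all, which on this linux/amd64 agent points to a kubectl built for a different architecture (the start log below likewise warns that a cached kicbase image "is of wrong architecture" for this run). One way to confirm without executing the binary is to read its ELF header; this is a minimal diagnostic sketch, not part of the test suite.

package main

import (
	"debug/elf"
	"fmt"
)

func main() {
	f, err := elf.Open("/usr/local/bin/kubectl")
	if err != nil {
		// A non-ELF file (e.g. a Mach-O build or a truncated download) also
		// produces an exec format error at fork/exec time.
		fmt.Println("not a readable ELF binary:", err)
		return
	}
	defer f.Close()
	// On an amd64 host, anything other than EM_X86_64 cannot be executed.
	fmt.Println("ELF machine:", f.Machine, "class:", f.Class)
}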
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-770465
helpers_test.go:235: (dbg) docker inspect ha-770465:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf",
	        "Created": "2024-09-16T10:44:02.535590959Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 67096,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:44:02.647879467Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/hosts",
	        "LogPath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf-json.log",
	        "Name": "/ha-770465",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-770465:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-770465",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-770465",
	                "Source": "/var/lib/docker/volumes/ha-770465/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-770465",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-770465",
	                "name.minikube.sigs.k8s.io": "ha-770465",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44b97868fe538185a93dd3ffee226f783c7a36b13e0f3eef97b478a02c3be30d",
	            "SandboxKey": "/var/run/docker/netns/44b97868fe53",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-770465": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c95c64bb41bdebd7017cdb4d495e3e500618752ab547ea09aa27d1cdaf23b64d",
	                    "EndpointID": "7cdb8c3026b37e52aeed2849f3891bcd317a8955c9a3c33cd2c85ef8edba5112",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-770465",
	                        "c7d04b23d2ab"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
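
The inspect output shows a healthy container: running, with each exposed port published on an ephemeral 127.0.0.1 host port (8443/tcp on 32791 here). minikube resolves such mappings with a docker inspect Go template, as the "22/tcp" lookup in the start log below shows; the sketch that follows performs the same lookup for the API-server port and is illustrative only, with the container name taken from the output above.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template shape minikube uses below for "22/tcp", applied to 8443.
	tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-770465").Output()
	if err != nil {
		panic(err)
	}
	// For the run above this prints 32791, i.e. the API server is published
	// at 127.0.0.1:32791 on the host.
	fmt.Println("127.0.0.1:" + strings.TrimSpace(string(out)))
}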
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-770465 -n ha-770465
helpers_test.go:244: <<< TestMultiControlPlane/serial/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/NodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-770465 logs -n 25: (1.202602873s)
helpers_test.go:252: TestMultiControlPlane/serial/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| update-context | functional-016570                    | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	|                | update-context                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2               |                   |         |         |                     |                     |
	| image          | functional-016570 image ls           | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:42 UTC | 16 Sep 24 10:42 UTC |
	| delete         | -p functional-016570                 | functional-016570 | jenkins | v1.34.0 | 16 Sep 24 10:43 UTC | 16 Sep 24 10:43 UTC |
	| start          | -p ha-770465 --wait=true             | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:43 UTC | 16 Sep 24 10:45 UTC |
	|                | --memory=2200 --ha                   |                   |         |         |                     |                     |
	|                | -v=7 --alsologtostderr               |                   |         |         |                     |                     |
	|                | --driver=docker                      |                   |         |         |                     |                     |
	|                | --container-runtime=containerd       |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- apply -f             | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:45 UTC | 16 Sep 24 10:45 UTC |
	|                | ./testdata/ha/ha-pod-dns-test.yaml   |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- rollout status       | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:45 UTC | 16 Sep 24 10:46 UTC |
	|                | deployment/busybox                   |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- get pods -o          | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- get pods -o          | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | jsonpath='{.items[*].metadata.name}' |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- exec                 | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | busybox-7dff88458-845rc --           |                   |         |         |                     |                     |
	|                | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- exec                 | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | busybox-7dff88458-dlndh --           |                   |         |         |                     |                     |
	|                | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- exec                 | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | busybox-7dff88458-klfw4 --           |                   |         |         |                     |                     |
	|                | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- exec                 | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | busybox-7dff88458-845rc --           |                   |         |         |                     |                     |
	|                | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- exec                 | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | busybox-7dff88458-dlndh --           |                   |         |         |                     |                     |
	|                | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- exec                 | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | busybox-7dff88458-klfw4 --           |                   |         |         |                     |                     |
	|                | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- exec                 | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | busybox-7dff88458-845rc -- nslookup  |                   |         |         |                     |                     |
	|                | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- exec                 | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | busybox-7dff88458-dlndh -- nslookup  |                   |         |         |                     |                     |
	|                | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- exec                 | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | busybox-7dff88458-klfw4 -- nslookup  |                   |         |         |                     |                     |
	|                | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- get pods -o          | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | jsonpath='{.items[*].metadata.name}' |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- exec                 | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | busybox-7dff88458-845rc              |                   |         |         |                     |                     |
	|                | -- sh -c nslookup                    |                   |         |         |                     |                     |
	|                | host.minikube.internal | awk         |                   |         |         |                     |                     |
	|                | 'NR==5' | cut -d' ' -f3              |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- exec                 | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | busybox-7dff88458-845rc -- sh        |                   |         |         |                     |                     |
	|                | -c ping -c 1 192.168.49.1            |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- exec                 | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | busybox-7dff88458-dlndh              |                   |         |         |                     |                     |
	|                | -- sh -c nslookup                    |                   |         |         |                     |                     |
	|                | host.minikube.internal | awk         |                   |         |         |                     |                     |
	|                | 'NR==5' | cut -d' ' -f3              |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- exec                 | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | busybox-7dff88458-dlndh -- sh        |                   |         |         |                     |                     |
	|                | -c ping -c 1 192.168.49.1            |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- exec                 | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | busybox-7dff88458-klfw4              |                   |         |         |                     |                     |
	|                | -- sh -c nslookup                    |                   |         |         |                     |                     |
	|                | host.minikube.internal | awk         |                   |         |         |                     |                     |
	|                | 'NR==5' | cut -d' ' -f3              |                   |         |         |                     |                     |
	| kubectl        | -p ha-770465 -- exec                 | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | busybox-7dff88458-klfw4 -- sh        |                   |         |         |                     |                     |
	|                | -c ping -c 1 192.168.49.1            |                   |         |         |                     |                     |
	| node           | add -p ha-770465 -v=7                | ha-770465         | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|                | --alsologtostderr                    |                   |         |         |                     |                     |
	|----------------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:43:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:43:57.194814   66415 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:43:57.195071   66415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:43:57.195080   66415 out.go:358] Setting ErrFile to fd 2...
	I0916 10:43:57.195084   66415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:43:57.195271   66415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:43:57.195892   66415 out.go:352] Setting JSON to false
	I0916 10:43:57.196843   66415 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1581,"bootTime":1726481856,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:43:57.196943   66415 start.go:139] virtualization: kvm guest
	I0916 10:43:57.199443   66415 out.go:177] * [ha-770465] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:43:57.201260   66415 notify.go:220] Checking for updates...
	I0916 10:43:57.201316   66415 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:43:57.203072   66415 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:43:57.204887   66415 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:43:57.206727   66415 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:43:57.208588   66415 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:43:57.210353   66415 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:43:57.212180   66415 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:43:57.235492   66415 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:43:57.235632   66415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:43:57.285551   66415 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:43:57.276396234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:43:57.285662   66415 docker.go:318] overlay module found
	I0916 10:43:57.287818   66415 out.go:177] * Using the docker driver based on user configuration
	I0916 10:43:57.289265   66415 start.go:297] selected driver: docker
	I0916 10:43:57.289278   66415 start.go:901] validating driver "docker" against <nil>
	I0916 10:43:57.289304   66415 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:43:57.290089   66415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:43:57.337613   66415 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:43:57.328917373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:43:57.337780   66415 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:43:57.338033   66415 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:43:57.339771   66415 out.go:177] * Using Docker driver with root privileges
	I0916 10:43:57.341286   66415 cni.go:84] Creating CNI manager for ""
	I0916 10:43:57.341356   66415 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 10:43:57.341369   66415 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:43:57.341446   66415 start.go:340] cluster config:
	{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:43:57.342936   66415 out.go:177] * Starting "ha-770465" primary control-plane node in "ha-770465" cluster
	I0916 10:43:57.344192   66415 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:43:57.345502   66415 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:43:57.346627   66415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:43:57.346662   66415 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:43:57.346672   66415 cache.go:56] Caching tarball of preloaded images
	I0916 10:43:57.346727   66415 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:43:57.346745   66415 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:43:57.346753   66415 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:43:57.347073   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:43:57.347098   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json: {Name:mkb67ba9c685f6e37a3398a22655544c40d6e0f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 10:43:57.366525   66415 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:43:57.366547   66415 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:43:57.366647   66415 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:43:57.366661   66415 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:43:57.366666   66415 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:43:57.366673   66415 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:43:57.366680   66415 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:43:57.367964   66415 image.go:273] response: 
	I0916 10:43:57.421502   66415 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:43:57.421548   66415 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:43:57.421585   66415 start.go:360] acquireMachinesLock for ha-770465: {Name:mk79463d2cf034afd16e2c9f41174a568f4314aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:43:57.421697   66415 start.go:364] duration metric: took 92.559µs to acquireMachinesLock for "ha-770465"
	I0916 10:43:57.421735   66415 start.go:93] Provisioning new machine with config: &{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:43:57.421827   66415 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:43:57.423956   66415 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:43:57.424303   66415 start.go:159] libmachine.API.Create for "ha-770465" (driver="docker")
	I0916 10:43:57.424342   66415 client.go:168] LocalClient.Create starting
	I0916 10:43:57.424443   66415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:43:57.424488   66415 main.go:141] libmachine: Decoding PEM data...
	I0916 10:43:57.424510   66415 main.go:141] libmachine: Parsing certificate...
	I0916 10:43:57.424584   66415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:43:57.424610   66415 main.go:141] libmachine: Decoding PEM data...
	I0916 10:43:57.424625   66415 main.go:141] libmachine: Parsing certificate...
	I0916 10:43:57.425030   66415 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:43:57.441724   66415 cli_runner.go:211] docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:43:57.441785   66415 network_create.go:284] running [docker network inspect ha-770465] to gather additional debugging logs...
	I0916 10:43:57.441802   66415 cli_runner.go:164] Run: docker network inspect ha-770465
	W0916 10:43:57.457787   66415 cli_runner.go:211] docker network inspect ha-770465 returned with exit code 1
	I0916 10:43:57.457820   66415 network_create.go:287] error running [docker network inspect ha-770465]: docker network inspect ha-770465: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-770465 not found
	I0916 10:43:57.457832   66415 network_create.go:289] output of [docker network inspect ha-770465]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-770465 not found
	
	** /stderr **
	I0916 10:43:57.457910   66415 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:43:57.475572   66415 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000a22d80}
	I0916 10:43:57.475632   66415 network_create.go:124] attempt to create docker network ha-770465 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 10:43:57.475687   66415 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-770465 ha-770465
	I0916 10:43:57.536989   66415 network_create.go:108] docker network ha-770465 192.168.49.0/24 created
	I0916 10:43:57.537020   66415 kic.go:121] calculated static IP "192.168.49.2" for the "ha-770465" container
	I0916 10:43:57.537082   66415 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:43:57.553026   66415 cli_runner.go:164] Run: docker volume create ha-770465 --label name.minikube.sigs.k8s.io=ha-770465 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:43:57.570659   66415 oci.go:103] Successfully created a docker volume ha-770465
	I0916 10:43:57.570756   66415 cli_runner.go:164] Run: docker run --rm --name ha-770465-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-770465 --entrypoint /usr/bin/test -v ha-770465:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:43:58.090213   66415 oci.go:107] Successfully prepared a docker volume ha-770465
	I0916 10:43:58.090264   66415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:43:58.090286   66415 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:43:58.090352   66415 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-770465:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:44:02.470698   66415 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-770465:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.38028137s)
	I0916 10:44:02.470729   66415 kic.go:203] duration metric: took 4.3804387s to extract preloaded images to volume ...
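The preload step above streams an lz4-compressed image tarball into the named volume by running tar inside the kicbase image, so file ownership matches the node. The same command, reformatted (image digest omitted for brevity):

    # Extract preloaded container images into the ha-770465 volume.
    docker run --rm --entrypoint /usr/bin/tar \
      -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro \
      -v ha-770465:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644 \
      -I lz4 -xf /preloaded.tar -C /extractDir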
	W0916 10:44:02.470887   66415 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:44:02.471006   66415 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:44:02.519215   66415 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-770465 --name ha-770465 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-770465 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-770465 --network ha-770465 --ip 192.168.49.2 --volume ha-770465:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:44:02.807062   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Running}}
	I0916 10:44:02.824971   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:44:02.843878   66415 cli_runner.go:164] Run: docker exec ha-770465 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:44:02.893080   66415 oci.go:144] the created container "ha-770465" has a running status.
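Broken out flag by flag, the node-container start above reads as follows (labels omitted). The "--publish=127.0.0.1::PORT" form lets Docker pick an ephemeral host port, which is why SSH later dials 127.0.0.1:32788:

    # Start the privileged node container on the cluster network with a
    # static IP; SSH (22), the API server (8443) and the other node ports
    # are published on loopback-only ephemeral host ports.
    docker run -d -t --privileged \
      --security-opt seccomp=unconfined \
      --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run \
      -v /lib/modules:/lib/modules:ro \
      --hostname ha-770465 --name ha-770465 \
      --network ha-770465 --ip 192.168.49.2 \
      --volume ha-770465:/var \
      --memory=2200mb --cpus=2 \
      -e container=docker \
      --expose 8443 \
      --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
      --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644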
	I0916 10:44:02.893111   66415 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa...
	I0916 10:44:03.031285   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:44:03.031333   66415 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:44:03.057161   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:44:03.074631   66415 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:44:03.074653   66415 kic_runner.go:114] Args: [docker exec --privileged ha-770465 chown docker:docker /home/docker/.ssh/authorized_keys]
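The SSH bootstrap above (key generation, authorized_keys install, chown) can be reproduced by hand; a minimal sketch in which "docker cp" stands in for minikube's internal file copier:

    # Generate the machine key and install the public half for the docker
    # user inside the already-running node container.
    ssh-keygen -t rsa -N "" -f ./id_rsa
    docker exec ha-770465 mkdir -p /home/docker/.ssh
    docker cp ./id_rsa.pub ha-770465:/home/docker/.ssh/authorized_keys
    docker exec --privileged ha-770465 chown docker:docker /home/docker/.ssh/authorized_keys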
	I0916 10:44:03.118992   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:44:03.139540   66415 machine.go:93] provisionDockerMachine start ...
	I0916 10:44:03.139648   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:03.165705   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:03.165984   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 10:44:03.165999   66415 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:44:03.166893   66415 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38230->127.0.0.1:32788: read: connection reset by peer
	I0916 10:44:06.299158   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465
	
	I0916 10:44:06.299186   66415 ubuntu.go:169] provisioning hostname "ha-770465"
	I0916 10:44:06.299240   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:06.316285   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:06.316491   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 10:44:06.316513   66415 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-770465 && echo "ha-770465" | sudo tee /etc/hostname
	I0916 10:44:06.458736   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465
	
	I0916 10:44:06.458818   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:06.475716   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:06.475931   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 10:44:06.475948   66415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-770465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-770465/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-770465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:44:06.611976   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:44:06.612006   66415 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:44:06.612041   66415 ubuntu.go:177] setting up certificates
	I0916 10:44:06.612055   66415 provision.go:84] configureAuth start
	I0916 10:44:06.612119   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465
	I0916 10:44:06.629965   66415 provision.go:143] copyHostCerts
	I0916 10:44:06.630000   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:44:06.630031   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:44:06.630040   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:44:06.630104   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:44:06.630182   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:44:06.630200   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:44:06.630206   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:44:06.630229   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:44:06.630271   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:44:06.630289   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:44:06.630292   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:44:06.630312   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:44:06.630364   66415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.ha-770465 san=[127.0.0.1 192.168.49.2 ha-770465 localhost minikube]
	I0916 10:44:07.000349   66415 provision.go:177] copyRemoteCerts
	I0916 10:44:07.000421   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:44:07.000454   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:07.016954   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:07.112204   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:44:07.112262   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:44:07.133619   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:44:07.133693   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 10:44:07.154592   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:44:07.154659   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:44:07.176443   66415 provision.go:87] duration metric: took 564.373064ms to configureAuth
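configureAuth issues a CA-signed server certificate whose SANs match the san=[...] list logged above. minikube does this in Go, so the openssl commands below are only an illustrative equivalent (bash and OpenSSL 1.1+ assumed):

    # Issue a server cert signed by the minikube CA, with the SANs from the log.
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr -subj "/O=jenkins.ha-770465"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:ha-770465,DNS:localhost,DNS:minikube")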
	I0916 10:44:07.176469   66415 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:44:07.176636   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:44:07.176648   66415 machine.go:96] duration metric: took 4.037078889s to provisionDockerMachine
	I0916 10:44:07.176654   66415 client.go:171] duration metric: took 9.752302538s to LocalClient.Create
	I0916 10:44:07.176673   66415 start.go:167] duration metric: took 9.752388319s to libmachine.API.Create "ha-770465"
	I0916 10:44:07.176684   66415 start.go:293] postStartSetup for "ha-770465" (driver="docker")
	I0916 10:44:07.176697   66415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:44:07.176737   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:44:07.176783   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:07.193817   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:07.288395   66415 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:44:07.291547   66415 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:44:07.291585   66415 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:44:07.291593   66415 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:44:07.291600   66415 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:44:07.291610   66415 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:44:07.291661   66415 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:44:07.291787   66415 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:44:07.291800   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:44:07.291895   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:44:07.299886   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:44:07.321632   66415 start.go:296] duration metric: took 144.925404ms for postStartSetup
	I0916 10:44:07.321943   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465
	I0916 10:44:07.339383   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:44:07.339676   66415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:44:07.339718   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:07.356862   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:07.448433   66415 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:44:07.452607   66415 start.go:128] duration metric: took 10.030761291s to createHost
	I0916 10:44:07.452643   66415 start.go:83] releasing machines lock for "ha-770465", held for 10.030930716s
	I0916 10:44:07.452715   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465
	I0916 10:44:07.470126   66415 ssh_runner.go:195] Run: cat /version.json
	I0916 10:44:07.470159   66415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:44:07.470170   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:07.470211   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:07.487483   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:07.488822   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:07.579262   66415 ssh_runner.go:195] Run: systemctl --version
	I0916 10:44:07.583511   66415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:44:07.660764   66415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:44:07.684454   66415 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:44:07.684520   66415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:44:07.710536   66415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
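The two find one-liners above first patch the loopback CNI config (injecting a "name" field and pinning cniVersion to 1.0.0), then park competing bridge/podman configs under a .mk_disabled suffix. Reformatted with shell escaping restored:

    # Normalize the loopback CNI config.
    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*loopback.conf*' -not -name '*.mk_disabled' \
      -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/\"type\": \"loopback\"/i \ \ \ \ \"name\": \"loopback\",' {} ) && sudo sed -i 's|\"cniVersion\": \".*\"|\"cniVersion\": \"1.0.0\"|g' {}" \;
    # Disable other bridge CNI configs by renaming them out of the way.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'sudo mv {} {}.mk_disabled' \;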
	I0916 10:44:07.710556   66415 start.go:495] detecting cgroup driver to use...
	I0916 10:44:07.710597   66415 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:44:07.710645   66415 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:44:07.721841   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:44:07.732369   66415 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:44:07.732417   66415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:44:07.744738   66415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:44:07.757954   66415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:44:07.830894   66415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:44:07.906423   66415 docker.go:233] disabling docker service ...
	I0916 10:44:07.906481   66415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:44:07.923643   66415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:44:07.933929   66415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:44:08.013748   66415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:44:08.085472   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:44:08.096207   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:44:08.111049   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:44:08.120105   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:44:08.129009   66415 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:44:08.129067   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:44:08.138760   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:44:08.147708   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:44:08.156735   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:44:08.165815   66415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:44:08.174496   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:44:08.183615   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:44:08.192589   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:44:08.201635   66415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:44:08.209166   66415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:44:08.216698   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:44:08.289661   66415 ssh_runner.go:195] Run: sudo systemctl restart containerd
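Condensed, the containerd reconfiguration above comes down to pointing crictl at the containerd socket, switching the runtime to the cgroupfs driver (matching the "cgroupfs" driver detected on the host), setting the pause:3.10 sandbox image, and restarting:

    # Point crictl at containerd.
    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
    # Use cgroupfs rather than the systemd cgroup driver, and pause:3.10.
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml
    # Apply.
    sudo systemctl daemon-reload && sudo systemctl restart containerd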
	I0916 10:44:08.399092   66415 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:44:08.399168   66415 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:44:08.402696   66415 start.go:563] Will wait 60s for crictl version
	I0916 10:44:08.402742   66415 ssh_runner.go:195] Run: which crictl
	I0916 10:44:08.405875   66415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:44:08.438290   66415 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:44:08.438384   66415 ssh_runner.go:195] Run: containerd --version
	I0916 10:44:08.459416   66415 ssh_runner.go:195] Run: containerd --version
	I0916 10:44:08.484059   66415 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:44:08.485880   66415 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:44:08.502600   66415 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:44:08.506126   66415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:44:08.516729   66415 kubeadm.go:883] updating cluster {Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:44:08.516867   66415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:44:08.516917   66415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:44:08.547534   66415 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:44:08.547554   66415 containerd.go:534] Images already preloaded, skipping extraction
	I0916 10:44:08.547603   66415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:44:08.579979   66415 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:44:08.580000   66415 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:44:08.580007   66415 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0916 10:44:08.580095   66415 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-770465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:44:08.580150   66415 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:44:08.612440   66415 cni.go:84] Creating CNI manager for ""
	I0916 10:44:08.612464   66415 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:44:08.612476   66415 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:44:08.612503   66415 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-770465 NodeName:ha-770465 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:44:08.612664   66415 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-770465"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
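	A config like the one above can be checked without touching the node. Both subcommands below are standard kubeadm; the migrate step is exactly what the deprecation warnings later in this log suggest for the v1beta3 apiVersion:

    # Validate the generated config without changing anything.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
    # Rewrite the deprecated kubeadm.k8s.io/v1beta3 spec to the current API.
    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml --new-config kubeadm-new.yaml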
	
	I0916 10:44:08.612691   66415 kube-vip.go:115] generating kube-vip config ...
	I0916 10:44:08.612737   66415 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:44:08.623862   66415 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:44:08.623951   66415 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
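	kube-vip can emit an equivalent static-pod manifest itself. A sketch using the image version from the log; the flag names follow kube-vip's documented "manifest pod" subcommand and should be treated as assumptions, since minikube templates this YAML directly:

    # Generate a kube-vip static pod manifest for the control-plane VIP.
    docker run --rm --network host ghcr.io/kube-vip/kube-vip:v0.8.0 \
      manifest pod \
      --interface eth0 \
      --address 192.168.49.254 \
      --controlplane \
      --arp \
      --leaderElection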
	I0916 10:44:08.623996   66415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:44:08.631955   66415 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:44:08.632030   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 10:44:08.639833   66415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 10:44:08.655913   66415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:44:08.673288   66415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0916 10:44:08.690703   66415 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0916 10:44:08.707684   66415 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:44:08.710984   66415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:44:08.721537   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:44:08.797240   66415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:44:08.810193   66415 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465 for IP: 192.168.49.2
	I0916 10:44:08.810217   66415 certs.go:194] generating shared ca certs ...
	I0916 10:44:08.810235   66415 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:08.810405   66415 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:44:08.810474   66415 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:44:08.810489   66415 certs.go:256] generating profile certs ...
	I0916 10:44:08.810562   66415 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key
	I0916 10:44:08.810586   66415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt with IP's: []
	I0916 10:44:09.290023   66415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt ...
	I0916 10:44:09.290065   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt: {Name:mk3f167f76dda721d4d80ee048f18145ce2629ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:09.290248   66415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key ...
	I0916 10:44:09.290265   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key: {Name:mk6ced1c16707f60b003e2ae9bbcd7fda238e598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:09.290343   66415 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.9688a104
	I0916 10:44:09.290357   66415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.9688a104 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 10:44:09.664203   66415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.9688a104 ...
	I0916 10:44:09.664242   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.9688a104: {Name:mkad1b6852c8388971568713edf6b18ce679ff85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:09.664435   66415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.9688a104 ...
	I0916 10:44:09.664455   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.9688a104: {Name:mk5087bee3d1e77d2ebdef457c71d782601e19c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:09.664555   66415 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.9688a104 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt
	I0916 10:44:09.664671   66415 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.9688a104 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key
	I0916 10:44:09.664757   66415 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key
	I0916 10:44:09.664778   66415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt with IP's: []
	I0916 10:44:09.828335   66415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt ...
	I0916 10:44:09.828371   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt: {Name:mk7f0ffcb83dc64ecaf281ed8f885cb7c5ec4cc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:09.828542   66415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key ...
	I0916 10:44:09.828554   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key: {Name:mk73142a656af1c1c1d3237c115a645da1705db4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:09.828625   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:44:09.828641   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:44:09.828654   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:44:09.828667   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:44:09.828680   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:44:09.828692   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:44:09.828704   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:44:09.828715   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:44:09.828764   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:44:09.828797   66415 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:44:09.828807   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:44:09.828830   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:44:09.828854   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:44:09.828874   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:44:09.828909   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:44:09.828938   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:44:09.828951   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:09.828963   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:44:09.829537   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:44:09.851991   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:44:09.874486   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:44:09.896556   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:44:09.917869   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:44:09.939801   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:44:09.962103   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:44:09.985070   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:44:10.007372   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:44:10.028974   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:44:10.050458   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:44:10.073083   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:44:10.089629   66415 ssh_runner.go:195] Run: openssl version
	I0916 10:44:10.094826   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:44:10.103639   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:44:10.106923   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:44:10.106980   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:44:10.113257   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:44:10.121955   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:44:10.130807   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:10.134190   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:10.134258   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:10.140652   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:44:10.149294   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:44:10.158265   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:44:10.161562   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:44:10.161623   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:44:10.167931   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
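	The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is found via a symlink named <subject-hash>.0 (here 3ec20f2e.0, b5213941.0 and 51391683.0). In short:

    # Link a CA certificate under its subject-hash name so OpenSSL can find it.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"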
	I0916 10:44:10.176485   66415 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:44:10.179545   66415 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:44:10.179597   66415 kubeadm.go:392] StartCluster: {Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:44:10.179686   66415 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:44:10.179761   66415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:44:10.212206   66415 cri.go:89] found id: ""
	I0916 10:44:10.212258   66415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:44:10.220696   66415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:44:10.228773   66415 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:44:10.228835   66415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:44:10.236821   66415 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:44:10.236839   66415 kubeadm.go:157] found existing configuration files:
	
	I0916 10:44:10.236876   66415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:44:10.244721   66415 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:44:10.244770   66415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:44:10.252531   66415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:44:10.260482   66415 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:44:10.260533   66415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:44:10.268539   66415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:44:10.276817   66415 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:44:10.276882   66415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:44:10.285223   66415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:44:10.293714   66415 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:44:10.293780   66415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 10:44:10.301921   66415 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:44:10.336843   66415 kubeadm.go:310] W0916 10:44:10.336221    1157 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:44:10.337371   66415 kubeadm.go:310] W0916 10:44:10.336810    1157 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:44:10.354303   66415 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:44:10.405538   66415 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:44:20.311517   66415 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:44:20.311605   66415 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:44:20.311693   66415 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:44:20.311810   66415 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:44:20.311882   66415 kubeadm.go:310] OS: Linux
	I0916 10:44:20.311940   66415 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:44:20.311981   66415 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:44:20.312046   66415 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:44:20.312118   66415 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:44:20.312193   66415 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:44:20.312273   66415 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:44:20.312334   66415 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:44:20.312377   66415 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:44:20.312417   66415 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:44:20.312481   66415 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:44:20.312563   66415 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:44:20.312673   66415 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:44:20.312768   66415 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:44:20.314390   66415 out.go:235]   - Generating certificates and keys ...
	I0916 10:44:20.314466   66415 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:44:20.314534   66415 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:44:20.314617   66415 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:44:20.314683   66415 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:44:20.314735   66415 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:44:20.314775   66415 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:44:20.314820   66415 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:44:20.314906   66415 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-770465 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:44:20.314953   66415 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:44:20.315060   66415 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-770465 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:44:20.315124   66415 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:44:20.315179   66415 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:44:20.315218   66415 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:44:20.315274   66415 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:44:20.315317   66415 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:44:20.315371   66415 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:44:20.315416   66415 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:44:20.315471   66415 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:44:20.315542   66415 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:44:20.315622   66415 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:44:20.315677   66415 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:44:20.317306   66415 out.go:235]   - Booting up control plane ...
	I0916 10:44:20.317397   66415 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:44:20.317490   66415 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:44:20.317569   66415 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:44:20.317697   66415 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:44:20.317800   66415 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:44:20.317868   66415 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:44:20.317994   66415 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:44:20.318090   66415 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:44:20.318142   66415 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.612816ms
	I0916 10:44:20.318210   66415 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:44:20.318260   66415 kubeadm.go:310] [api-check] The API server is healthy after 5.986593008s
	I0916 10:44:20.318352   66415 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:44:20.318465   66415 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:44:20.318520   66415 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:44:20.318655   66415 kubeadm.go:310] [mark-control-plane] Marking the node ha-770465 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:44:20.318704   66415 kubeadm.go:310] [bootstrap-token] Using token: sszzzq.es5jj49460nx8z5d
	I0916 10:44:20.320889   66415 out.go:235]   - Configuring RBAC rules ...
	I0916 10:44:20.320981   66415 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:44:20.321068   66415 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:44:20.321189   66415 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:44:20.321331   66415 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:44:20.321472   66415 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:44:20.321564   66415 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:44:20.321699   66415 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:44:20.321739   66415 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:44:20.321786   66415 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:44:20.321792   66415 kubeadm.go:310] 
	I0916 10:44:20.321847   66415 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:44:20.321853   66415 kubeadm.go:310] 
	I0916 10:44:20.321916   66415 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:44:20.321922   66415 kubeadm.go:310] 
	I0916 10:44:20.321947   66415 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:44:20.322004   66415 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:44:20.322052   66415 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:44:20.322057   66415 kubeadm.go:310] 
	I0916 10:44:20.322115   66415 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:44:20.322125   66415 kubeadm.go:310] 
	I0916 10:44:20.322179   66415 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:44:20.322186   66415 kubeadm.go:310] 
	I0916 10:44:20.322232   66415 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:44:20.322295   66415 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:44:20.322357   66415 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:44:20.322363   66415 kubeadm.go:310] 
	I0916 10:44:20.322431   66415 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:44:20.322499   66415 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:44:20.322505   66415 kubeadm.go:310] 
	I0916 10:44:20.322589   66415 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sszzzq.es5jj49460nx8z5d \
	I0916 10:44:20.322679   66415 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 10:44:20.322699   66415 kubeadm.go:310] 	--control-plane 
	I0916 10:44:20.322704   66415 kubeadm.go:310] 
	I0916 10:44:20.322795   66415 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:44:20.322805   66415 kubeadm.go:310] 
	I0916 10:44:20.322911   66415 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sszzzq.es5jj49460nx8z5d \
	I0916 10:44:20.323044   66415 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
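The `--discovery-token-ca-cert-hash` printed above is how a joining node pins the cluster CA during TLS bootstrap: it is a SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info (SPKI). A minimal Go sketch of recomputing that value from a CA certificate follows; the path is an assumption (kubeadm's default control-plane location), since minikube keeps its copies under /var/lib/minikube/certs.

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    )

    func main() {
    	// Path is an assumption: kubeadm's default CA location on a control plane.
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		log.Fatal("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }

Run against the CA that issued this cluster's certs, the output should match the sha256:98a702... value in the join commands above.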
	I0916 10:44:20.323056   66415 cni.go:84] Creating CNI manager for ""
	I0916 10:44:20.323064   66415 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:44:20.324451   66415 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:44:20.325608   66415 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:44:20.329718   66415 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:44:20.329735   66415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:44:20.346626   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:44:20.533947   66415 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:44:20.534024   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:20.534044   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-770465 minikube.k8s.io/updated_at=2024_09_16T10_44_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-770465 minikube.k8s.io/primary=true
	I0916 10:44:20.541024   66415 ops.go:34] apiserver oom_adj: -16
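The oom_adj value of -16 logged here means the kernel's OOM killer will strongly prefer other processes over the apiserver under memory pressure. A tiny sketch of the same probe the log performs two lines earlier (it simply shells out, mirroring the logged `cat /proc/$(pgrep kube-apiserver)/oom_adj`):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Mirrors the logged probe: cat /proc/$(pgrep kube-apiserver)/oom_adj
    	out, err := exec.Command("/bin/bash", "-c",
    		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
    }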
	I0916 10:44:20.648080   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:21.148715   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:21.648948   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:22.148928   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:22.649021   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:23.148765   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:23.648809   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:24.148424   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:24.238563   66415 kubeadm.go:1113] duration metric: took 3.704600298s to wait for elevateKubeSystemPrivileges
	I0916 10:44:24.238597   66415 kubeadm.go:394] duration metric: took 14.059004214s to StartCluster
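The roughly half-second `kubectl get sa default` loop above is the readiness gate for the RBAC step: the default ServiceAccount only exists once the controller-manager's service-account controller is running, so its appearance signals that the minikube-rbac ClusterRoleBinding can take effect. A minimal re-creation of that wait loop; the kubectl path and kubeconfig are taken from the log, while the two-minute deadline is an assumption.

    package main

    import (
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl" // path from the log
    	deadline := time.Now().Add(2 * time.Minute)             // deadline is an assumption
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
    		if err == nil {
    			log.Println("default ServiceAccount exists; kube-system is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
    	}
    	log.Fatal("timed out waiting for the default ServiceAccount")
    }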
	I0916 10:44:24.238614   66415 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:24.238673   66415 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:44:24.239304   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:24.239525   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:44:24.239543   66415 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:44:24.239518   66415 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:44:24.239634   66415 addons.go:69] Setting default-storageclass=true in profile "ha-770465"
	I0916 10:44:24.239647   66415 start.go:241] waiting for startup goroutines ...
	I0916 10:44:24.239652   66415 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-770465"
	I0916 10:44:24.239624   66415 addons.go:69] Setting storage-provisioner=true in profile "ha-770465"
	I0916 10:44:24.239675   66415 addons.go:234] Setting addon storage-provisioner=true in "ha-770465"
	I0916 10:44:24.239717   66415 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:44:24.239758   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:44:24.239991   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:44:24.240253   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:44:24.260439   66415 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:44:24.260566   66415 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:44:24.260784   66415 kapi.go:59] client config for ha-770465: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:44:24.261175   66415 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:44:24.261377   66415 addons.go:234] Setting addon default-storageclass=true in "ha-770465"
	I0916 10:44:24.261414   66415 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:44:24.261756   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:44:24.261813   66415 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:44:24.261829   66415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:44:24.261871   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:24.283034   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:24.283292   66415 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:44:24.283312   66415 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:44:24.283369   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:24.302992   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
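Because the kic container publishes SSH as `127.0.0.1::22` (an ephemeral host port), minikube must ask Docker which port was actually assigned before it can open an SSH session; that is what the `docker container inspect -f ...HostPort...` calls above do. A small sketch of the same lookup, using the exact Go-template query from the log:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    // hostSSHPort returns the host port Docker mapped to the container's 22/tcp.
    func hostSSHPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := hostSSHPort("ha-770465")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("ssh to 127.0.0.1:" + port) // e.g. 32788 in this run
    }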
	I0916 10:44:24.438174   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:44:24.543471   66415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:44:24.548575   66415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:44:25.027937   66415 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
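The sed pipeline a few lines up rewrites the coredns ConfigMap in place (`kubectl ... replace -f -`). Reconstructed from the sed expressions rather than read back from the cluster, the resulting Corefile gains a `log` directive plus a `hosts` stanza along these lines, so that host.minikube.internal resolves to the Docker network gateway from inside pods while all other queries fall through to the existing forward plugin:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }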
	I0916 10:44:25.028051   66415 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:44:25.028071   66415 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:44:25.028144   66415 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 10:44:25.028155   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:25.028165   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:25.028170   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:25.038620   66415 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 10:44:25.039381   66415 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:44:25.039400   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:25.039411   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:25.039417   66415 round_trippers.go:473]     Content-Type: application/json
	I0916 10:44:25.039425   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:25.042229   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:25.269771   66415 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 10:44:25.271062   66415 addons.go:510] duration metric: took 1.031511426s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 10:44:25.271149   66415 start.go:246] waiting for cluster config update ...
	I0916 10:44:25.271190   66415 start.go:255] writing updated cluster config ...
	I0916 10:44:25.273007   66415 out.go:201] 
	I0916 10:44:25.274609   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:44:25.274679   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:44:25.276441   66415 out.go:177] * Starting "ha-770465-m02" control-plane node in "ha-770465" cluster
	I0916 10:44:25.278238   66415 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:44:25.279933   66415 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:44:25.281656   66415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:44:25.281682   66415 cache.go:56] Caching tarball of preloaded images
	I0916 10:44:25.281688   66415 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:44:25.281812   66415 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:44:25.281827   66415 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:44:25.281905   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	W0916 10:44:25.301883   66415 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:44:25.301902   66415 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:44:25.301994   66415 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:44:25.302009   66415 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:44:25.302015   66415 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:44:25.302023   66415 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:44:25.302030   66415 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:44:25.303169   66415 image.go:273] response: 
	I0916 10:44:25.356924   66415 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:44:25.356973   66415 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:44:25.357014   66415 start.go:360] acquireMachinesLock for ha-770465-m02: {Name:mk1ae0810eb0d80ca7ae9fe74f31de5324d2e214 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:44:25.357127   66415 start.go:364] duration metric: took 91.548µs to acquireMachinesLock for "ha-770465-m02"
	I0916 10:44:25.357157   66415 start.go:93] Provisioning new machine with config: &{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:44:25.357232   66415 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 10:44:25.358945   66415 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:44:25.359076   66415 start.go:159] libmachine.API.Create for "ha-770465" (driver="docker")
	I0916 10:44:25.359102   66415 client.go:168] LocalClient.Create starting
	I0916 10:44:25.359196   66415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:44:25.359231   66415 main.go:141] libmachine: Decoding PEM data...
	I0916 10:44:25.359248   66415 main.go:141] libmachine: Parsing certificate...
	I0916 10:44:25.359295   66415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:44:25.359313   66415 main.go:141] libmachine: Decoding PEM data...
	I0916 10:44:25.359328   66415 main.go:141] libmachine: Parsing certificate...
	I0916 10:44:25.359516   66415 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:44:25.376717   66415 network_create.go:77] Found existing network {name:ha-770465 subnet:0xc0019b8810 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 10:44:25.376751   66415 kic.go:121] calculated static IP "192.168.49.3" for the "ha-770465-m02" container
	I0916 10:44:25.376803   66415 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:44:25.394583   66415 cli_runner.go:164] Run: docker volume create ha-770465-m02 --label name.minikube.sigs.k8s.io=ha-770465-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:44:25.413245   66415 oci.go:103] Successfully created a docker volume ha-770465-m02
	I0916 10:44:25.413334   66415 cli_runner.go:164] Run: docker run --rm --name ha-770465-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-770465-m02 --entrypoint /usr/bin/test -v ha-770465-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:44:26.039602   66415 oci.go:107] Successfully prepared a docker volume ha-770465-m02
	I0916 10:44:26.039644   66415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:44:26.039694   66415 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:44:26.039810   66415 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-770465-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:44:30.342140   66415 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-770465-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.302273546s)
	I0916 10:44:30.342171   66415 kic.go:203] duration metric: took 4.302475081s to extract preloaded images to volume ...
	W0916 10:44:30.342298   66415 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:44:30.342384   66415 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:44:30.387993   66415 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-770465-m02 --name ha-770465-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-770465-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-770465-m02 --network ha-770465 --ip 192.168.49.3 --volume ha-770465-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:44:30.687266   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m02 --format={{.State.Running}}
	I0916 10:44:30.705239   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m02 --format={{.State.Status}}
	I0916 10:44:30.723829   66415 cli_runner.go:164] Run: docker exec ha-770465-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:44:30.766096   66415 oci.go:144] the created container "ha-770465-m02" has a running status.
	I0916 10:44:30.766123   66415 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa...
	I0916 10:44:30.971239   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:44:30.971311   66415 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:44:30.993690   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m02 --format={{.State.Status}}
	I0916 10:44:31.011874   66415 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:44:31.011895   66415 kic_runner.go:114] Args: [docker exec --privileged ha-770465-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:44:31.129780   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m02 --format={{.State.Status}}
	I0916 10:44:31.146757   66415 machine.go:93] provisionDockerMachine start ...
	I0916 10:44:31.146848   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:31.168557   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:31.168827   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 10:44:31.168846   66415 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:44:31.339063   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m02
	
	I0916 10:44:31.339097   66415 ubuntu.go:169] provisioning hostname "ha-770465-m02"
	I0916 10:44:31.339169   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:31.357687   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:31.357868   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 10:44:31.357881   66415 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-770465-m02 && echo "ha-770465-m02" | sudo tee /etc/hostname
	I0916 10:44:31.502584   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m02
	
	I0916 10:44:31.502667   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:31.519216   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:31.519395   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 10:44:31.519412   66415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-770465-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-770465-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-770465-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:44:31.651722   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:44:31.651778   66415 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:44:31.651799   66415 ubuntu.go:177] setting up certificates
	I0916 10:44:31.651808   66415 provision.go:84] configureAuth start
	I0916 10:44:31.651864   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m02
	I0916 10:44:31.668932   66415 provision.go:143] copyHostCerts
	I0916 10:44:31.668968   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:44:31.669004   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:44:31.669016   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:44:31.669089   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:44:31.669185   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:44:31.669211   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:44:31.669218   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:44:31.669263   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:44:31.669325   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:44:31.669354   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:44:31.669361   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:44:31.669395   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:44:31.669466   66415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.ha-770465-m02 san=[127.0.0.1 192.168.49.3 ha-770465-m02 localhost minikube]
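The server cert generated above covers every name the machine might be reached by: loopback, the static container IP, the hostname, and the generic localhost/minikube names. A compact sketch of issuing such a SAN certificate with Go's crypto/x509, self-signed for brevity (minikube actually signs with the ca.pem/ca-key.pem pair listed in the auth options above); the key size is an assumption, and the validity reuses the CertExpiration value from the config dump:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048) // key size is an assumption
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-770465-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs taken from the log line above.
    		DNSNames:    []string{"ha-770465-m02", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
    	}
    	// Self-signed here (template doubles as parent); minikube passes its CA instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }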
	I0916 10:44:32.008664   66415 provision.go:177] copyRemoteCerts
	I0916 10:44:32.008736   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:44:32.008791   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:32.027573   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:44:32.124445   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:44:32.124511   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:44:32.146483   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:44:32.146552   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:44:32.169237   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:44:32.169301   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:44:32.192111   66415 provision.go:87] duration metric: took 540.289843ms to configureAuth
	I0916 10:44:32.192143   66415 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:44:32.192327   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:44:32.192339   66415 machine.go:96] duration metric: took 1.045560198s to provisionDockerMachine
	I0916 10:44:32.192345   66415 client.go:171] duration metric: took 6.833236368s to LocalClient.Create
	I0916 10:44:32.192364   66415 start.go:167] duration metric: took 6.833289798s to libmachine.API.Create "ha-770465"
	I0916 10:44:32.192372   66415 start.go:293] postStartSetup for "ha-770465-m02" (driver="docker")
	I0916 10:44:32.192380   66415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:44:32.192420   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:44:32.192452   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:32.209146   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:44:32.304496   66415 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:44:32.307418   66415 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:44:32.307446   66415 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:44:32.307454   66415 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:44:32.307460   66415 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:44:32.307470   66415 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:44:32.307519   66415 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:44:32.307592   66415 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:44:32.307602   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:44:32.307692   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:44:32.315440   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:44:32.336935   66415 start.go:296] duration metric: took 144.547197ms for postStartSetup
	I0916 10:44:32.337279   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m02
	I0916 10:44:32.353412   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:44:32.353669   66415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:44:32.353710   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:32.370893   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:44:32.460437   66415 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:44:32.464654   66415 start.go:128] duration metric: took 7.10740609s to createHost
	I0916 10:44:32.464678   66415 start.go:83] releasing machines lock for "ha-770465-m02", held for 7.107536685s
	I0916 10:44:32.464753   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m02
	I0916 10:44:32.484577   66415 out.go:177] * Found network options:
	I0916 10:44:32.486485   66415 out.go:177]   - NO_PROXY=192.168.49.2
	W0916 10:44:32.487930   66415 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:44:32.487974   66415 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:44:32.488043   66415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:44:32.488083   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:32.488151   66415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:44:32.488222   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:32.507086   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:44:32.507166   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:44:32.674952   66415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:44:32.698208   66415 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:44:32.698295   66415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:44:32.722654   66415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 10:44:32.722678   66415 start.go:495] detecting cgroup driver to use...
	I0916 10:44:32.722706   66415 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:44:32.722746   66415 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:44:32.733969   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:44:32.744574   66415 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:44:32.744626   66415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:44:32.756928   66415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:44:32.770398   66415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:44:32.848552   66415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:44:32.933680   66415 docker.go:233] disabling docker service ...
	I0916 10:44:32.933736   66415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:44:32.951795   66415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:44:32.962537   66415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:44:33.040640   66415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:44:33.117197   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:44:33.127550   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:44:33.142072   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:44:33.151064   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:44:33.159837   66415 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:44:33.159904   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:44:33.168895   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:44:33.177939   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:44:33.186896   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:44:33.195503   66415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:44:33.203773   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:44:33.212377   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:44:33.221013   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:44:33.229862   66415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:44:33.238263   66415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:44:33.245872   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:44:33.319936   66415 ssh_runner.go:195] Run: sudo systemctl restart containerd
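Taken together, the sed edits above leave /etc/containerd/config.toml configured for cgroupfs (matching the "cgroupfs" driver detected on the host), the runc v2 shim, the pause:3.10 sandbox image, and unprivileged ports under the CRI plugin, after which containerd is restarted to pick the file up. A reconstruction of the affected settings, assembled from the sed expressions rather than copied from the node:

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false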
	I0916 10:44:33.428668   66415 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:44:33.428730   66415 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:44:33.432287   66415 start.go:563] Will wait 60s for crictl version
	I0916 10:44:33.432354   66415 ssh_runner.go:195] Run: which crictl
	I0916 10:44:33.435480   66415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:44:33.467247   66415 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:44:33.467316   66415 ssh_runner.go:195] Run: containerd --version
	I0916 10:44:33.489656   66415 ssh_runner.go:195] Run: containerd --version
	I0916 10:44:33.512880   66415 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:44:33.514244   66415 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:44:33.515495   66415 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:44:33.532235   66415 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:44:33.535660   66415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:44:33.546674   66415 mustload.go:65] Loading cluster: ha-770465
	I0916 10:44:33.546842   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:44:33.547035   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:44:33.563709   66415 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:44:33.564100   66415 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465 for IP: 192.168.49.3
	I0916 10:44:33.564115   66415 certs.go:194] generating shared ca certs ...
	I0916 10:44:33.564130   66415 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:33.564264   66415 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:44:33.564313   66415 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:44:33.564323   66415 certs.go:256] generating profile certs ...
	I0916 10:44:33.564395   66415 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key
	I0916 10:44:33.564422   66415 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.64a8388d
	I0916 10:44:33.564433   66415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.64a8388d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 10:44:33.727218   66415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.64a8388d ...
	I0916 10:44:33.727252   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.64a8388d: {Name:mkc920debfcb3a99b73d5e7c12a59e767fd08f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:33.727426   66415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.64a8388d ...
	I0916 10:44:33.727440   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.64a8388d: {Name:mkc04c70d0ba2d121f62899a67c94a0209c797d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:33.727513   66415 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.64a8388d -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt
	I0916 10:44:33.727643   66415 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.64a8388d -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key
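Note the SAN list a few lines up: adding the m02 control plane forces a fresh apiserver serving certificate because the existing one does not include 192.168.49.3. The new cert must name every control-plane IP plus the HA virtual IP (192.168.49.254, the APIServerHAVIP clients connect to) and the in-cluster service IP 10.96.0.1 (the first address of the 10.96.0.0/12 ServiceCIDR), so TLS verification succeeds no matter which endpoint a client hits.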
	I0916 10:44:33.727790   66415 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key
	I0916 10:44:33.727805   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:44:33.727819   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:44:33.727832   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:44:33.727844   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:44:33.727856   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:44:33.727869   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:44:33.727880   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:44:33.727892   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:44:33.727941   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:44:33.727970   66415 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:44:33.727980   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:44:33.728004   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:44:33.728025   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:44:33.728047   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:44:33.728082   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:44:33.728110   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:44:33.728124   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:33.728136   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:44:33.728181   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:33.745014   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:33.832043   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:44:33.835502   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:44:33.846936   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:44:33.850017   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 10:44:33.861745   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:44:33.865357   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:44:33.877065   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:44:33.880343   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 10:44:33.891581   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:44:33.894689   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:44:33.905603   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:44:33.908571   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
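	The stat/scp pairs above read the cluster-wide secrets (service-account keypair, front-proxy CA, etcd CA) off the existing control plane into memory, so the joining control-plane node reuses them instead of minting its own. A minimal sketch of the same fetch, assuming the mapped ssh port and key path from this log and using a plain ssh + cat instead of minikube's internal sshutil runner:

package main

import (
	"fmt"
	"os/exec"
)

// fetchCert reads a remote file into memory over ssh, mirroring the
// stat + "scp ... --> memory" pairs in the log above.
func fetchCert(path string) ([]byte, error) {
	out, err := exec.Command("ssh",
		"-p", "32788", // the mapped container port from the log
		"-i", "/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa",
		"docker@127.0.0.1",
		"sudo cat "+path,
	).Output()
	if err != nil {
		return nil, fmt.Errorf("fetch %s: %w", path, err)
	}
	return out, nil
}

func main() {
	// The shared control-plane material a joining node must reuse rather
	// than regenerate: SA keypair, front-proxy CA, and etcd CA.
	for _, p := range []string{
		"/var/lib/minikube/certs/sa.pub",
		"/var/lib/minikube/certs/sa.key",
		"/var/lib/minikube/certs/front-proxy-ca.crt",
		"/var/lib/minikube/certs/front-proxy-ca.key",
		"/var/lib/minikube/certs/etcd/ca.crt",
		"/var/lib/minikube/certs/etcd/ca.key",
	} {
		b, err := fetchCert(p)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", p, len(b))
	}
}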
	I0916 10:44:33.919352   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:44:33.941883   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:44:33.964366   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:44:33.986170   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:44:34.008319   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 10:44:34.031142   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:44:34.053026   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:44:34.074843   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:44:34.096695   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:44:34.119039   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:44:34.140277   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:44:34.161465   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:44:34.177873   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 10:44:34.194185   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:44:34.209874   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 10:44:34.225730   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:44:34.241906   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:44:34.257709   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:44:34.273936   66415 ssh_runner.go:195] Run: openssl version
	I0916 10:44:34.279150   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:44:34.287998   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:44:34.291319   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:44:34.291369   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:44:34.297633   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:44:34.306391   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:44:34.315072   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:34.318294   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:34.318352   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:34.324456   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:44:34.333012   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:44:34.341622   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:44:34.345125   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:44:34.345171   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:44:34.351507   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
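	Each CA above lands twice: once under /usr/share/ca-certificates, and once as a symlink named after the OpenSSL subject-name hash (the output of openssl x509 -hash -noout, e.g. b5213941 for minikubeCA), which is the lookup key OpenSSL uses when scanning /etc/ssl/certs. A minimal sketch of that hash-link step, assuming an openssl binary on PATH and write access to /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log

	// Compute the subject-name hash, as in the log's openssl invocation.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941", matching the log
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)

	// os.Symlink fails if the link exists; the log's `ln -fs` overwrites,
	// so remove any stale link first.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}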
	I0916 10:44:34.360056   66415 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:44:34.363128   66415 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:44:34.363177   66415 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.1 containerd true true} ...
	I0916 10:44:34.363262   66415 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-770465-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:44:34.363292   66415 kube-vip.go:115] generating kube-vip config ...
	I0916 10:44:34.363334   66415 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:44:34.374326   66415 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:44:34.374400   66415 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
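	The generated manifest runs kube-vip in ARP mode: the env block pins the VIP (192.168.49.254) to eth0 and enables lease-based leader election (lease plndr-cp-lock, 5s duration, 3s renew deadline, 1s retry), with NET_ADMIN/NET_RAW so the elected pod can claim the address. A minimal client-go sketch, assuming a reachable kubeconfig, that reads that Lease to see which control-plane node currently holds the VIP:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; in this run it would be the minikube kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// vip_leasename / cp_namespace from the manifest above.
	lease, err := cs.CoordinationV1().Leases("kube-system").
		Get(context.Background(), "plndr-cp-lock", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.HolderIdentity != nil {
		// The holder is the node whose kube-vip pod won the election and
		// currently answers ARP for 192.168.49.254.
		fmt.Println("VIP holder:", *lease.Spec.HolderIdentity)
	}
}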
	I0916 10:44:34.374460   66415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:44:34.382536   66415 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:44:34.382606   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:44:34.390792   66415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:44:34.407366   66415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:44:34.424930   66415 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:44:34.441722   66415 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:44:34.445008   66415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
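	The one-liner above keeps /etc/hosts idempotent: any stale control-plane.minikube.internal line is filtered out before the current VIP mapping is appended, and the file is swapped in via a temp copy (the /tmp/h.$$ + cp). The same logic as a minimal Go sketch (not minikube's code):

package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.49.254\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}
	kept := []string{}
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Same filter as the log's grep -v $'\tcontrol-plane.minikube.internal$'.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	// Write to a temp file, then replace /etc/hosts in one step.
	tmp := hostsPath + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
	if err := os.Rename(tmp, hostsPath); err != nil {
		panic(err)
	}
}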
	I0916 10:44:34.455194   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:44:34.535021   66415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:44:34.547499   66415 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:44:34.547796   66415 start.go:317] joinCluster: &{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:44:34.547925   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:44:34.547968   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:34.566237   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:34.708100   66415 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:44:34.708139   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2ux8zi.nv82uirjdh1l2nfj --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-770465-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 10:44:38.829931   66415 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2ux8zi.nv82uirjdh1l2nfj --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-770465-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (4.121756428s)
	I0916 10:44:38.829969   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:44:39.645693   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-770465-m02 minikube.k8s.io/updated_at=2024_09_16T10_44_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-770465 minikube.k8s.io/primary=false
	I0916 10:44:39.742050   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-770465-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 10:44:39.937226   66415 start.go:319] duration metric: took 5.389485167s to joinCluster
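	The join itself is a two-step exchange visible in the lines above: a fresh join command is minted on the primary with kubeadm token create --print-join-command --ttl=0, then replayed on m02 with the extra control-plane flags. A hedged sketch of how those two pieces compose; the binary path, IPs, and node name are the ones from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.31.1/kubeadm"

	// Step 1 (runs on the primary node): print a join command carrying a
	// non-expiring bootstrap token, as in the log's token create call.
	out, err := exec.Command("sudo", kubeadm, "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	join := strings.TrimSpace(string(out))

	// Step 2 (runs on the joining node): append the flags that turn this
	// into a control-plane join rather than a worker join.
	join += " --control-plane" +
		" --apiserver-advertise-address=192.168.49.3" +
		" --apiserver-bind-port=8443" +
		" --node-name=ha-770465-m02" +
		" --cri-socket unix:///run/containerd/containerd.sock"

	// minikube then executes this string through sudo env PATH=... bash -c.
	fmt.Println(join)
}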
	I0916 10:44:39.937315   66415 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:44:39.937787   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:44:39.939301   66415 out.go:177] * Verifying Kubernetes components...
	I0916 10:44:39.940876   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:44:40.422539   66415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:44:40.439174   66415 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:44:40.439579   66415 kapi.go:59] client config for ha-770465: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:44:40.439680   66415 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 10:44:40.440006   66415 node_ready.go:35] waiting up to 6m0s for node "ha-770465-m02" to be "Ready" ...
	I0916 10:44:40.440125   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:40.440138   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.440152   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.440161   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.448886   66415 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0916 10:44:40.449675   66415 node_ready.go:49] node "ha-770465-m02" has status "Ready":"True"
	I0916 10:44:40.449701   66415 node_ready.go:38] duration metric: took 9.668969ms for node "ha-770465-m02" to be "Ready" ...
	I0916 10:44:40.449713   66415 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:44:40.449800   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:44:40.449814   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.449825   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.449833   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.454089   66415 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:44:40.463201   66415 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:40.463354   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:44:40.463368   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.463376   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.463386   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.466421   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:40.467104   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:40.467120   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.467130   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.467135   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.469522   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:40.964218   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:44:40.964239   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.964247   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.964252   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.967136   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:40.967850   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:40.967903   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.967919   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.967929   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.970268   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:40.970958   66415 pod_ready.go:93] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:40.970980   66415 pod_ready.go:82] duration metric: took 507.742956ms for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:40.970990   66415 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:40.971053   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:40.971061   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.971068   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.971071   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.974690   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:40.975248   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:40.975265   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.975274   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.975280   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.977441   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:41.471524   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:41.471546   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:41.471556   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:41.471561   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:41.474404   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:41.475254   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:41.475276   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:41.475287   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:41.475295   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:41.477551   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:41.972038   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:41.972060   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:41.972071   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:41.972089   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:41.974686   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:41.975507   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:41.975528   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:41.975538   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:41.975543   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:41.977837   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.471933   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:42.471960   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:42.471972   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.471977   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.474859   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.475561   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:42.475578   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:42.475586   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.475591   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.477822   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.971628   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:42.971655   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:42.971673   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.971680   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.974564   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.975358   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:42.975375   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:42.975388   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.975393   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.977642   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.978137   66415 pod_ready.go:103] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:43.471554   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:43.471576   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:43.471587   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:43.471593   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:43.474399   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:43.475010   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:43.475028   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:43.475038   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:43.475043   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:43.477419   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:43.971286   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:43.971306   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:43.971313   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:43.971318   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:43.973702   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:43.974286   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:43.974301   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:43.974308   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:43.974313   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:43.976360   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:44.471321   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:44.471343   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:44.471350   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:44.471354   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:44.473967   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:44.474610   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:44.474627   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:44.474637   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:44.474642   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:44.476820   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:44.971236   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:44.971257   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:44.971268   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:44.971277   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:44.973932   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:44.974727   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:44.974744   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:44.974751   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:44.974756   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:44.976923   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:45.471849   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:45.471873   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:45.471884   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:45.471888   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:45.474566   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:45.475187   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:45.475205   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:45.475212   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:45.475217   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:45.477291   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:45.477713   66415 pod_ready.go:103] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:45.971915   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:45.971935   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:45.971943   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:45.971946   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:45.974615   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:45.975214   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:45.975228   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:45.975237   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:45.975240   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:45.977499   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:46.471338   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:46.471361   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:46.471369   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:46.471375   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:46.474271   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:46.474975   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:46.474990   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:46.474998   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:46.475004   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:46.477142   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:46.971971   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:46.971992   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:46.972000   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:46.972003   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:46.974645   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:46.975233   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:46.975249   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:46.975256   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:46.975260   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:46.977325   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:47.471418   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:47.471443   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:47.471453   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:47.471458   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:47.474346   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:47.475089   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:47.475111   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:47.475121   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:47.475129   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:47.477434   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:47.477948   66415 pod_ready.go:103] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:47.972220   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:47.972241   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:47.972248   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:47.972253   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:47.974454   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:47.975031   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:47.975048   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:47.975056   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:47.975060   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:47.977117   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:48.471961   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:48.471983   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:48.471992   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:48.471995   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:48.474686   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:48.475339   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:48.475357   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:48.475366   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:48.475372   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:48.477680   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:48.971495   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:48.971516   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:48.971524   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:48.971530   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:48.974253   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:48.974944   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:48.974964   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:48.974975   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:48.974979   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:48.977164   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:49.472031   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:49.472049   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:49.472055   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:49.472058   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:49.474664   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:49.475283   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:49.475299   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:49.475307   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:49.475313   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:49.477626   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:49.478046   66415 pod_ready.go:103] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:49.971482   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:49.971504   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:49.971512   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:49.971515   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:49.977019   66415 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:44:49.977646   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:49.977664   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:49.977669   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:49.977673   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:49.979911   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:50.471907   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:50.471933   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.471944   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.471950   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.474692   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:50.475399   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:50.475415   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.475425   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.475430   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.477580   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:50.971347   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:50.971368   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.971376   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.971380   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.974251   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:50.975002   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:50.975020   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.975028   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.975032   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.977288   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:50.977740   66415 pod_ready.go:93] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:50.977758   66415 pod_ready.go:82] duration metric: took 10.00676272s for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
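	The paired GETs above (the pod, then its node) repeat on a roughly 500ms cadence until the pod's Ready condition flips to True, about 10s for this coredns replica. A minimal client-go sketch of the same wait, assuming a reachable kubeconfig and using the pod name from the log; this is not minikube's pod_ready implementation, just the polling pattern:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// 500ms interval and 6m budget match the cadence and timeout in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").
				Get(ctx, "coredns-7c65d6cfc9-sbs22", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient; keep polling
			}
			return podReady(pod), nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}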
	I0916 10:44:50.977769   66415 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:50.977830   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465
	I0916 10:44:50.977838   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.977845   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.977849   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.979915   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:50.980392   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:50.980406   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.980413   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.980416   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.982289   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:50.982729   66415 pod_ready.go:93] pod "etcd-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:50.982748   66415 pod_ready.go:82] duration metric: took 4.970311ms for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:50.982757   66415 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:50.982808   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:50.982816   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.982822   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.982827   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.984719   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:50.985276   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:50.985292   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.985299   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.985304   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.987264   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:51.483906   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:51.483928   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:51.483936   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.483941   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.486635   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:51.487196   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:51.487213   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:51.487221   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.487225   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.489524   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:51.983390   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:51.983413   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:51.983421   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.983424   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.986343   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:51.986909   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:51.986927   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:51.986934   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.986938   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.989448   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:52.483469   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:52.483492   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:52.483500   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:52.483504   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:52.486301   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:52.486911   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:52.486927   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:52.486932   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:52.486935   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:52.489214   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:52.983022   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:52.983045   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:52.983055   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:52.983061   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:52.985856   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:52.986473   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:52.986492   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:52.986502   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:52.986510   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:52.988919   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:52.989377   66415 pod_ready.go:103] pod "etcd-ha-770465-m02" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:53.483836   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:53.483863   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:53.483871   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:53.483875   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:53.486813   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:53.487481   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:53.487500   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:53.487510   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:53.487517   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:53.490185   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:53.983005   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:53.983026   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:53.983033   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:53.983037   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:53.985814   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:53.986395   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:53.986414   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:53.986425   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:53.986431   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:53.988667   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:54.483764   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:54.483790   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:54.483807   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:54.483816   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:54.486551   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:54.487231   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:54.487252   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:54.487264   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:54.487269   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:54.489916   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:54.983912   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:54.983933   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:54.983941   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:54.983946   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:54.986591   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:54.987196   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:54.987212   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:54.987222   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:54.987226   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:54.989780   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:54.990316   66415 pod_ready.go:103] pod "etcd-ha-770465-m02" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:55.483155   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:55.483177   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:55.483187   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:55.483191   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:55.485960   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:55.486554   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:55.486573   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:55.486581   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:55.486586   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:55.489033   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:55.982908   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:55.982929   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:55.982937   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:55.982941   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:55.985669   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:55.986372   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:55.986389   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:55.986396   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:55.986401   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:55.988702   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:56.483520   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:56.483540   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:56.483547   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:56.483552   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:56.486337   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:56.486960   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:56.486978   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:56.486986   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:56.486991   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:56.489646   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:56.983120   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:56.983141   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:56.983148   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:56.983152   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:56.985997   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:56.986804   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:56.986822   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:56.986832   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:56.986837   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:56.989239   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.483539   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:57.483563   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.483571   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.483575   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.486475   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.487046   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:57.487062   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.487072   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.487078   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.489747   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.490211   66415 pod_ready.go:93] pod "etcd-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:57.490231   66415 pod_ready.go:82] duration metric: took 6.507468086s for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
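The loop above (repeated GETs of the pod, and of its node, on a roughly 500ms cadence until the Ready condition flips) can be reproduced with client-go. The following is a minimal sketch, not minikube's actual pod_ready helper: the kubeconfig path is an assumption, and the node re-fetch that minikube also performs on each iteration is omitted for brevity.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig location; minikube builds its client config differently.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Poll every 500ms (the cadence visible in the timestamps above) for up to
        // 6 minutes, the same budget as "waiting up to 6m0s" in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-770465-m02", metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }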
	I0916 10:44:57.490247   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.490304   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465
	I0916 10:44:57.490309   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.490318   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.490322   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.492582   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.493154   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:57.493172   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.493179   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.493184   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.495245   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.495696   66415 pod_ready.go:93] pod "kube-apiserver-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:57.495714   66415 pod_ready.go:82] duration metric: took 5.461087ms for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.495726   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.495865   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m02
	I0916 10:44:57.495878   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.495888   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.495894   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.498354   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.498981   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:57.498994   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.499002   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.499007   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.501125   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.501560   66415 pod_ready.go:93] pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:57.501580   66415 pod_ready.go:82] duration metric: took 5.847741ms for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.501590   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.501644   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465
	I0916 10:44:57.501655   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.501663   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.501669   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.503690   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.504409   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:57.504425   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.504436   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.504444   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.506188   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:57.506577   66415 pod_ready.go:93] pod "kube-controller-manager-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:57.506596   66415 pod_ready.go:82] duration metric: took 4.999332ms for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.506605   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.506653   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m02
	I0916 10:44:57.506661   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.506667   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.506675   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.508471   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:57.509039   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:57.509055   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.509061   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.509066   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.510842   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:57.511253   66415 pod_ready.go:93] pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:57.511279   66415 pod_ready.go:82] duration metric: took 4.665305ms for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.511290   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.683630   66415 request.go:632] Waited for 172.264763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:44:57.683690   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:44:57.683695   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.683701   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.683706   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.686543   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.884519   66415 request.go:632] Waited for 197.380218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:57.884599   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:57.884611   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.884621   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.884633   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.887441   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.887923   66415 pod_ready.go:93] pod "kube-proxy-4qgcs" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:57.887942   66415 pod_ready.go:82] duration metric: took 376.646228ms for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
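The "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's token-bucket rate limiter, whose defaults (QPS 5, burst 10) the test client appears to be hitting; the delay is injected on the client, not by the API server. A sketch of how a caller would raise those limits on a rest.Config; the function name and values are illustrative, not what minikube uses:

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newLessThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // client-go default is 5 requests/second
        cfg.Burst = 100 // client-go default is 10
        return kubernetes.NewForConfig(cfg)
    }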
	I0916 10:44:57.887951   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:58.084019   66415 request.go:632] Waited for 196.003042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:44:58.084110   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:44:58.084117   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:58.084124   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:58.084133   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:58.086867   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:58.283716   66415 request.go:632] Waited for 196.276486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:58.283804   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:58.283822   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:58.283832   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:58.283838   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:58.286294   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:58.286746   66415 pod_ready.go:93] pod "kube-proxy-gd2mt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:58.286764   66415 pod_ready.go:82] duration metric: took 398.806827ms for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:58.286775   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:58.483879   66415 request.go:632] Waited for 197.025817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:44:58.483931   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:44:58.483936   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:58.483943   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:58.483947   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:58.486667   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:58.684569   66415 request.go:632] Waited for 197.3405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:58.684662   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:58.684672   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:58.684680   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:58.684683   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:58.687093   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:58.687525   66415 pod_ready.go:93] pod "kube-scheduler-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:58.687542   66415 pod_ready.go:82] duration metric: took 400.759791ms for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:58.687555   66415 pod_ready.go:39] duration metric: took 18.237829446s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:44:58.687576   66415 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:44:58.687634   66415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:44:58.698573   66415 api_server.go:72] duration metric: took 18.761215592s to wait for apiserver process to appear ...
	I0916 10:44:58.698608   66415 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:44:58.698628   66415 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:44:58.702854   66415 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
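The healthz probe above is a plain GET against a non-resource path that returns the literal body "ok" when the apiserver is healthy. Assuming an existing clientset, the same check can be expressed through the discovery REST client; a sketch, not the api_server.go implementation:

    func apiServerHealthy(ctx context.Context, cs *kubernetes.Clientset) error {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        if string(body) != "ok" { // healthy apiservers answer with the exact body "ok"
            return fmt.Errorf("unexpected healthz response: %q", body)
        }
        return nil
    }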
	I0916 10:44:58.702934   66415 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0916 10:44:58.702942   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:58.702950   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:58.702955   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:58.703681   66415 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:44:58.703843   66415 api_server.go:141] control plane version: v1.31.1
	I0916 10:44:58.703867   66415 api_server.go:131] duration metric: took 5.250776ms to wait for apiserver health ...
	I0916 10:44:58.703874   66415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:44:58.884320   66415 request.go:632] Waited for 180.346886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:44:58.884395   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:44:58.884404   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:58.884415   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:58.884425   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:58.888635   66415 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:44:58.892728   66415 system_pods.go:59] 17 kube-system pods found
	I0916 10:44:58.892780   66415 system_pods.go:61] "coredns-7c65d6cfc9-9lw9q" [4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8] Running
	I0916 10:44:58.892791   66415 system_pods.go:61] "coredns-7c65d6cfc9-sbs22" [89925692-76b4-481f-bac7-16f06bea792a] Running
	I0916 10:44:58.892797   66415 system_pods.go:61] "etcd-ha-770465" [041e8a27-b6a3-4201-990f-c367da8c5eb5] Running
	I0916 10:44:58.892803   66415 system_pods.go:61] "etcd-ha-770465-m02" [1f922c60-bf68-44df-aa8c-42dce50e4dac] Running
	I0916 10:44:58.892808   66415 system_pods.go:61] "kindnet-grjh8" [98658171-1486-44a9-8a20-0d77ea019206] Running
	I0916 10:44:58.892814   66415 system_pods.go:61] "kindnet-kht59" [3124d723-6fc6-4dce-9ecf-bb40b4665208] Running
	I0916 10:44:58.892820   66415 system_pods.go:61] "kube-apiserver-ha-770465" [aa8ba784-661f-4074-b0d3-f98cddf8ce8c] Running
	I0916 10:44:58.892826   66415 system_pods.go:61] "kube-apiserver-ha-770465-m02" [7acdd845-890e-449a-b112-2efbc19a7ef5] Running
	I0916 10:44:58.892835   66415 system_pods.go:61] "kube-controller-manager-ha-770465" [f0a59ee1-bf5e-432d-a185-f9497c75ada4] Running
	I0916 10:44:58.892841   66415 system_pods.go:61] "kube-controller-manager-ha-770465-m02" [55d7421b-5679-416d-a6ae-c36738c1ec32] Running
	I0916 10:44:58.892846   66415 system_pods.go:61] "kube-proxy-4qgcs" [0dbed632-c44c-4a17-8486-b328e2cc41d3] Running
	I0916 10:44:58.892853   66415 system_pods.go:61] "kube-proxy-gd2mt" [fc3bf04d-635a-4264-883b-2fd72cac2e24] Running
	I0916 10:44:58.892859   66415 system_pods.go:61] "kube-scheduler-ha-770465" [85f95d4d-a2d4-45cf-8afe-be446333b9d0] Running
	I0916 10:44:58.892865   66415 system_pods.go:61] "kube-scheduler-ha-770465-m02" [ec894fe5-a992-49f2-a96f-caab3848f7dd] Running
	I0916 10:44:58.892873   66415 system_pods.go:61] "kube-vip-ha-770465" [8fe17ef3-e8ea-42a9-bfd4-5f556d0a3f77] Running
	I0916 10:44:58.892878   66415 system_pods.go:61] "kube-vip-ha-770465-m02" [ffd7ee7b-efe1-4d2d-b2f8-5fd898de7dbf] Running
	I0916 10:44:58.892883   66415 system_pods.go:61] "storage-provisioner" [cf470925-4874-4744-8015-700e93ab924f] Running
	I0916 10:44:58.892892   66415 system_pods.go:74] duration metric: took 189.008696ms to wait for pod list to return data ...
	I0916 10:44:58.892904   66415 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:44:59.084361   66415 request.go:632] Waited for 191.360753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:44:59.084413   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:44:59.084418   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:59.084432   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:59.084440   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:59.087222   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:59.087471   66415 default_sa.go:45] found service account: "default"
	I0916 10:44:59.087489   66415 default_sa.go:55] duration metric: took 194.578547ms for default service account to be created ...
	I0916 10:44:59.087497   66415 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:44:59.283908   66415 request.go:632] Waited for 196.345018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:44:59.283997   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:44:59.284011   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:59.284024   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:59.284035   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:59.287894   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:59.291907   66415 system_pods.go:86] 17 kube-system pods found
	I0916 10:44:59.291934   66415 system_pods.go:89] "coredns-7c65d6cfc9-9lw9q" [4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8] Running
	I0916 10:44:59.291940   66415 system_pods.go:89] "coredns-7c65d6cfc9-sbs22" [89925692-76b4-481f-bac7-16f06bea792a] Running
	I0916 10:44:59.291944   66415 system_pods.go:89] "etcd-ha-770465" [041e8a27-b6a3-4201-990f-c367da8c5eb5] Running
	I0916 10:44:59.291948   66415 system_pods.go:89] "etcd-ha-770465-m02" [1f922c60-bf68-44df-aa8c-42dce50e4dac] Running
	I0916 10:44:59.291952   66415 system_pods.go:89] "kindnet-grjh8" [98658171-1486-44a9-8a20-0d77ea019206] Running
	I0916 10:44:59.291958   66415 system_pods.go:89] "kindnet-kht59" [3124d723-6fc6-4dce-9ecf-bb40b4665208] Running
	I0916 10:44:59.291964   66415 system_pods.go:89] "kube-apiserver-ha-770465" [aa8ba784-661f-4074-b0d3-f98cddf8ce8c] Running
	I0916 10:44:59.291970   66415 system_pods.go:89] "kube-apiserver-ha-770465-m02" [7acdd845-890e-449a-b112-2efbc19a7ef5] Running
	I0916 10:44:59.291978   66415 system_pods.go:89] "kube-controller-manager-ha-770465" [f0a59ee1-bf5e-432d-a185-f9497c75ada4] Running
	I0916 10:44:59.291988   66415 system_pods.go:89] "kube-controller-manager-ha-770465-m02" [55d7421b-5679-416d-a6ae-c36738c1ec32] Running
	I0916 10:44:59.291996   66415 system_pods.go:89] "kube-proxy-4qgcs" [0dbed632-c44c-4a17-8486-b328e2cc41d3] Running
	I0916 10:44:59.292003   66415 system_pods.go:89] "kube-proxy-gd2mt" [fc3bf04d-635a-4264-883b-2fd72cac2e24] Running
	I0916 10:44:59.292007   66415 system_pods.go:89] "kube-scheduler-ha-770465" [85f95d4d-a2d4-45cf-8afe-be446333b9d0] Running
	I0916 10:44:59.292013   66415 system_pods.go:89] "kube-scheduler-ha-770465-m02" [ec894fe5-a992-49f2-a96f-caab3848f7dd] Running
	I0916 10:44:59.292017   66415 system_pods.go:89] "kube-vip-ha-770465" [8fe17ef3-e8ea-42a9-bfd4-5f556d0a3f77] Running
	I0916 10:44:59.292022   66415 system_pods.go:89] "kube-vip-ha-770465-m02" [ffd7ee7b-efe1-4d2d-b2f8-5fd898de7dbf] Running
	I0916 10:44:59.292025   66415 system_pods.go:89] "storage-provisioner" [cf470925-4874-4744-8015-700e93ab924f] Running
	I0916 10:44:59.292032   66415 system_pods.go:126] duration metric: took 204.529072ms to wait for k8s-apps to be running ...
	I0916 10:44:59.292040   66415 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:44:59.292098   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:44:59.302720   66415 system_svc.go:56] duration metric: took 10.671731ms WaitForService to wait for kubelet
	I0916 10:44:59.302745   66415 kubeadm.go:582] duration metric: took 19.365391948s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:44:59.302761   66415 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:44:59.484174   66415 request.go:632] Waited for 181.324017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:44:59.484220   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:44:59.484225   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:59.484234   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:59.484241   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:59.487361   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:59.488071   66415 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:44:59.488097   66415 node_conditions.go:123] node cpu capacity is 8
	I0916 10:44:59.488111   66415 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:44:59.488116   66415 node_conditions.go:123] node cpu capacity is 8
	I0916 10:44:59.488121   66415 node_conditions.go:105] duration metric: took 185.35596ms to run NodePressure ...
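The capacity figures logged above come straight off each Node object's status, and the NodePressure verification amounts to checking the pressure conditions on the same objects. A minimal sketch under the same clientset assumption:

    func verifyNodes(ctx context.Context, cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
            for _, c := range n.Status.Conditions {
                // MemoryPressure and DiskPressure should both be False on a healthy node.
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
                    c.Status == corev1.ConditionTrue {
                    return fmt.Errorf("node %s reports %s", n.Name, c.Type)
                }
            }
        }
        return nil
    }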
	I0916 10:44:59.488134   66415 start.go:241] waiting for startup goroutines ...
	I0916 10:44:59.488187   66415 start.go:255] writing updated cluster config ...
	I0916 10:44:59.490522   66415 out.go:201] 
	I0916 10:44:59.491848   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:44:59.491958   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:44:59.493433   66415 out.go:177] * Starting "ha-770465-m03" control-plane node in "ha-770465" cluster
	I0916 10:44:59.494431   66415 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:44:59.495519   66415 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:44:59.496566   66415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:44:59.496592   66415 cache.go:56] Caching tarball of preloaded images
	I0916 10:44:59.496593   66415 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:44:59.496681   66415 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:44:59.496694   66415 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:44:59.496800   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	W0916 10:44:59.514737   66415 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:44:59.514756   66415 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:44:59.514845   66415 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:44:59.514862   66415 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:44:59.514869   66415 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:44:59.514880   66415 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:44:59.514891   66415 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:44:59.515886   66415 image.go:273] response: 
	I0916 10:44:59.564683   66415 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:44:59.564725   66415 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:44:59.564763   66415 start.go:360] acquireMachinesLock for ha-770465-m03: {Name:mk5962b775140909e26682052ad5dc2dfc9dc910 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:44:59.564857   66415 start.go:364] duration metric: took 76.168µs to acquireMachinesLock for "ha-770465-m03"
	I0916 10:44:59.564881   66415 start.go:93] Provisioning new machine with config: &{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:44:59.564979   66415 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 10:44:59.566542   66415 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:44:59.566644   66415 start.go:159] libmachine.API.Create for "ha-770465" (driver="docker")
	I0916 10:44:59.566676   66415 client.go:168] LocalClient.Create starting
	I0916 10:44:59.566751   66415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:44:59.566779   66415 main.go:141] libmachine: Decoding PEM data...
	I0916 10:44:59.566794   66415 main.go:141] libmachine: Parsing certificate...
	I0916 10:44:59.566842   66415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:44:59.566860   66415 main.go:141] libmachine: Decoding PEM data...
	I0916 10:44:59.566870   66415 main.go:141] libmachine: Parsing certificate...
	I0916 10:44:59.567053   66415 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:44:59.582958   66415 network_create.go:77] Found existing network {name:ha-770465 subnet:0xc001b6ede0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 10:44:59.583001   66415 kic.go:121] calculated static IP "192.168.49.4" for the "ha-770465-m03" container
	I0916 10:44:59.583055   66415 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:44:59.598626   66415 cli_runner.go:164] Run: docker volume create ha-770465-m03 --label name.minikube.sigs.k8s.io=ha-770465-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:44:59.615801   66415 oci.go:103] Successfully created a docker volume ha-770465-m03
	I0916 10:44:59.615876   66415 cli_runner.go:164] Run: docker run --rm --name ha-770465-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-770465-m03 --entrypoint /usr/bin/test -v ha-770465-m03:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:45:00.190475   66415 oci.go:107] Successfully prepared a docker volume ha-770465-m03
	I0916 10:45:00.190519   66415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:45:00.190543   66415 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:45:00.190614   66415 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-770465-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:45:04.534280   66415 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-770465-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.343625941s)
	I0916 10:45:04.534312   66415 kic.go:203] duration metric: took 4.343765248s to extract preloaded images to volume ...
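The extraction step above works by running a throwaway container that mounts both the lz4 preload tarball (read-only) and the target volume, then untars into the volume so the new node starts with its image store pre-populated. A sketch of that pattern with os/exec, mirroring the flags in the logged command; the function name is illustrative:

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload untars an lz4-compressed image preload into a docker volume
    // via a short-lived helper container, as in the log above.
    func extractPreload(tarball, volume, baseImage string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            baseImage,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("preload extraction failed: %v: %s", err, out)
        }
        return nil
    }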
	W0916 10:45:04.534449   66415 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:45:04.534558   66415 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:45:04.580679   66415 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-770465-m03 --name ha-770465-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-770465-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-770465-m03 --network ha-770465 --ip 192.168.49.4 --volume ha-770465-m03:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:45:04.869227   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m03 --format={{.State.Running}}
	I0916 10:45:04.887147   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m03 --format={{.State.Status}}
	I0916 10:45:04.906000   66415 cli_runner.go:164] Run: docker exec ha-770465-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:45:04.948553   66415 oci.go:144] the created container "ha-770465-m03" has a running status.
	I0916 10:45:04.948587   66415 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa...
	I0916 10:45:05.207508   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:45:05.207553   66415 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:45:05.231999   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m03 --format={{.State.Status}}
	I0916 10:45:05.261630   66415 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:45:05.261651   66415 kic_runner.go:114] Args: [docker exec --privileged ha-770465-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:45:05.334531   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m03 --format={{.State.Status}}
	I0916 10:45:05.357202   66415 machine.go:93] provisionDockerMachine start ...
	I0916 10:45:05.357327   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:05.380706   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:45:05.380963   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0916 10:45:05.380981   66415 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:45:05.575184   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m03
	
	I0916 10:45:05.575210   66415 ubuntu.go:169] provisioning hostname "ha-770465-m03"
	I0916 10:45:05.575277   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:05.593397   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:45:05.593595   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0916 10:45:05.593610   66415 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-770465-m03 && echo "ha-770465-m03" | sudo tee /etc/hostname
	I0916 10:45:05.742858   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m03
	
	I0916 10:45:05.742938   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:05.760321   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:45:05.760542   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0916 10:45:05.760562   66415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-770465-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-770465-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-770465-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:45:05.895802   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
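Each "About to run SSH command" / "SSH cmd err, output" pair above is one command executed over the forwarded SSH port (127.0.0.1:32798) with the key generated for the node. A minimal sketch with golang.org/x/crypto/ssh; minikube's real transport sits behind libmachine, so the function and its arguments are illustrative assumptions:

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    func runOverSSH(addr, user, keyPath, command string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(command)
        return string(out), err
    }

For example, the hostname step above would correspond to runOverSSH("127.0.0.1:32798", "docker", keyPath, `sudo hostname ha-770465-m03 && echo "ha-770465-m03" | sudo tee /etc/hostname`).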
	I0916 10:45:05.895834   66415 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:45:05.895889   66415 ubuntu.go:177] setting up certificates
	I0916 10:45:05.895906   66415 provision.go:84] configureAuth start
	I0916 10:45:05.895985   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m03
	I0916 10:45:05.911809   66415 provision.go:143] copyHostCerts
	I0916 10:45:05.911848   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:45:05.911876   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:45:05.911884   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:45:05.911946   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:45:05.912022   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:45:05.912039   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:45:05.912045   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:45:05.912076   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:45:05.912150   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:45:05.912173   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:45:05.912183   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:45:05.912216   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:45:05.912291   66415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.ha-770465-m03 san=[127.0.0.1 192.168.49.4 ha-770465-m03 localhost minikube]
	I0916 10:45:06.068789   66415 provision.go:177] copyRemoteCerts
	I0916 10:45:06.068869   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:45:06.068904   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:06.085761   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:45:06.184583   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:45:06.184648   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:45:06.207594   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:45:06.207661   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:45:06.231109   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:45:06.231182   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:45:06.253831   66415 provision.go:87] duration metric: took 357.907291ms to configureAuth
	I0916 10:45:06.253858   66415 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:45:06.254076   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:45:06.254088   66415 machine.go:96] duration metric: took 896.863995ms to provisionDockerMachine
	I0916 10:45:06.254094   66415 client.go:171] duration metric: took 6.687407939s to LocalClient.Create
	I0916 10:45:06.254111   66415 start.go:167] duration metric: took 6.68746971s to libmachine.API.Create "ha-770465"
	I0916 10:45:06.254121   66415 start.go:293] postStartSetup for "ha-770465-m03" (driver="docker")
	I0916 10:45:06.254129   66415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:45:06.254170   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:45:06.254205   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:06.271529   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:45:06.369004   66415 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:45:06.372170   66415 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:45:06.372213   66415 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:45:06.372224   66415 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:45:06.372232   66415 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:45:06.372245   66415 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:45:06.372305   66415 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:45:06.372405   66415 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:45:06.372419   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:45:06.372527   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:45:06.381102   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:45:06.405113   66415 start.go:296] duration metric: took 150.97696ms for postStartSetup
	I0916 10:45:06.405529   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m03
	I0916 10:45:06.424234   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:45:06.424580   66415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:06.424633   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:06.442721   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:45:06.536953   66415 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:45:06.541227   66415 start.go:128] duration metric: took 6.976233835s to createHost
	I0916 10:45:06.541247   66415 start.go:83] releasing machines lock for "ha-770465-m03", held for 6.976380181s
	I0916 10:45:06.541308   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m03
	I0916 10:45:06.560689   66415 out.go:177] * Found network options:
	I0916 10:45:06.562367   66415 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 10:45:06.563605   66415 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:45:06.563625   66415 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:45:06.563649   66415 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:45:06.563660   66415 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:45:06.563765   66415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:45:06.563815   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:06.563856   66415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:45:06.563917   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:06.582285   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:45:06.582354   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:45:06.672545   66415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:45:06.755905   66415 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:45:06.755987   66415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:45:06.783569   66415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
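After the loopback patch above, the two sed expressions leave the loopback CNI config with a "name" key inserted and cniVersion pinned to 1.0.0; the file should look roughly like the following (only the keys touched by the log are certain):

    {
      "cniVersion": "1.0.0",
      "name": "loopback",
      "type": "loopback"
    }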
	I0916 10:45:06.783590   66415 start.go:495] detecting cgroup driver to use...
	I0916 10:45:06.783619   66415 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:45:06.783661   66415 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:45:06.795082   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:45:06.805528   66415 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:45:06.805583   66415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:45:06.818406   66415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:45:06.831869   66415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:45:06.911232   66415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:45:06.991504   66415 docker.go:233] disabling docker service ...
	I0916 10:45:06.991558   66415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:45:07.009613   66415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:45:07.019917   66415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:45:07.096709   66415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:45:07.183239   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:45:07.193849   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:45:07.208791   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:45:07.218040   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:45:07.227010   66415 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:45:07.227070   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:45:07.235760   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:45:07.244619   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:45:07.253413   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:45:07.262188   66415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:45:07.270742   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:45:07.280436   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:45:07.289512   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:45:07.299610   66415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:45:07.307608   66415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:45:07.315452   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:45:07.392303   66415 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:45:07.492075   66415 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:45:07.492156   66415 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
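
start.go gives containerd 60 seconds to come back after the restart, polling for the socket with stat rather than trusting systemctl's exit code. A minimal sketch of that wait loop (path and timeout from the log; the poll interval is an assumption):

    // Sketch: wait for /run/containerd/containerd.sock to appear,
    // mirroring "Will wait 60s for socket path".
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil // socket exists; containerd is (at least) listening
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
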
	I0916 10:45:07.495997   66415 start.go:563] Will wait 60s for crictl version
	I0916 10:45:07.496058   66415 ssh_runner.go:195] Run: which crictl
	I0916 10:45:07.499621   66415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:45:07.530979   66415 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:45:07.531037   66415 ssh_runner.go:195] Run: containerd --version
	I0916 10:45:07.553670   66415 ssh_runner.go:195] Run: containerd --version
	I0916 10:45:07.577751   66415 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:45:07.578947   66415 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:45:07.580384   66415 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 10:45:07.581546   66415 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:45:07.599279   66415 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:45:07.602751   66415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:45:07.613228   66415 mustload.go:65] Loading cluster: ha-770465
	I0916 10:45:07.613453   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:45:07.613660   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:45:07.631284   66415 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:45:07.631559   66415 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465 for IP: 192.168.49.4
	I0916 10:45:07.631571   66415 certs.go:194] generating shared ca certs ...
	I0916 10:45:07.631585   66415 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:45:07.631691   66415 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:45:07.631726   66415 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:45:07.631732   66415 certs.go:256] generating profile certs ...
	I0916 10:45:07.631852   66415 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key
	I0916 10:45:07.631878   66415 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.0a02bdd9
	I0916 10:45:07.631890   66415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.0a02bdd9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 10:45:07.870795   66415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.0a02bdd9 ...
	I0916 10:45:07.870830   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.0a02bdd9: {Name:mka449d4a69b81e5b7f938f495ca4fdede03c234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:45:07.871041   66415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.0a02bdd9 ...
	I0916 10:45:07.871058   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.0a02bdd9: {Name:mkc376f567171135c13f12509ad123c34cd9ac74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:45:07.871130   66415 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.0a02bdd9 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt
	I0916 10:45:07.871273   66415 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.0a02bdd9 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key
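
The profile cert generated here is what makes the HA join work: the apiserver serving certificate carries every control-plane IP plus the VIP 192.168.49.254 as IP SANs, so clients can validate TLS against whichever endpoint answers. A self-signed sketch with the same SAN list (minikube actually signs with its cluster CA; error handling is trimmed for brevity):

    // Sketch: issue a server cert whose IP SANs match the log's list.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// IP SANs from the log: service VIP, loopback, node IPs, HA VIP.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"),
    			net.ParseIP("192.168.49.4"), net.ParseIP("192.168.49.254"),
    		},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
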
	I0916 10:45:07.871404   66415 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key
	I0916 10:45:07.871418   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:45:07.871431   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:45:07.871447   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:45:07.871460   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:45:07.871473   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:45:07.871487   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:45:07.871499   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:45:07.871514   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:45:07.871567   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:45:07.871593   66415 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:45:07.871602   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:45:07.871626   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:45:07.871649   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:45:07.871669   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:45:07.871704   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:45:07.871729   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:07.871759   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:45:07.871769   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:45:07.871812   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:45:07.888778   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:45:07.980075   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:45:07.984043   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:45:07.997143   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:45:08.000621   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 10:45:08.012919   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:45:08.016032   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:45:08.027457   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:45:08.030609   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 10:45:08.041790   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:45:08.044902   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:45:08.056504   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:45:08.059865   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:45:08.071499   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:45:08.094547   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:45:08.116716   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:45:08.138495   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:45:08.160346   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 10:45:08.182469   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:45:08.204661   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:45:08.226629   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:45:08.250352   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:45:08.272717   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:45:08.296202   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:45:08.320615   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:45:08.336913   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 10:45:08.352727   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:45:08.369340   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 10:45:08.386394   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:45:08.404496   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:45:08.422422   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
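
The "scp memory -->" lines are not the scp binary at work: ssh_runner streams in-memory bytes over an SSH session instead of staging temp files. A rough sketch of that pattern (assumes golang.org/x/crypto/ssh; the user and port come from the sshutil line above, and the key auth setup is elided):

    // Sketch: copy an in-memory byte slice to a remote path over SSH.
    package main

    import (
    	"bytes"
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    func copyBytes(client *ssh.Client, data []byte, remotePath string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	// "tee > /dev/null" writes stdin to the path without an scp binary remotely.
    	return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", remotePath))
    }

    func main() {
    	cfg := &ssh.ClientConfig{
    		User:            "docker", // username from the log's sshutil line
    		Auth:            []ssh.AuthMethod{ /* key auth elided in this sketch */ },
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32788", cfg) // port from the log
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer client.Close()
    	fmt.Println(copyBytes(client, []byte("example"), "/tmp/example"))
    }
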
	I0916 10:45:08.440137   66415 ssh_runner.go:195] Run: openssl version
	I0916 10:45:08.445569   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:45:08.454324   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:45:08.457572   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:45:08.457621   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:45:08.463846   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 10:45:08.473094   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:45:08.482669   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:45:08.486051   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:45:08.486121   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:45:08.492744   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:45:08.501979   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:45:08.510762   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:08.513979   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:08.514041   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:08.521011   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
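
The symlink dance above is OpenSSL's hashed-directory lookup: each CA in /etc/ssl/certs needs a "<subject-hash>.0" link (51391683, 3ec20f2e, b5213941 in this run) or verification will not find it. A sketch creating one such link, shelling out to openssl for the hash exactly as the log does:

    // Sketch: compute a cert's subject hash and create the hashed symlink.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	os.Remove(link) // equivalent in effect to the log's "test -L ... || ln -fs"
    	if err := os.Symlink(cert, link); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println(link, "->", cert)
    }
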
	I0916 10:45:08.530448   66415 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:45:08.533627   66415 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:45:08.533677   66415 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.31.1 containerd true true} ...
	I0916 10:45:08.533755   66415 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-770465-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:45:08.533783   66415 kube-vip.go:115] generating kube-vip config ...
	I0916 10:45:08.533820   66415 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:45:08.545954   66415 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:45:08.546042   66415 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
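
Note the fallback a few lines up: because "lsmod | grep ip_vs" found nothing, the generated kube-vip manifest relies on ARP-announced VIP failover with leader election (vip_arp: "true", lease plndr-cp-lock) and IPVS-based control-plane load balancing is skipped. A sketch of that probe, reading /proc/modules directly (lsmod reads the same file):

    // Sketch: check whether any ip_vs kernel module is loaded.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		if strings.HasPrefix(sc.Text(), "ip_vs") {
    			fmt.Println("ip_vs present: IPVS load balancing is possible")
    			return
    		}
    	}
    	fmt.Println("ip_vs missing: ARP-only VIP failover, as this run falls back to")
    }
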
	I0916 10:45:08.546098   66415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:45:08.554713   66415 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:45:08.554780   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:45:08.563181   66415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:45:08.579611   66415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:45:08.596526   66415 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:45:08.612985   66415 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:45:08.616443   66415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:45:08.626212   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:45:08.705912   66415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:45:08.718211   66415 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:45:08.718482   66415 start.go:317] joinCluster: &{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:45:08.718627   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:45:08.718682   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:45:08.737887   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:45:08.880844   66415 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:45:08.880899   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token npdktg.b5hiz94b3qw4i8jd --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-770465-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 10:45:13.725094   66415 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token npdktg.b5hiz94b3qw4i8jd --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-770465-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (4.844166666s)
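
The join itself is the standard two-step: mint a join command on an existing control-plane node with "kubeadm token create --print-join-command --ttl=0", then replay it on the new node with the control-plane and advertise-address flags appended. A sketch that derives (but only prints) the final command; the flag values are the ones from the log:

    // Sketch: build the HA join command the way the log assembles it.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("sudo", "kubeadm", "token", "create",
    		"--print-join-command", "--ttl=0").Output()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	join := strings.TrimSpace(string(out)) +
    		" --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
    	fmt.Println("would run on the new node:", join)
    }
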
	I0916 10:45:13.725176   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:45:14.542159   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-770465-m03 minikube.k8s.io/updated_at=2024_09_16T10_45_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-770465 minikube.k8s.io/primary=false
	I0916 10:45:14.615336   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-770465-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 10:45:14.711922   66415 start.go:319] duration metric: took 5.993439292s to joinCluster
	I0916 10:45:14.712001   66415 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:45:14.712310   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:45:14.713916   66415 out.go:177] * Verifying Kubernetes components...
	I0916 10:45:14.715231   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:45:15.139449   66415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:45:15.225571   66415 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:45:15.225922   66415 kapi.go:59] client config for ha-770465: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:45:15.226013   66415 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 10:45:15.226288   66415 node_ready.go:35] waiting up to 6m0s for node "ha-770465-m03" to be "Ready" ...
	I0916 10:45:15.226394   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:15.226406   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.226415   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.226425   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.229541   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:45:15.230483   66415 node_ready.go:49] node "ha-770465-m03" has status "Ready":"True"
	I0916 10:45:15.230502   66415 node_ready.go:38] duration metric: took 4.188874ms for node "ha-770465-m03" to be "Ready" ...
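
The block of raw GETs that follows is a hand-rolled readiness poll: fetch the Node (and later each system pod), inspect the Ready condition, sleep, repeat, for up to 6m0s. The same loop with client-go's typed API (assumes k8s.io/client-go; the kubeconfig path is hypothetical):

    // Sketch: poll a Node's Ready condition until true or timeout.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	for i := 0; i < 720; i++ { // up to 6m, matching the log's timeout
    		if ok, _ := nodeReady(cs, "ha-770465-m03"); ok {
    			fmt.Println("node Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out")
    }
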
	I0916 10:45:15.230513   66415 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:45:15.230599   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:45:15.230615   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.230626   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.230632   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.237022   66415 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:45:15.246945   66415 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.247073   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:45:15.247085   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.247095   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.247104   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.250134   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:45:15.250989   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:15.251008   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.251019   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.251028   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.253409   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.253985   66415 pod_ready.go:93] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:15.254008   66415 pod_ready.go:82] duration metric: took 7.029652ms for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.254020   66415 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.254109   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:45:15.254118   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.254127   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.254134   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.256650   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.257327   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:15.257343   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.257350   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.257354   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.259587   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.260200   66415 pod_ready.go:93] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:15.260224   66415 pod_ready.go:82] duration metric: took 6.194306ms for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.260238   66415 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.260308   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465
	I0916 10:45:15.260317   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.260327   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.260334   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.262540   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.263070   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:15.263083   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.263090   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.263094   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.265480   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.265966   66415 pod_ready.go:93] pod "etcd-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:15.265986   66415 pod_ready.go:82] duration metric: took 5.740232ms for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.265996   66415 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.266050   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:45:15.266057   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.266064   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.266070   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.268454   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.268978   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:45:15.268990   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.268996   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.268999   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.271198   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.271640   66415 pod_ready.go:93] pod "etcd-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:15.271658   66415 pod_ready.go:82] duration metric: took 5.655922ms for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.271667   66415 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.426937   66415 request.go:632] Waited for 155.196467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:15.427080   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:15.427108   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.427122   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.427129   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.430137   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.627010   66415 request.go:632] Waited for 196.158788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:15.627088   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:15.627098   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.627109   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.627117   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.630187   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
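
The "Waited ... due to client-side throttling" lines are client-go's token-bucket limiter, not API-server priority and fairness: the rest.Config dump earlier shows QPS:0, Burst:0, which client-go defaults to 5 QPS with a burst of 10, so paired pod+node GETs in a tight loop queue up. If the waits mattered, raising the limits is a one-liner (sketch; the kubeconfig path is hypothetical):

    // Sketch: raise client-go's client-side rate limits before building the client.
    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
    	if err != nil {
    		panic(err)
    	}
    	cfg.QPS = 50    // steady-state requests per second before throttling
    	cfg.Burst = 100 // short bursts allowed above QPS
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Printf("client ready: %T\n", cs)
    }
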
	I0916 10:45:15.826933   66415 request.go:632] Waited for 54.206651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:15.826999   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:15.827012   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.827022   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.827029   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.830062   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:45:16.027142   66415 request.go:632] Waited for 196.329602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:16.027217   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:16.027225   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:16.027235   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:16.027243   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:16.030149   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:16.272870   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:16.272894   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:16.272906   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:16.272911   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:16.275317   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:16.427393   66415 request.go:632] Waited for 151.30038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:16.427480   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:16.427489   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:16.427500   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:16.427512   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:16.430200   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:16.771917   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:16.771950   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:16.771963   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:16.771972   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:16.774755   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:16.826490   66415 request.go:632] Waited for 51.134782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:16.826565   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:16.826576   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:16.826585   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:16.826591   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:16.829568   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:17.272851   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:17.272872   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:17.272880   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:17.272885   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:17.275345   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:17.275923   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:17.275941   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:17.275951   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:17.275958   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:17.278128   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:17.278630   66415 pod_ready.go:103] pod "etcd-ha-770465-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:45:17.771978   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:17.772001   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:17.772008   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:17.772013   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:17.774837   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:17.775571   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:17.775591   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:17.775603   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:17.775608   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:17.777858   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:18.272690   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:18.272712   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:18.272724   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:18.272729   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:18.275206   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:18.275856   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:18.275870   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:18.275877   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:18.275881   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:18.278015   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:18.771887   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:18.771909   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:18.771918   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:18.771922   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:18.774768   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:18.775310   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:18.775329   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:18.775339   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:18.775346   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:18.777552   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:19.272522   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:19.272551   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:19.272564   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:19.272570   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:19.275540   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:19.276295   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:19.276316   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:19.276324   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:19.276333   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:19.278572   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:19.279103   66415 pod_ready.go:103] pod "etcd-ha-770465-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:45:19.772499   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:19.772517   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:19.772523   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:19.772535   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:19.775612   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:45:19.776430   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:19.776453   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:19.776463   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:19.776470   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:19.779064   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:20.271907   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:20.271930   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:20.271938   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:20.271943   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:20.274632   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:20.275259   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:20.275276   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:20.275283   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:20.275289   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:20.277589   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:20.772052   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:20.772074   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:20.772082   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:20.772087   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:20.774878   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:20.775464   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:20.775480   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:20.775487   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:20.775492   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:20.778228   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:21.271926   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:21.271950   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:21.271959   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:21.271965   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:21.274684   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:21.275255   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:21.275271   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:21.275279   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:21.275285   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:21.277593   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:21.772465   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:21.772485   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:21.772493   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:21.772497   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:21.775269   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:21.775942   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:21.775959   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:21.775973   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:21.775979   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:21.778399   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:21.778887   66415 pod_ready.go:103] pod "etcd-ha-770465-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:45:22.272405   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:22.272426   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:22.272433   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:22.272438   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:22.275089   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:22.275678   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:22.275694   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:22.275701   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:22.275705   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:22.277906   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:22.772807   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:22.772828   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:22.772836   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:22.772841   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:22.775792   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:22.777011   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:22.777038   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:22.777049   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:22.777056   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:22.780076   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:23.271941   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:23.271964   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:23.271975   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:23.271981   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:23.274664   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:23.275231   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:23.275248   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:23.275258   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:23.275268   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:23.277763   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:23.772654   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:23.772676   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:23.772684   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:23.772689   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:23.775526   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:23.776159   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:23.776181   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:23.776191   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:23.776195   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:23.778660   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:23.779120   66415 pod_ready.go:103] pod "etcd-ha-770465-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:45:24.272092   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:24.272114   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:24.272121   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:24.272126   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:24.274925   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:24.275482   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:24.275499   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:24.275507   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:24.275510   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:24.277858   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:24.772790   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:24.772817   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:24.772827   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:24.772831   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:24.775707   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:24.776499   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:24.776522   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:24.776533   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:24.776540   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:24.779240   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:25.272689   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:25.272714   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:25.272726   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:25.272733   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:25.275511   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:25.276148   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:25.276165   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:25.276172   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:25.276176   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:25.278596   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:25.772446   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:25.772466   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:25.772474   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:25.772486   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:25.775323   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:25.776034   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:25.776052   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:25.776060   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:25.776065   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:25.778529   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:26.272443   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:26.272463   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.272470   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.272475   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.275036   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:26.275595   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:26.275610   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.275617   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.275620   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.277833   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:26.278251   66415 pod_ready.go:93] pod "etcd-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:26.278269   66415 pod_ready.go:82] duration metric: took 11.006595583s for pod "etcd-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
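The repeated GET pairs above (the pod, then its node, roughly every 500ms) are minikube's readiness poll: fetch the pod, check its Ready condition, and retry until it flips to True or the 6m0s budget runs out. Below is a minimal client-go sketch of the same pattern; the kubeconfig loading, the `waitForPodReady` helper name, and the hard-coded pod name are illustrative assumptions, not minikube's actual pod_ready code.

```go
// Sketch of the readiness poll driving the log lines above: fetch the pod,
// inspect its Ready condition, and retry on an interval until a timeout.
// Minimal illustration with client-go; names and setup here are assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodReady(ctx context.Context, client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodReady(context.Background(), client, "kube-system", "etcd-ha-770465-m03", 6*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}
```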
	I0916 10:45:26.278286   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:26.278342   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465
	I0916 10:45:26.278350   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.278356   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.278359   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.280281   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:26.280784   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:26.280797   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.280804   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.280808   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.282725   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:26.283250   66415 pod_ready.go:93] pod "kube-apiserver-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:26.283271   66415 pod_ready.go:82] duration metric: took 4.977851ms for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:26.283284   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:26.283357   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m02
	I0916 10:45:26.283366   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.283374   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.283377   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.285562   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:26.286101   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:45:26.286113   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.286120   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.286124   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.288170   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:26.288650   66415 pod_ready.go:93] pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:26.288665   66415 pod_ready.go:82] duration metric: took 5.373681ms for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:26.288673   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:26.288719   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:26.288726   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.288733   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.288738   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.290631   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:26.291287   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:26.291306   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.291313   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.291316   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.293057   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:26.788914   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:26.788935   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.788942   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.788947   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.791615   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:26.792370   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:26.792390   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.792401   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.792406   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.794640   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:27.289498   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:27.289516   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:27.289524   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:27.289528   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:27.292303   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:27.293030   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:27.293049   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:27.293059   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:27.293064   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:27.295163   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:27.789322   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:27.789358   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:27.789368   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:27.789374   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:27.791953   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:27.792673   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:27.792689   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:27.792697   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:27.792702   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:27.794817   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:28.289565   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:28.289588   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:28.289598   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:28.289603   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:28.292575   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:28.293195   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:28.293211   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:28.293219   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:28.293227   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:28.295508   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:28.295922   66415 pod_ready.go:103] pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:45:28.788946   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:28.788971   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:28.788982   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:28.788987   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:28.791615   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:28.792241   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:28.792257   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:28.792264   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:28.792269   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:28.794388   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:29.289233   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:29.289253   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:29.289276   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:29.289281   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:29.292248   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:29.292926   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:29.292943   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:29.292949   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:29.292954   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:29.294987   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:29.789890   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:29.789915   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:29.789927   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:29.789935   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:29.792751   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:29.793493   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:29.793509   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:29.793527   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:29.793529   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:29.795945   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:30.289433   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:30.289455   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:30.289464   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:30.289469   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:30.292290   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:30.292866   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:30.292880   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:30.292887   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:30.292891   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:30.294819   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:30.789677   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:30.789698   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:30.789706   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:30.789710   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:30.792471   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:30.793161   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:30.793177   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:30.793185   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:30.793188   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:30.795638   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:30.796173   66415 pod_ready.go:103] pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:45:31.289587   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:31.289612   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.289622   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.289626   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.292485   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.293147   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:31.293160   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.293166   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.293172   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.295303   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.789031   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:31.789055   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.789067   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.789072   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.791647   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.792326   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:31.792343   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.792350   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.792353   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.794506   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.794877   66415 pod_ready.go:93] pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:31.794896   66415 pod_ready.go:82] duration metric: took 5.506214149s for pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.794905   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.794961   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465
	I0916 10:45:31.794969   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.794979   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.794986   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.797071   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.797642   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:31.797656   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.797663   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.797666   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.799614   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:31.800062   66415 pod_ready.go:93] pod "kube-controller-manager-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:31.800079   66415 pod_ready.go:82] duration metric: took 5.168498ms for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.800089   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.800139   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m02
	I0916 10:45:31.800148   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.800158   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.800165   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.802091   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:31.802812   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:45:31.802825   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.802832   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.802836   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.807180   66415 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:45:31.807666   66415 pod_ready.go:93] pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:31.807682   66415 pod_ready.go:82] duration metric: took 7.588075ms for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.807692   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.807799   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m03
	I0916 10:45:31.807810   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.807820   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.807831   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.809946   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.810500   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:31.810517   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.810526   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.810531   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.812555   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.813045   66415 pod_ready.go:93] pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:31.813063   66415 pod_ready.go:82] duration metric: took 5.364715ms for pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.813073   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.813125   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:45:31.813132   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.813139   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.813146   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.815060   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:31.872914   66415 request.go:632] Waited for 57.265145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:45:31.872977   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:45:31.872984   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.872998   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.873006   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.875763   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.876209   66415 pod_ready.go:93] pod "kube-proxy-4qgcs" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:31.876228   66415 pod_ready.go:82] duration metric: took 63.14631ms for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
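The "Waited ... due to client-side throttling, not priority and fairness" lines that start appearing here come from client-go's local token-bucket rate limiter, which is distinct from the server-side API Priority and Fairness feature the message rules out. The limiter is governed by the QPS and Burst fields on rest.Config; a minimal sketch, with illustrative values rather than minikube's actual settings:

```go
// Minimal sketch: the "client-side throttling" delays in the log come from
// client-go's token-bucket rate limiter. QPS and Burst on rest.Config control
// it; the values below are examples, not what minikube configures.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests/second when left unset
	cfg.Burst = 100 // default burst is 10 when left unset
	_ = kubernetes.NewForConfigOrDie(cfg)
}
```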
	I0916 10:45:31.876238   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:32.072597   66415 request.go:632] Waited for 196.279835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:45:32.072669   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:45:32.072676   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:32.072685   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:32.072693   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:32.075391   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:32.272945   66415 request.go:632] Waited for 196.824277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:32.273006   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:32.273013   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:32.273024   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:32.273034   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:32.275911   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:32.276399   66415 pod_ready.go:93] pod "kube-proxy-gd2mt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:32.276417   66415 pod_ready.go:82] duration metric: took 400.172027ms for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:32.276428   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qlspc" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:32.473481   66415 request.go:632] Waited for 196.973475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlspc
	I0916 10:45:32.473557   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlspc
	I0916 10:45:32.473564   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:32.473575   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:32.473589   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:32.476400   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:32.673476   66415 request.go:632] Waited for 196.380614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:32.673535   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:32.673540   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:32.673547   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:32.673554   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:32.676386   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:32.676834   66415 pod_ready.go:93] pod "kube-proxy-qlspc" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:32.676854   66415 pod_ready.go:82] duration metric: took 400.419202ms for pod "kube-proxy-qlspc" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:32.676863   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:32.873017   66415 request.go:632] Waited for 196.092276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:45:32.873081   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:45:32.873088   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:32.873096   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:32.873106   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:32.875939   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:33.072916   66415 request.go:632] Waited for 196.185471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:33.072974   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:33.072979   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:33.072986   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:33.072993   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:33.075478   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:33.076046   66415 pod_ready.go:93] pod "kube-scheduler-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:33.076068   66415 pod_ready.go:82] duration metric: took 399.198084ms for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:33.076082   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:33.272926   66415 request.go:632] Waited for 196.751102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m02
	I0916 10:45:33.272985   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m02
	I0916 10:45:33.272991   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:33.273000   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:33.273007   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:33.275772   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:33.472542   66415 request.go:632] Waited for 196.275401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:45:33.472618   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:45:33.472624   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:33.472631   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:33.472635   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:33.475553   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:33.476139   66415 pod_ready.go:93] pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:33.476159   66415 pod_ready.go:82] duration metric: took 400.066183ms for pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:33.476170   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:33.673295   66415 request.go:632] Waited for 197.05213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m03
	I0916 10:45:33.673387   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m03
	I0916 10:45:33.673394   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:33.673401   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:33.673407   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:33.676250   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:33.872868   66415 request.go:632] Waited for 196.005771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:33.872919   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:33.872924   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:33.872931   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:33.872935   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:33.875690   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:33.876144   66415 pod_ready.go:93] pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:33.876162   66415 pod_ready.go:82] duration metric: took 399.984234ms for pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:33.876172   66415 pod_ready.go:39] duration metric: took 18.645648206s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:45:33.876184   66415 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:45:33.876239   66415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:33.886742   66415 api_server.go:72] duration metric: took 19.17471158s to wait for apiserver process to appear ...
	I0916 10:45:33.886763   66415 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:45:33.886784   66415 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:45:33.890485   66415 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
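The healthz probe logged just above is a plain HTTPS GET that passes on a 200 response with body "ok". A standalone sketch of the same check, with TLS verification skipped purely to keep the example short (a real check would trust the cluster CA instead):

```go
// Sketch of the /healthz probe above: a GET that succeeds on HTTP 200 with
// body "ok". InsecureSkipVerify is a shortcut for illustration only.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```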
	I0916 10:45:33.890550   66415 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0916 10:45:33.890558   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:33.890566   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:33.890573   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:33.891303   66415 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:45:33.891376   66415 api_server.go:141] control plane version: v1.31.1
	I0916 10:45:33.891394   66415 api_server.go:131] duration metric: took 4.624477ms to wait for apiserver health ...
	I0916 10:45:33.891407   66415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:45:34.072867   66415 request.go:632] Waited for 181.36247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:45:34.072931   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:45:34.072938   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:34.072949   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:34.072959   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:34.078891   66415 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:45:34.085918   66415 system_pods.go:59] 24 kube-system pods found
	I0916 10:45:34.085947   66415 system_pods.go:61] "coredns-7c65d6cfc9-9lw9q" [4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8] Running
	I0916 10:45:34.085952   66415 system_pods.go:61] "coredns-7c65d6cfc9-sbs22" [89925692-76b4-481f-bac7-16f06bea792a] Running
	I0916 10:45:34.085957   66415 system_pods.go:61] "etcd-ha-770465" [041e8a27-b6a3-4201-990f-c367da8c5eb5] Running
	I0916 10:45:34.085963   66415 system_pods.go:61] "etcd-ha-770465-m02" [1f922c60-bf68-44df-aa8c-42dce50e4dac] Running
	I0916 10:45:34.085969   66415 system_pods.go:61] "etcd-ha-770465-m03" [876ca26f-ca18-47df-9bfb-302321d66136] Running
	I0916 10:45:34.085974   66415 system_pods.go:61] "kindnet-66kfj" [9cd2abf1-a30e-40f4-af5c-59d50e367b36] Running
	I0916 10:45:34.085978   66415 system_pods.go:61] "kindnet-grjh8" [98658171-1486-44a9-8a20-0d77ea019206] Running
	I0916 10:45:34.085983   66415 system_pods.go:61] "kindnet-kht59" [3124d723-6fc6-4dce-9ecf-bb40b4665208] Running
	I0916 10:45:34.085988   66415 system_pods.go:61] "kube-apiserver-ha-770465" [aa8ba784-661f-4074-b0d3-f98cddf8ce8c] Running
	I0916 10:45:34.085995   66415 system_pods.go:61] "kube-apiserver-ha-770465-m02" [7acdd845-890e-449a-b112-2efbc19a7ef5] Running
	I0916 10:45:34.086000   66415 system_pods.go:61] "kube-apiserver-ha-770465-m03" [8ab3c55a-d726-456c-803b-e13c1032b13a] Running
	I0916 10:45:34.086003   66415 system_pods.go:61] "kube-controller-manager-ha-770465" [f0a59ee1-bf5e-432d-a185-f9497c75ada4] Running
	I0916 10:45:34.086013   66415 system_pods.go:61] "kube-controller-manager-ha-770465-m02" [55d7421b-5679-416d-a6ae-c36738c1ec32] Running
	I0916 10:45:34.086016   66415 system_pods.go:61] "kube-controller-manager-ha-770465-m03" [37613674-2ba4-4425-824a-d00d16ea7b62] Running
	I0916 10:45:34.086020   66415 system_pods.go:61] "kube-proxy-4qgcs" [0dbed632-c44c-4a17-8486-b328e2cc41d3] Running
	I0916 10:45:34.086023   66415 system_pods.go:61] "kube-proxy-gd2mt" [fc3bf04d-635a-4264-883b-2fd72cac2e24] Running
	I0916 10:45:34.086026   66415 system_pods.go:61] "kube-proxy-qlspc" [703b3f60-1294-49fd-aab1-35474b771351] Running
	I0916 10:45:34.086029   66415 system_pods.go:61] "kube-scheduler-ha-770465" [85f95d4d-a2d4-45cf-8afe-be446333b9d0] Running
	I0916 10:45:34.086032   66415 system_pods.go:61] "kube-scheduler-ha-770465-m02" [ec894fe5-a992-49f2-a96f-caab3848f7dd] Running
	I0916 10:45:34.086035   66415 system_pods.go:61] "kube-scheduler-ha-770465-m03" [0c6e312a-4eca-4ca3-b0d9-d70f7bf5087c] Running
	I0916 10:45:34.086038   66415 system_pods.go:61] "kube-vip-ha-770465" [8fe17ef3-e8ea-42a9-bfd4-5f556d0a3f77] Running
	I0916 10:45:34.086041   66415 system_pods.go:61] "kube-vip-ha-770465-m02" [ffd7ee7b-efe1-4d2d-b2f8-5fd898de7dbf] Running
	I0916 10:45:34.086044   66415 system_pods.go:61] "kube-vip-ha-770465-m03" [408226b8-b640-4b83-bd7e-a61b37724197] Running
	I0916 10:45:34.086047   66415 system_pods.go:61] "storage-provisioner" [cf470925-4874-4744-8015-700e93ab924f] Running
	I0916 10:45:34.086052   66415 system_pods.go:74] duration metric: took 194.637339ms to wait for pod list to return data ...
	I0916 10:45:34.086061   66415 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:45:34.273409   66415 request.go:632] Waited for 187.276734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:45:34.273465   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:45:34.273470   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:34.273479   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:34.273483   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:34.276479   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:34.276588   66415 default_sa.go:45] found service account: "default"
	I0916 10:45:34.276602   66415 default_sa.go:55] duration metric: took 190.535855ms for default service account to be created ...
	I0916 10:45:34.276611   66415 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:45:34.472907   66415 request.go:632] Waited for 196.233603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:45:34.472963   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:45:34.472968   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:34.472976   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:34.472983   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:34.478381   66415 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:45:34.484510   66415 system_pods.go:86] 24 kube-system pods found
	I0916 10:45:34.484539   66415 system_pods.go:89] "coredns-7c65d6cfc9-9lw9q" [4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8] Running
	I0916 10:45:34.484545   66415 system_pods.go:89] "coredns-7c65d6cfc9-sbs22" [89925692-76b4-481f-bac7-16f06bea792a] Running
	I0916 10:45:34.484549   66415 system_pods.go:89] "etcd-ha-770465" [041e8a27-b6a3-4201-990f-c367da8c5eb5] Running
	I0916 10:45:34.484553   66415 system_pods.go:89] "etcd-ha-770465-m02" [1f922c60-bf68-44df-aa8c-42dce50e4dac] Running
	I0916 10:45:34.484557   66415 system_pods.go:89] "etcd-ha-770465-m03" [876ca26f-ca18-47df-9bfb-302321d66136] Running
	I0916 10:45:34.484560   66415 system_pods.go:89] "kindnet-66kfj" [9cd2abf1-a30e-40f4-af5c-59d50e367b36] Running
	I0916 10:45:34.484564   66415 system_pods.go:89] "kindnet-grjh8" [98658171-1486-44a9-8a20-0d77ea019206] Running
	I0916 10:45:34.484567   66415 system_pods.go:89] "kindnet-kht59" [3124d723-6fc6-4dce-9ecf-bb40b4665208] Running
	I0916 10:45:34.484571   66415 system_pods.go:89] "kube-apiserver-ha-770465" [aa8ba784-661f-4074-b0d3-f98cddf8ce8c] Running
	I0916 10:45:34.484576   66415 system_pods.go:89] "kube-apiserver-ha-770465-m02" [7acdd845-890e-449a-b112-2efbc19a7ef5] Running
	I0916 10:45:34.484583   66415 system_pods.go:89] "kube-apiserver-ha-770465-m03" [8ab3c55a-d726-456c-803b-e13c1032b13a] Running
	I0916 10:45:34.484587   66415 system_pods.go:89] "kube-controller-manager-ha-770465" [f0a59ee1-bf5e-432d-a185-f9497c75ada4] Running
	I0916 10:45:34.484594   66415 system_pods.go:89] "kube-controller-manager-ha-770465-m02" [55d7421b-5679-416d-a6ae-c36738c1ec32] Running
	I0916 10:45:34.484597   66415 system_pods.go:89] "kube-controller-manager-ha-770465-m03" [37613674-2ba4-4425-824a-d00d16ea7b62] Running
	I0916 10:45:34.484604   66415 system_pods.go:89] "kube-proxy-4qgcs" [0dbed632-c44c-4a17-8486-b328e2cc41d3] Running
	I0916 10:45:34.484608   66415 system_pods.go:89] "kube-proxy-gd2mt" [fc3bf04d-635a-4264-883b-2fd72cac2e24] Running
	I0916 10:45:34.484613   66415 system_pods.go:89] "kube-proxy-qlspc" [703b3f60-1294-49fd-aab1-35474b771351] Running
	I0916 10:45:34.484617   66415 system_pods.go:89] "kube-scheduler-ha-770465" [85f95d4d-a2d4-45cf-8afe-be446333b9d0] Running
	I0916 10:45:34.484623   66415 system_pods.go:89] "kube-scheduler-ha-770465-m02" [ec894fe5-a992-49f2-a96f-caab3848f7dd] Running
	I0916 10:45:34.484627   66415 system_pods.go:89] "kube-scheduler-ha-770465-m03" [0c6e312a-4eca-4ca3-b0d9-d70f7bf5087c] Running
	I0916 10:45:34.484630   66415 system_pods.go:89] "kube-vip-ha-770465" [8fe17ef3-e8ea-42a9-bfd4-5f556d0a3f77] Running
	I0916 10:45:34.484633   66415 system_pods.go:89] "kube-vip-ha-770465-m02" [ffd7ee7b-efe1-4d2d-b2f8-5fd898de7dbf] Running
	I0916 10:45:34.484638   66415 system_pods.go:89] "kube-vip-ha-770465-m03" [408226b8-b640-4b83-bd7e-a61b37724197] Running
	I0916 10:45:34.484641   66415 system_pods.go:89] "storage-provisioner" [cf470925-4874-4744-8015-700e93ab924f] Running
	I0916 10:45:34.484647   66415 system_pods.go:126] duration metric: took 208.029152ms to wait for k8s-apps to be running ...
	I0916 10:45:34.484655   66415 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:45:34.484697   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:34.495613   66415 system_svc.go:56] duration metric: took 10.94482ms WaitForService to wait for kubelet
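The kubelet check shells out to systemctl, where `is-active --quiet` answers through its exit code alone: 0 means the unit is active. A local sketch of the same idea (minikube runs the equivalent over SSH inside the node; the unit name is simplified here):

```go
// Sketch: `systemctl is-active --quiet <unit>` prints nothing and exits 0
// only when the unit is active, so the exit code is the whole answer.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
```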
	I0916 10:45:34.495647   66415 kubeadm.go:582] duration metric: took 19.78361955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:45:34.495666   66415 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:45:34.672922   66415 request.go:632] Waited for 177.188345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:45:34.672994   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:45:34.673000   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:34.673007   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:34.673014   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:34.675880   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:34.676738   66415 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:45:34.676757   66415 node_conditions.go:123] node cpu capacity is 8
	I0916 10:45:34.676769   66415 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:45:34.676775   66415 node_conditions.go:123] node cpu capacity is 8
	I0916 10:45:34.676780   66415 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:45:34.676785   66415 node_conditions.go:123] node cpu capacity is 8
	I0916 10:45:34.676793   66415 node_conditions.go:105] duration metric: took 181.121718ms to run NodePressure ...
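The NodePressure step lists every node and reads its capacity; the three identical cpu/ephemeral-storage pairs above are the cluster's three nodes. A sketch that prints the same fields from status.capacity, reusing the assumed clientset setup from the earlier sketches:

```go
// Sketch of the capacity readout above: list the nodes and print each node's
// CPU and ephemeral-storage capacity from status.capacity.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```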
	I0916 10:45:34.676807   66415 start.go:241] waiting for startup goroutines ...
	I0916 10:45:34.676830   66415 start.go:255] writing updated cluster config ...
	I0916 10:45:34.677124   66415 ssh_runner.go:195] Run: rm -f paused
	I0916 10:45:34.683263   66415 out.go:177] * Done! kubectl is now configured to use "ha-770465" cluster and "default" namespace by default
	E0916 10:45:34.684495   66415 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
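The final E-level line is worth decoding: "exec format error" is the kernel's ENOEXEC, which almost always means the kubectl binary on the PATH was built for a different architecture than the host (it can also mean the file is not a valid executable at all). One quick way to inspect the binary's ELF machine type, sketched with the standard library:

```go
// "exec format error" (ENOEXEC) usually indicates an architecture mismatch.
// debug/elf exposes the binary's target machine so the mismatch is visible.
package main

import (
	"debug/elf"
	"fmt"
)

func main() {
	f, err := elf.Open("/usr/local/bin/kubectl")
	if err != nil {
		panic(err) // a non-ELF file would also surface here
	}
	defer f.Close()
	fmt.Println("machine:", f.Machine) // e.g. EM_X86_64 vs EM_AARCH64
}
```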
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e01ca3a0115c5       8c811b4aec35f       50 seconds ago       Running             busybox                   0                   55f666e26fe6c       busybox-7dff88458-845rc
	505568793f357       c69fa2e9cbf5f       About a minute ago   Running             coredns                   0                   1fd35ed82463b       coredns-7c65d6cfc9-sbs22
	120ff8a81efa1       c69fa2e9cbf5f       About a minute ago   Running             coredns                   0                   be59c99f1c75f       coredns-7c65d6cfc9-9lw9q
	ec0de017ccfa5       6e38f40d628db       2 minutes ago        Running             storage-provisioner       0                   f2ec4aec1e0b2       storage-provisioner
	b31c2d77265e3       12968670680f4       2 minutes ago        Running             kindnet-cni               0                   3fc06a79ff69e       kindnet-grjh8
	15571e99ab074       60c005f310ff3       2 minutes ago        Running             kube-proxy                0                   21353a9cca68d       kube-proxy-gd2mt
	75391807e9839       38af8ddebf499       2 minutes ago        Running             kube-vip                  0                   bbeb0c20f3069       kube-vip-ha-770465
	8b022d1d91205       2e96e5913fc06       2 minutes ago        Running             etcd                      0                   1e24ae4d4e2d8       etcd-ha-770465
	fc07020cd4841       9aa1fad941575       2 minutes ago        Running             kube-scheduler            0                   d47515013434a       kube-scheduler-ha-770465
	780f65ad6abab       175ffd71cce3d       2 minutes ago        Running             kube-controller-manager   0                   51746ddbcbea1       kube-controller-manager-ha-770465
	535bd4e938e3a       6bab7719df100       2 minutes ago        Running             kube-apiserver            0                   53fe88679ccf5       kube-apiserver-ha-770465
	
	
	==> containerd <==
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.662101077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.662117526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.662203637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.708041585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sbs22,Uid:89925692-76b4-481f-bac7-16f06bea792a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fd35ed82463bdeaed95b6c537cfd734fd3f5a191985667470b39c1feb3c143b\""
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.710652827Z" level=info msg="CreateContainer within sandbox \"1fd35ed82463bdeaed95b6c537cfd734fd3f5a191985667470b39c1feb3c143b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.722332652Z" level=info msg="CreateContainer within sandbox \"1fd35ed82463bdeaed95b6c537cfd734fd3f5a191985667470b39c1feb3c143b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"505568793f3574e96c0799007e1921a7d91c4ad3c8aeba5624a5d0c4a02e46d5\""
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.722879183Z" level=info msg="StartContainer for \"505568793f3574e96c0799007e1921a7d91c4ad3c8aeba5624a5d0c4a02e46d5\""
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.766136140Z" level=info msg="StartContainer for \"505568793f3574e96c0799007e1921a7d91c4ad3c8aeba5624a5d0c4a02e46d5\" returns successfully"
	Sep 16 10:45:36 ha-770465 containerd[863]: time="2024-09-16T10:45:36.214606138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7dff88458-845rc,Uid:d5a45010-f551-4f0c-bb3e-d70e2eed9df0,Namespace:default,Attempt:0,}"
	Sep 16 10:45:36 ha-770465 containerd[863]: time="2024-09-16T10:45:36.250670965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:45:36 ha-770465 containerd[863]: time="2024-09-16T10:45:36.250739768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:45:36 ha-770465 containerd[863]: time="2024-09-16T10:45:36.250751691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:45:36 ha-770465 containerd[863]: time="2024-09-16T10:45:36.250856205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:45:36 ha-770465 containerd[863]: time="2024-09-16T10:45:36.296227347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7dff88458-845rc,Uid:d5a45010-f551-4f0c-bb3e-d70e2eed9df0,Namespace:default,Attempt:0,} returns sandbox id \"55f666e26fe6ce338a9bd6c1802eafd533c1692af41e714ab63be449b882ad5b\""
	Sep 16 10:45:36 ha-770465 containerd[863]: time="2024-09-16T10:45:36.299315327Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.220119236Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.221190631Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.222543840Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.224559402Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.224888789Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 1.92552792s"
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.224923141Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.227034844Z" level=info msg="CreateContainer within sandbox \"55f666e26fe6ce338a9bd6c1802eafd533c1692af41e714ab63be449b882ad5b\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.239551777Z" level=info msg="CreateContainer within sandbox \"55f666e26fe6ce338a9bd6c1802eafd533c1692af41e714ab63be449b882ad5b\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"e01ca3a0115c593fb62c91c1fe233bb2dcacc8fba6d38a7be8e09dc401933a28\""
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.240116822Z" level=info msg="StartContainer for \"e01ca3a0115c593fb62c91c1fe233bb2dcacc8fba6d38a7be8e09dc401933a28\""
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.296790736Z" level=info msg="StartContainer for \"e01ca3a0115c593fb62c91c1fe233bb2dcacc8fba6d38a7be8e09dc401933a28\" returns successfully"
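
	The PullImage / CreateContainer / StartContainer sequence above can be driven against the same containerd socket that the node annotations point at (unix:///run/containerd/containerd.sock). A minimal Go sketch, assuming the github.com/containerd/containerd client library is available and the code runs with access to that socket; the "k8s.io" namespace is where the CRI plugin keeps its images:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/containerd/containerd"
		"github.com/containerd/containerd/namespaces"
	)

	func main() {
		// Connect to the same socket the kubelet's CRI traffic uses.
		client, err := containerd.New("/run/containerd/containerd.sock")
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		// CRI-managed images live in the "k8s.io" namespace.
		ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

		// Rough equivalent of the PullImage call logged above.
		image, err := client.Pull(ctx, "gcr.io/k8s-minikube/busybox:1.28", containerd.WithPullUnpack)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("pulled", image.Name(), "digest", image.Target().Digest)
	}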
	
	
	==> coredns [120ff8a81efa1183e1409d1cdb8fa5e1e7c675ebb3d0f165783c5512f48e07ce] <==
	[INFO] 127.0.0.1:47401 - 6102 "HINFO IN 7552043894687877427.7409354771220060933. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009655762s
	[INFO] 10.244.2.2:41874 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000284968s
	[INFO] 10.244.2.2:43872 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000938288s
	[INFO] 10.244.1.2:52261 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161563s
	[INFO] 10.244.1.2:56357 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001567449s
	[INFO] 10.244.1.2:42838 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000111184s
	[INFO] 10.244.1.2:53654 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001745214s
	[INFO] 10.244.0.4:53747 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.011812399s
	[INFO] 10.244.2.2:58497 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001353637s
	[INFO] 10.244.2.2:44119 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158419s
	[INFO] 10.244.1.2:54873 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164329s
	[INFO] 10.244.1.2:44900 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001619482s
	[INFO] 10.244.1.2:52029 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070813s
	[INFO] 10.244.0.4:56319 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144654s
	[INFO] 10.244.0.4:58425 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002097s
	[INFO] 10.244.2.2:50531 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233084s
	[INFO] 10.244.1.2:57721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200098s
	[INFO] 10.244.1.2:47494 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147603s
	[INFO] 10.244.1.2:55948 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104458s
	[INFO] 10.244.1.2:41737 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105046s
	[INFO] 10.244.0.4:56889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184697s
	[INFO] 10.244.0.4:58113 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000142403s
	[INFO] 10.244.2.2:46838 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183592s
	[INFO] 10.244.2.2:57080 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106185s
	[INFO] 10.244.1.2:47643 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156174s
	
	
	==> coredns [505568793f3574e96c0799007e1921a7d91c4ad3c8aeba5624a5d0c4a02e46d5] <==
	[INFO] 10.244.0.4:52021 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136338s
	[INFO] 10.244.0.4:55747 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112985s
	[INFO] 10.244.2.2:51737 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184765s
	[INFO] 10.244.2.2:53734 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001929846s
	[INFO] 10.244.2.2:48077 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125445s
	[INFO] 10.244.2.2:56941 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093993s
	[INFO] 10.244.2.2:53593 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010639s
	[INFO] 10.244.2.2:53291 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000714s
	[INFO] 10.244.1.2:54655 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185177s
	[INFO] 10.244.1.2:48932 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002062451s
	[INFO] 10.244.1.2:41866 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103063s
	[INFO] 10.244.1.2:51846 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082591s
	[INFO] 10.244.1.2:55756 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087775s
	[INFO] 10.244.0.4:55553 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098067s
	[INFO] 10.244.0.4:54433 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008689s
	[INFO] 10.244.2.2:46677 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019083s
	[INFO] 10.244.2.2:33741 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073821s
	[INFO] 10.244.2.2:54300 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115863s
	[INFO] 10.244.0.4:41373 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000182332s
	[INFO] 10.244.0.4:46249 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174562s
	[INFO] 10.244.2.2:53722 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107299s
	[INFO] 10.244.2.2:37649 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141192s
	[INFO] 10.244.1.2:47658 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179545s
	[INFO] 10.244.1.2:40089 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124796s
	[INFO] 10.244.1.2:58475 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130146s
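
	Lookups like the ones both coredns replicas log above can be reproduced from inside the cluster by pointing a resolver at the kube-dns ClusterIP (10.96.0.10, per the apiserver's "allocated clusterIPs" line further down). A minimal stdlib-only Go sketch; the assumption is that 10.96.0.10:53 is reachable from wherever this runs:

	package main

	import (
		"context"
		"fmt"
		"log"
		"net"
		"time"
	)

	func main() {
		// Resolve through the in-cluster DNS service rather than /etc/resolv.conf.
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "udp", "10.96.0.10:53")
			},
		}

		// Same qname as the "A IN kubernetes.default.svc.cluster.local." entries above.
		addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(addrs)
	}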
	
	
	==> describe nodes <==
	Name:               ha-770465
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-770465
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-770465
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_44_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:44:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-770465
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:46:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:45:52 +0000   Mon, 16 Sep 2024 10:44:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:45:52 +0000   Mon, 16 Sep 2024 10:44:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:45:52 +0000   Mon, 16 Sep 2024 10:44:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:45:52 +0000   Mon, 16 Sep 2024 10:44:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-770465
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 0ba1e4cf0f2047a2ba0924f2c23df268
	  System UUID:                f3656390-934b-423a-8190-9f78053eddee
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-845rc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 coredns-7c65d6cfc9-9lw9q             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m4s
	  kube-system                 coredns-7c65d6cfc9-sbs22             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m4s
	  kube-system                 etcd-ha-770465                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m11s
	  kube-system                 kindnet-grjh8                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m4s
	  kube-system                 kube-apiserver-ha-770465             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 kube-controller-manager-ha-770465    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-gd2mt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-scheduler-ha-770465             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-vip-ha-770465                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 2m3s  kube-proxy       
	  Normal   Starting                 2m9s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m9s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  2m9s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m9s  kubelet          Node ha-770465 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m9s  kubelet          Node ha-770465 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m9s  kubelet          Node ha-770465 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m5s  node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           103s  node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           68s   node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	
	
	Name:               ha-770465-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-770465-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-770465
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_44_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:44:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-770465-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:46:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:46:08 +0000   Mon, 16 Sep 2024 10:44:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:46:08 +0000   Mon, 16 Sep 2024 10:44:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:46:08 +0000   Mon, 16 Sep 2024 10:44:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:46:08 +0000   Mon, 16 Sep 2024 10:44:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-770465-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 46a53aa7bbbe47cc9363439ec77ff032
	  System UUID:                0ec75a9b-7a96-466a-872e-476404dc1e5d
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-klfw4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 etcd-ha-770465-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-kht59                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-ha-770465-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-ha-770465-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-4qgcs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-ha-770465-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-vip-ha-770465-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 108s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  111s (x8 over 111s)  kubelet          Node ha-770465-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s (x7 over 111s)  kubelet          Node ha-770465-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s (x7 over 111s)  kubelet          Node ha-770465-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           110s                 node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal  RegisteredNode           103s                 node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal  RegisteredNode           68s                  node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	
	
	Name:               ha-770465-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-770465-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-770465
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_45_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:45:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-770465-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:46:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:46:13 +0000   Mon, 16 Sep 2024 10:45:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:46:13 +0000   Mon, 16 Sep 2024 10:45:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:46:13 +0000   Mon, 16 Sep 2024 10:45:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:46:13 +0000   Mon, 16 Sep 2024 10:45:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-770465-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 d21ac7bc08cb49e1a337ea803b228e0a
	  System UUID:                e87efbe7-d110-423f-ad1f-3d6b898d752e
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dlndh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 etcd-ha-770465-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         75s
	  kube-system                 kindnet-66kfj                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      77s
	  kube-system                 kube-apiserver-ha-770465-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-controller-manager-ha-770465-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-qlspc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-ha-770465-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-vip-ha-770465-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 74s                kube-proxy       
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)  kubelet          Node ha-770465-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    77s (x7 over 77s)  kubelet          Node ha-770465-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)  kubelet          Node ha-770465-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           75s                node-controller  Node ha-770465-m03 event: Registered Node ha-770465-m03 in Controller
	  Normal  RegisteredNode           73s                node-controller  Node ha-770465-m03 event: Registered Node ha-770465-m03 in Controller
	  Normal  RegisteredNode           68s                node-controller  Node ha-770465-m03 event: Registered Node ha-770465-m03 in Controller
	
	
	Name:               ha-770465-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-770465-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-770465
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_46_20_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:46:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-770465-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:46:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:46:20 +0000   Mon, 16 Sep 2024 10:46:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:46:20 +0000   Mon, 16 Sep 2024 10:46:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:46:20 +0000   Mon, 16 Sep 2024 10:46:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:46:20 +0000   Mon, 16 Sep 2024 10:46:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-770465-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 77bf457e10f94471aaa4387428b4961a
	  System UUID:                82d9765a-9474-4a2c-ae78-19bbbf1ab150
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bflwn       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7s
	  kube-system                 kube-proxy-78l2l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 6s               kube-proxy       
	  Normal  NodeHasSufficientMemory  9s (x2 over 9s)  kubelet          Node ha-770465-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x2 over 9s)  kubelet          Node ha-770465-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x2 over 9s)  kubelet          Node ha-770465-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s               node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal  RegisteredNode           8s               node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal  NodeReady                8s               kubelet          Node ha-770465-m04 status is now: NodeReady
	  Normal  RegisteredNode           5s               node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
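
	The four node dumps above are `kubectl describe nodes` output; the same Node objects (names, PodCIDRs, kubelet versions) can be read programmatically. A minimal client-go sketch, assuming a kubeconfig for this cluster at the default path:

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config (clientcmd.RecommendedHomeFile).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}

		// Same objects the describe output above is rendered from.
		nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s  podCIDRs=%v  kubelet=%s\n",
				n.Name, n.Spec.PodCIDRs, n.Status.NodeInfo.KubeletVersion)
		}
	}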
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [8b022d1d912058b6aec308a7f6777b3f8fcb7b0b8c051be8ff2b7c53dc37450c] <==
	{"level":"warn","ts":"2024-09-16T10:45:03.483011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.588652ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128031932284351937 > lease_revoke:<id:70cc91fa6ed734ff>","response":"size:29"}
	{"level":"warn","ts":"2024-09-16T10:45:11.870392Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:36166","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-09-16T10:45:11.878142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 17455162631699035958) learners=(1750190452317010141)"}
	{"level":"info","ts":"2024-09-16T10:45:11.878280Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"1849ecf187a2b8dd","added-peer-peer-urls":["https://192.168.49.4:2380"]}
	{"level":"info","ts":"2024-09-16T10:45:11.878315Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:45:11.878352Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:45:11.879512Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:45:11.879555Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd","remote-peer-urls":["https://192.168.49.4:2380"]}
	{"level":"info","ts":"2024-09-16T10:45:11.879582Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddLearnerNode","raft-conf-change-node-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:45:11.879904Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:45:11.880142Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:45:11.880179Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:45:11.880392Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"warn","ts":"2024-09-16T10:45:11.901608Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:36182","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-09-16T10:45:12.925524Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"1849ecf187a2b8dd","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-16T10:45:12.925698Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:45:12.925764Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"warn","ts":"2024-09-16T10:45:12.935157Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"1849ecf187a2b8dd","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-09-16T10:45:12.946871Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"1849ecf187a2b8dd","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-16T10:45:12.946916Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:45:12.954316Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:45:13.020302Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:45:13.424152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(1750190452317010141 12593026477526642892 17455162631699035958)"}
	{"level":"info","ts":"2024-09-16T10:45:13.424314Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:45:13.424388Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"1849ecf187a2b8dd"}
	
	
	==> kernel <==
	 10:46:28 up 28 min,  0 users,  load average: 0.91, 0.91, 0.64
	Linux ha-770465 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b31c2d77265e3a87517539fba911addc87dcfa7cd4932f3fa5cfa6b294afd8aa] <==
	I0916 10:45:55.754992       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:45:55.754997       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:45:55.755159       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:45:55.755176       1 main.go:322] Node ha-770465-m03 has CIDR [10.244.2.0/24] 
	I0916 10:46:05.752900       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:46:05.752955       1 main.go:299] handling current node
	I0916 10:46:05.752971       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:46:05.752976       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:46:05.753130       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:46:05.753144       1 main.go:322] Node ha-770465-m03 has CIDR [10.244.2.0/24] 
	I0916 10:46:15.752928       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:46:15.753005       1 main.go:299] handling current node
	I0916 10:46:15.753022       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:46:15.753029       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:46:15.753177       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:46:15.753191       1 main.go:322] Node ha-770465-m03 has CIDR [10.244.2.0/24] 
	I0916 10:46:25.752955       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:46:25.753005       1 main.go:299] handling current node
	I0916 10:46:25.753026       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:46:25.753034       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:46:25.753237       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:46:25.753252       1 main.go:322] Node ha-770465-m03 has CIDR [10.244.2.0/24] 
	I0916 10:46:25.753313       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:46:25.753324       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:46:25.753375       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
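
	The final line above shows kindnet installing a route for the new node's pod CIDR via that node's IP. The same operation with the github.com/vishvananda/netlink library (a sketch under the assumption that the library is vendored and the process has NET_ADMIN / root on the node):

	package main

	import (
		"log"
		"net"

		"github.com/vishvananda/netlink"
	)

	func main() {
		// Route pod traffic for ha-770465-m04's CIDR via its node IP,
		// as in: Adding route {Dst: 10.244.3.0/24 Gw: 192.168.49.5}.
		_, dst, err := net.ParseCIDR("10.244.3.0/24")
		if err != nil {
			log.Fatal(err)
		}
		route := &netlink.Route{
			Dst: dst,
			Gw:  net.ParseIP("192.168.49.5"),
		}
		if err := netlink.RouteAdd(route); err != nil {
			log.Fatal(err)
		}
	}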
	
	
	==> kube-apiserver [535bd4e938e3aeb6ecfbd02d81bf8fc060b9bb649a67b3f28d6b43d2c199e4ba] <==
	W0916 10:44:17.975803       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0916 10:44:17.977097       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:44:17.981779       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:44:18.429026       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:44:19.732485       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:44:19.743980       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:44:19.753201       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:44:24.080680       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 10:44:24.180774       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0916 10:46:04.259087       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54782: use of closed network connection
	E0916 10:46:04.412401       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54796: use of closed network connection
	E0916 10:46:04.568563       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54808: use of closed network connection
	E0916 10:46:04.740761       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54822: use of closed network connection
	E0916 10:46:04.905896       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54836: use of closed network connection
	E0916 10:46:05.060982       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54858: use of closed network connection
	E0916 10:46:05.228361       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54878: use of closed network connection
	E0916 10:46:05.380406       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54894: use of closed network connection
	E0916 10:46:05.547512       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54904: use of closed network connection
	E0916 10:46:05.822889       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54930: use of closed network connection
	E0916 10:46:05.978196       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54942: use of closed network connection
	E0916 10:46:06.125590       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54966: use of closed network connection
	E0916 10:46:06.271367       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54990: use of closed network connection
	E0916 10:46:06.417557       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:55004: use of closed network connection
	E0916 10:46:06.561545       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:55012: use of closed network connection
	
	
	==> kube-controller-manager [780f65ad6abab29bdde89c430c29bcd890f45aa17487c1bfd744c963df712f3d] <==
	I0916 10:45:36.819054       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="66.964µs"
	I0916 10:45:38.289841       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.154709ms"
	I0916 10:45:38.289939       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.595µs"
	I0916 10:45:38.814556       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.74076ms"
	I0916 10:45:38.814650       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.1µs"
	I0916 10:45:41.523632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.768µs"
	I0916 10:45:42.638341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m03"
	I0916 10:45:52.041669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465"
	I0916 10:46:03.827357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.858169ms"
	I0916 10:46:03.827464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.444µs"
	I0916 10:46:08.674888       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m02"
	I0916 10:46:13.096265       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m03"
	E0916 10:46:19.274244       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8wfr5 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8wfr5\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 10:46:19.399508       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-770465-m04\" does not exist"
	I0916 10:46:19.440331       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-770465-m04" podCIDRs=["10.244.3.0/24"]
	I0916 10:46:19.440377       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:19.440419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:19.874388       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:20.121826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:20.188037       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:20.657563       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-770465-m04"
	I0916 10:46:20.657871       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:20.671968       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:23.179728       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-770465-m04"
	I0916 10:46:23.180106       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	
	
	==> kube-proxy [15571e99ab074e3b158931e74a462086cc1bc9b84b6b39d511e64dbebca8dac3] <==
	I0916 10:44:25.058145       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:44:25.228881       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:44:25.228958       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:44:25.251975       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:44:25.252031       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:44:25.255017       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:44:25.255521       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:44:25.255550       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:44:25.256997       1 config.go:199] "Starting service config controller"
	I0916 10:44:25.257209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:44:25.257043       1 config.go:328] "Starting node config controller"
	I0916 10:44:25.257490       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:44:25.257086       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:44:25.257634       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:44:25.357729       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:44:25.357756       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:44:25.360110       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [fc07020cd48414dd7978cd32b7fffa3b3bd5d7f72b79b3aa49e4082dffedf8e3] <==
	W0916 10:44:17.534480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:44:17.534534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:44:17.605947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:44:17.605995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:44:17.659989       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:44:17.660035       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:44:17.672435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:44:17.672475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:44:20.730788       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:45:11.758548       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sb96x\": pod kube-proxy-sb96x is already assigned to node \"ha-770465-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sb96x" node="ha-770465-m03"
	E0916 10:45:11.758691       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sb96x\": pod kube-proxy-sb96x is already assigned to node \"ha-770465-m03\"" pod="kube-system/kube-proxy-sb96x"
	E0916 10:45:35.573275       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-klfw4\": pod busybox-7dff88458-klfw4 is already assigned to node \"ha-770465-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-klfw4" node="ha-770465-m02"
	E0916 10:45:35.573342       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1f91390f-bdef-4a3b-a8bc-e717d87dee4b(default/busybox-7dff88458-klfw4) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-klfw4"
	E0916 10:45:35.573361       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-klfw4\": pod busybox-7dff88458-klfw4 is already assigned to node \"ha-770465-m02\"" pod="default/busybox-7dff88458-klfw4"
	I0916 10:45:35.573394       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-klfw4" node="ha-770465-m02"
	E0916 10:46:21.389563       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tw9dw\": pod kindnet-tw9dw is already assigned to node \"ha-770465-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tw9dw" node="ha-770465-m04"
	E0916 10:46:21.389661       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 211d67ad-c4dc-498b-9ce1-aa4f469a1a54(kube-system/kindnet-tw9dw) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tw9dw"
	E0916 10:46:21.389685       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tw9dw\": pod kindnet-tw9dw is already assigned to node \"ha-770465-m04\"" pod="kube-system/kindnet-tw9dw"
	I0916 10:46:21.389710       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tw9dw" node="ha-770465-m04"
	E0916 10:46:21.390586       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bflwn\": pod kindnet-bflwn is already assigned to node \"ha-770465-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bflwn" node="ha-770465-m04"
	E0916 10:46:21.390625       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 59d75712-5683-4b1c-a6ef-2a669d75da7a(kube-system/kindnet-bflwn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-bflwn"
	E0916 10:46:21.390641       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bflwn\": pod kindnet-bflwn is already assigned to node \"ha-770465-m04\"" pod="kube-system/kindnet-bflwn"
	I0916 10:46:21.390663       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-bflwn" node="ha-770465-m04"
	E0916 10:46:21.422131       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vkdfk\": pod kindnet-vkdfk is already assigned to node \"ha-770465-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vkdfk" node="ha-770465-m04"
	E0916 10:46:21.422653       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vkdfk\": pod kindnet-vkdfk is already assigned to node \"ha-770465-m04\"" pod="kube-system/kindnet-vkdfk"
	
	
	==> kubelet <==
	Sep 16 10:44:24 ha-770465 kubelet[1704]: E0916 10:44:24.825617    1704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df3b02d9cb86350ba85f30c87a530adb921da52a208fc5e705a2f6d395882486\": failed to find network info for sandbox \"df3b02d9cb86350ba85f30c87a530adb921da52a208fc5e705a2f6d395882486\"" pod="kube-system/coredns-7c65d6cfc9-9lw9q"
	Sep 16 10:44:24 ha-770465 kubelet[1704]: E0916 10:44:24.825650    1704 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df3b02d9cb86350ba85f30c87a530adb921da52a208fc5e705a2f6d395882486\": failed to find network info for sandbox \"df3b02d9cb86350ba85f30c87a530adb921da52a208fc5e705a2f6d395882486\"" pod="kube-system/coredns-7c65d6cfc9-9lw9q"
	Sep 16 10:44:24 ha-770465 kubelet[1704]: E0916 10:44:24.825717    1704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9lw9q_kube-system(4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9lw9q_kube-system(4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df3b02d9cb86350ba85f30c87a530adb921da52a208fc5e705a2f6d395882486\\\": failed to find network info for sandbox \\\"df3b02d9cb86350ba85f30c87a530adb921da52a208fc5e705a2f6d395882486\\\"\"" pod="kube-system/coredns-7c65d6cfc9-9lw9q" podUID="4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8"
	Sep 16 10:44:25 ha-770465 kubelet[1704]: I0916 10:44:25.333152    1704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cf470925-4874-4744-8015-700e93ab924f-tmp\") pod \"storage-provisioner\" (UID: \"cf470925-4874-4744-8015-700e93ab924f\") " pod="kube-system/storage-provisioner"
	Sep 16 10:44:25 ha-770465 kubelet[1704]: I0916 10:44:25.333218    1704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-454b8\" (UniqueName: \"kubernetes.io/projected/cf470925-4874-4744-8015-700e93ab924f-kube-api-access-454b8\") pod \"storage-provisioner\" (UID: \"cf470925-4874-4744-8015-700e93ab924f\") " pod="kube-system/storage-provisioner"
	Sep 16 10:44:25 ha-770465 kubelet[1704]: I0916 10:44:25.663438    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gd2mt" podStartSLOduration=1.663397327 podStartE2EDuration="1.663397327s" podCreationTimestamp="2024-09-16 10:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:44:25.663397383 +0000 UTC m=+6.164732829" watchObservedRunningTime="2024-09-16 10:44:25.663397327 +0000 UTC m=+6.164732774"
	Sep 16 10:44:25 ha-770465 kubelet[1704]: I0916 10:44:25.690523    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-grjh8" podStartSLOduration=1.690501142 podStartE2EDuration="1.690501142s" podCreationTimestamp="2024-09-16 10:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:44:25.690388847 +0000 UTC m=+6.191724292" watchObservedRunningTime="2024-09-16 10:44:25.690501142 +0000 UTC m=+6.191836589"
	Sep 16 10:44:26 ha-770465 kubelet[1704]: I0916 10:44:26.665088    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.665066696 podStartE2EDuration="1.665066696s" podCreationTimestamp="2024-09-16 10:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:44:26.664635761 +0000 UTC m=+7.165971208" watchObservedRunningTime="2024-09-16 10:44:26.665066696 +0000 UTC m=+7.166402143"
	Sep 16 10:44:29 ha-770465 kubelet[1704]: I0916 10:44:29.936500    1704 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:44:29 ha-770465 kubelet[1704]: I0916 10:44:29.937326    1704 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:44:35 ha-770465 kubelet[1704]: E0916 10:44:35.674383    1704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\": failed to find network info for sandbox \"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\""
	Sep 16 10:44:35 ha-770465 kubelet[1704]: E0916 10:44:35.674448    1704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\": failed to find network info for sandbox \"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\"" pod="kube-system/coredns-7c65d6cfc9-sbs22"
	Sep 16 10:44:35 ha-770465 kubelet[1704]: E0916 10:44:35.674473    1704 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\": failed to find network info for sandbox \"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\"" pod="kube-system/coredns-7c65d6cfc9-sbs22"
	Sep 16 10:44:35 ha-770465 kubelet[1704]: E0916 10:44:35.674519    1704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-sbs22_kube-system(89925692-76b4-481f-bac7-16f06bea792a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-sbs22_kube-system(89925692-76b4-481f-bac7-16f06bea792a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\\\": failed to find network info for sandbox \\\"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\\\"\"" pod="kube-system/coredns-7c65d6cfc9-sbs22" podUID="89925692-76b4-481f-bac7-16f06bea792a"
	Sep 16 10:44:40 ha-770465 kubelet[1704]: I0916 10:44:40.724936    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9lw9q" podStartSLOduration=16.724911472 podStartE2EDuration="16.724911472s" podCreationTimestamp="2024-09-16 10:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:44:40.724155516 +0000 UTC m=+21.225490962" watchObservedRunningTime="2024-09-16 10:44:40.724911472 +0000 UTC m=+21.226246917"
	Sep 16 10:44:50 ha-770465 kubelet[1704]: I0916 10:44:50.714495    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sbs22" podStartSLOduration=26.714472953 podStartE2EDuration="26.714472953s" podCreationTimestamp="2024-09-16 10:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:44:50.713735376 +0000 UTC m=+31.215070822" watchObservedRunningTime="2024-09-16 10:44:50.714472953 +0000 UTC m=+31.215808398"
	Sep 16 10:45:35 ha-770465 kubelet[1704]: E0916 10:45:35.668078    1704 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-jwdp5], unattached volumes=[], failed to process volumes=[kube-api-access-jwdp5]: context canceled" pod="default/busybox-7dff88458-lrb95" podUID="b2be2502-120d-4678-8b3d-8a6be089d9f1"
	Sep 16 10:45:35 ha-770465 kubelet[1704]: I0916 10:45:35.820115    1704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6kx6\" (UniqueName: \"kubernetes.io/projected/d5a45010-f551-4f0c-bb3e-d70e2eed9df0-kube-api-access-s6kx6\") pod \"busybox-7dff88458-845rc\" (UID: \"d5a45010-f551-4f0c-bb3e-d70e2eed9df0\") " pod="default/busybox-7dff88458-845rc"
	Sep 16 10:45:35 ha-770465 kubelet[1704]: I0916 10:45:35.820393    1704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwdp5\" (UniqueName: \"kubernetes.io/projected/b2be2502-120d-4678-8b3d-8a6be089d9f1-kube-api-access-jwdp5\") pod \"busybox-7dff88458-lrb95\" (UID: \"b2be2502-120d-4678-8b3d-8a6be089d9f1\") " pod="default/busybox-7dff88458-lrb95"
	Sep 16 10:45:36 ha-770465 kubelet[1704]: I0916 10:45:36.022087    1704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwdp5\" (UniqueName: \"kubernetes.io/projected/b2be2502-120d-4678-8b3d-8a6be089d9f1-kube-api-access-jwdp5\") pod \"b2be2502-120d-4678-8b3d-8a6be089d9f1\" (UID: \"b2be2502-120d-4678-8b3d-8a6be089d9f1\") "
	Sep 16 10:45:36 ha-770465 kubelet[1704]: I0916 10:45:36.023981    1704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2be2502-120d-4678-8b3d-8a6be089d9f1-kube-api-access-jwdp5" (OuterVolumeSpecName: "kube-api-access-jwdp5") pod "b2be2502-120d-4678-8b3d-8a6be089d9f1" (UID: "b2be2502-120d-4678-8b3d-8a6be089d9f1"). InnerVolumeSpecName "kube-api-access-jwdp5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:45:36 ha-770465 kubelet[1704]: I0916 10:45:36.122817    1704 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jwdp5\" (UniqueName: \"kubernetes.io/projected/b2be2502-120d-4678-8b3d-8a6be089d9f1-kube-api-access-jwdp5\") on node \"ha-770465\" DevicePath \"\""
	Sep 16 10:45:37 ha-770465 kubelet[1704]: I0916 10:45:37.626360    1704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2be2502-120d-4678-8b3d-8a6be089d9f1" path="/var/lib/kubelet/pods/b2be2502-120d-4678-8b3d-8a6be089d9f1/volumes"
	Sep 16 10:45:38 ha-770465 kubelet[1704]: I0916 10:45:38.806899    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-845rc" podStartSLOduration=1.878632565 podStartE2EDuration="3.806873189s" podCreationTimestamp="2024-09-16 10:45:35 +0000 UTC" firstStartedPulling="2024-09-16 10:45:36.297526121 +0000 UTC m=+76.798861550" lastFinishedPulling="2024-09-16 10:45:38.225766737 +0000 UTC m=+78.727102174" observedRunningTime="2024-09-16 10:45:38.80675608 +0000 UTC m=+79.308091526" watchObservedRunningTime="2024-09-16 10:45:38.806873189 +0000 UTC m=+79.308208635"
	Sep 16 10:46:05 ha-770465 kubelet[1704]: E0916 10:46:05.822846    1704 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.49.2:51478->192.168.49.2:10010: write tcp 192.168.49.2:51478->192.168.49.2:10010: write: broken pipe
	

-- /stdout --
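Note on the logs above: the kube-scheduler "already assigned to node" Bind failures are generally benign in a multi-control-plane cluster; another scheduler instance (or an earlier attempt) had already bound the pod, and the following "Pod has been assigned to node. Abort adding it back to queue." lines show the retry being dropped. Likewise, the kubelet "failed to find network info for sandbox" errors are the usual transient gap before the CNI config is written; the later podStartSLOduration entries for both coredns pods confirm they eventually started. A hedged spot-check for that gap, assuming the profile is still running (/etc/cni/net.d is the standard CNI config directory):

out/minikube-linux-amd64 -p ha-770465 ssh -- ls /etc/cni/net.d/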
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-770465 -n ha-770465
helpers_test.go:261: (dbg) Run:  kubectl --context ha-770465 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context ha-770465 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (559.391µs)
helpers_test.go:263: kubectl --context ha-770465 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiControlPlane/serial/NodeLabels (2.04s)
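Note: "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel rejected the kubectl binary itself (wrong CPU architecture, or a truncated/empty file), not that the arguments were bad; the cluster is otherwise healthy. This one broken binary accounts for the recurring pattern in this report where the cluster comes up but every kubectl-based assertion fails. A hedged diagnosis on the amd64 host, using standard Linux tools:

file /usr/local/bin/kubectl   # expect: ELF 64-bit LSB executable, x86-64
uname -m                      # expect: x86_64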

TestMultiControlPlane/serial/RestartSecondaryNode (17.38s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 node start m02 -v=7 --alsologtostderr
E0916 10:47:08.257178   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:47:08.263604   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:47:08.275008   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:47:08.296362   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:47:08.338021   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:47:08.419497   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:47:08.581184   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:47:08.902786   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:47:09.544310   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:47:10.826340   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-770465 node start m02 -v=7 --alsologtostderr: (14.453952719s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 status -v=7 --alsologtostderr
E0916 10:47:13.388659   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:448: (dbg) Run:  kubectl get nodes
ha_test.go:448: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (564.118µs)
ha_test.go:450: failed to kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
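Note: the cert_rotation.go:171 errors above come from client-go's certificate-rotation watcher trying to re-read a client certificate for the functional-016570 profile, which an earlier test has already torn down; they are noise from a stale kubeconfig entry, separate from the actual failure at ha_test.go:450 (the same kubectl exec format error as before). A hedged way to confirm the stale entry once a working kubectl is in place (the kubeconfig path is taken from the log above):

kubectl config get-contexts --kubeconfig=/home/jenkins/minikube-integration/19651-3687/kubeconfig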
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-770465
helpers_test.go:235: (dbg) docker inspect ha-770465:

-- stdout --
	[
	    {
	        "Id": "c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf",
	        "Created": "2024-09-16T10:44:02.535590959Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 67096,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:44:02.647879467Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/hosts",
	        "LogPath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf-json.log",
	        "Name": "/ha-770465",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-770465:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-770465",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-770465",
	                "Source": "/var/lib/docker/volumes/ha-770465/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-770465",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-770465",
	                "name.minikube.sigs.k8s.io": "ha-770465",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44b97868fe538185a93dd3ffee226f783c7a36b13e0f3eef97b478a02c3be30d",
	            "SandboxKey": "/var/run/docker/netns/44b97868fe53",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32788"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32789"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32792"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32790"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32791"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-770465": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c95c64bb41bdebd7017cdb4d495e3e500618752ab547ea09aa27d1cdaf23b64d",
	                    "EndpointID": "7cdb8c3026b37e52aeed2849f3891bcd317a8955c9a3c33cd2c85ef8edba5112",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-770465",
	                        "c7d04b23d2ab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
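The inspect output above confirms the container side is healthy: State.Status is "running", the API server port 8443/tcp is published on 127.0.0.1:32791, and the node holds its static IP 192.168.49.2 on the ha-770465 network. An equivalent targeted query instead of reading the full JSON (docker inspect --format takes a Go template; the index call is needed because the network name contains a hyphen):

docker inspect ha-770465 --format '{{.State.Status}} {{(index .NetworkSettings.Networks "ha-770465").IPAddress}}'
# expected: running 192.168.49.2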
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-770465 -n ha-770465
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-770465 logs -n 25: (1.236618969s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m03:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465:/home/docker/cp-test_ha-770465-m03_ha-770465.txt                       |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n ha-770465 sudo cat                                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /home/docker/cp-test_ha-770465-m03_ha-770465.txt                                 |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m03:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m02:/home/docker/cp-test_ha-770465-m03_ha-770465-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n ha-770465-m02 sudo cat                                          | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /home/docker/cp-test_ha-770465-m03_ha-770465-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m03:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04:/home/docker/cp-test_ha-770465-m03_ha-770465-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n ha-770465-m04 sudo cat                                          | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /home/docker/cp-test_ha-770465-m03_ha-770465-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-770465 cp testdata/cp-test.txt                                                | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1340522930/001/cp-test_ha-770465-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465:/home/docker/cp-test_ha-770465-m04_ha-770465.txt                       |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n ha-770465 sudo cat                                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /home/docker/cp-test_ha-770465-m04_ha-770465.txt                                 |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m02:/home/docker/cp-test_ha-770465-m04_ha-770465-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n ha-770465-m02 sudo cat                                          | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /home/docker/cp-test_ha-770465-m04_ha-770465-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m03:/home/docker/cp-test_ha-770465-m04_ha-770465-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n ha-770465-m03 sudo cat                                          | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /home/docker/cp-test_ha-770465-m04_ha-770465-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-770465 node stop m02 -v=7                                                     | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-770465 node start m02 -v=7                                                    | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:47 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:43:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:43:57.194814   66415 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:43:57.195071   66415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:43:57.195080   66415 out.go:358] Setting ErrFile to fd 2...
	I0916 10:43:57.195084   66415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:43:57.195271   66415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:43:57.195892   66415 out.go:352] Setting JSON to false
	I0916 10:43:57.196843   66415 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1581,"bootTime":1726481856,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:43:57.196943   66415 start.go:139] virtualization: kvm guest
	I0916 10:43:57.199443   66415 out.go:177] * [ha-770465] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:43:57.201260   66415 notify.go:220] Checking for updates...
	I0916 10:43:57.201316   66415 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:43:57.203072   66415 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:43:57.204887   66415 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:43:57.206727   66415 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:43:57.208588   66415 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:43:57.210353   66415 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:43:57.212180   66415 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:43:57.235492   66415 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:43:57.235632   66415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:43:57.285551   66415 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:43:57.276396234 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:43:57.285662   66415 docker.go:318] overlay module found
	I0916 10:43:57.287818   66415 out.go:177] * Using the docker driver based on user configuration
	I0916 10:43:57.289265   66415 start.go:297] selected driver: docker
	I0916 10:43:57.289278   66415 start.go:901] validating driver "docker" against <nil>
	I0916 10:43:57.289304   66415 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:43:57.290089   66415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:43:57.337613   66415 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:43:57.328917373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:43:57.337780   66415 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:43:57.338033   66415 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:43:57.339771   66415 out.go:177] * Using Docker driver with root privileges
	I0916 10:43:57.341286   66415 cni.go:84] Creating CNI manager for ""
	I0916 10:43:57.341356   66415 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 10:43:57.341369   66415 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:43:57.341446   66415 start.go:340] cluster config:
	{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:43:57.342936   66415 out.go:177] * Starting "ha-770465" primary control-plane node in "ha-770465" cluster
	I0916 10:43:57.344192   66415 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:43:57.345502   66415 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:43:57.346627   66415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:43:57.346662   66415 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:43:57.346672   66415 cache.go:56] Caching tarball of preloaded images
	I0916 10:43:57.346727   66415 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:43:57.346745   66415 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:43:57.346753   66415 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:43:57.347073   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:43:57.347098   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json: {Name:mkb67ba9c685f6e37a3398a22655544c40d6e0f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 10:43:57.366525   66415 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:43:57.366547   66415 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:43:57.366647   66415 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:43:57.366661   66415 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:43:57.366666   66415 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:43:57.366673   66415 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:43:57.366680   66415 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:43:57.367964   66415 image.go:273] response: 
	I0916 10:43:57.421502   66415 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:43:57.421548   66415 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:43:57.421585   66415 start.go:360] acquireMachinesLock for ha-770465: {Name:mk79463d2cf034afd16e2c9f41174a568f4314aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:43:57.421697   66415 start.go:364] duration metric: took 92.559µs to acquireMachinesLock for "ha-770465"
	I0916 10:43:57.421735   66415 start.go:93] Provisioning new machine with config: &{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:43:57.421827   66415 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:43:57.423956   66415 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:43:57.424303   66415 start.go:159] libmachine.API.Create for "ha-770465" (driver="docker")
	I0916 10:43:57.424342   66415 client.go:168] LocalClient.Create starting
	I0916 10:43:57.424443   66415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:43:57.424488   66415 main.go:141] libmachine: Decoding PEM data...
	I0916 10:43:57.424510   66415 main.go:141] libmachine: Parsing certificate...
	I0916 10:43:57.424584   66415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:43:57.424610   66415 main.go:141] libmachine: Decoding PEM data...
	I0916 10:43:57.424625   66415 main.go:141] libmachine: Parsing certificate...
	I0916 10:43:57.425030   66415 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:43:57.441724   66415 cli_runner.go:211] docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:43:57.441785   66415 network_create.go:284] running [docker network inspect ha-770465] to gather additional debugging logs...
	I0916 10:43:57.441802   66415 cli_runner.go:164] Run: docker network inspect ha-770465
	W0916 10:43:57.457787   66415 cli_runner.go:211] docker network inspect ha-770465 returned with exit code 1
	I0916 10:43:57.457820   66415 network_create.go:287] error running [docker network inspect ha-770465]: docker network inspect ha-770465: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-770465 not found
	I0916 10:43:57.457832   66415 network_create.go:289] output of [docker network inspect ha-770465]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-770465 not found
	
	** /stderr **
	I0916 10:43:57.457910   66415 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:43:57.475572   66415 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000a22d80}
	I0916 10:43:57.475632   66415 network_create.go:124] attempt to create docker network ha-770465 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0916 10:43:57.475687   66415 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-770465 ha-770465
	I0916 10:43:57.536989   66415 network_create.go:108] docker network ha-770465 192.168.49.0/24 created
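The free-subnet probe above walked the private ranges and settled on 192.168.49.0/24; the network is then created with an explicit subnet, gateway, MTU, and minikube labels. A standalone Go sketch of the same docker invocation (standard library only; name and CIDR are copied from the log, the rest is illustrative, not minikube's network_create.go):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same flags as the "docker network create" in the log above; the -o
        // options enable IP masquerade and inter-container connectivity.
        create := exec.Command("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.49.0/24", "--gateway=192.168.49.1",
            "-o", "--ip-masq", "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=ha-770465",
            "ha-770465")
        if out, err := create.CombinedOutput(); err != nil {
            fmt.Printf("create failed: %v\n%s", err, out)
            return
        }
        // Read the subnet back with the same Go template the inspect calls use.
        out, err := exec.Command("docker", "network", "inspect", "ha-770465",
            "--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Printf("subnet: %s", out) // expect 192.168.49.0/24
    }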
	I0916 10:43:57.537020   66415 kic.go:121] calculated static IP "192.168.49.2" for the "ha-770465" container
	I0916 10:43:57.537082   66415 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:43:57.553026   66415 cli_runner.go:164] Run: docker volume create ha-770465 --label name.minikube.sigs.k8s.io=ha-770465 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:43:57.570659   66415 oci.go:103] Successfully created a docker volume ha-770465
	I0916 10:43:57.570756   66415 cli_runner.go:164] Run: docker run --rm --name ha-770465-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-770465 --entrypoint /usr/bin/test -v ha-770465:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:43:58.090213   66415 oci.go:107] Successfully prepared a docker volume ha-770465
	I0916 10:43:58.090264   66415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:43:58.090286   66415 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:43:58.090352   66415 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-770465:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:44:02.470698   66415 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-770465:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.38028137s)
	I0916 10:44:02.470729   66415 kic.go:203] duration metric: took 4.3804387s to extract preloaded images to volume ...
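The preload step avoids pulling images at boot: the lz4-compressed tarball of container images is bind-mounted read-only into a throwaway container alongside the ha-770465 volume and untarred into /extractDir, so /var inside the node container starts pre-populated. A hedged sketch of that pattern (the tarball path is a placeholder, not the path from this run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Untar a host tarball straight into a named volume via a one-shot
        // container, mirroring the "docker run --entrypoint /usr/bin/tar" above.
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // placeholder path
            "-v", "ha-770465:/extractDir",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("extract failed: %v\n%s", err, out)
        }
    }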
	W0916 10:44:02.470887   66415 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:44:02.471006   66415 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:44:02.519215   66415 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-770465 --name ha-770465 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-770465 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-770465 --network ha-770465 --ip 192.168.49.2 --volume ha-770465:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:44:02.807062   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Running}}
	I0916 10:44:02.824971   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:44:02.843878   66415 cli_runner.go:164] Run: docker exec ha-770465 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:44:02.893080   66415 oci.go:144] the created container "ha-770465" has a running status.
	I0916 10:44:02.893111   66415 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa...
	I0916 10:44:03.031285   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:44:03.031333   66415 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:44:03.057161   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:44:03.074631   66415 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:44:03.074653   66415 kic_runner.go:114] Args: [docker exec --privileged ha-770465 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:44:03.118992   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:44:03.139540   66415 machine.go:93] provisionDockerMachine start ...
	I0916 10:44:03.139648   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:03.165705   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:03.165984   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 10:44:03.165999   66415 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:44:03.166893   66415 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38230->127.0.0.1:32788: read: connection reset by peer
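This reset is expected: the first dial races sshd coming up inside the just-started container, and the provisioner simply retries until the forwarded port (127.0.0.1:32788 here) answers, as the successful command below shows roughly three seconds later. A minimal wait loop in the same spirit (a hypothetical helper, not minikube's sshutil):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForTCP polls addr until a TCP connect succeeds or the deadline passes.
    func waitForTCP(addr string, deadline time.Duration) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond) // refused/reset: sshd not up yet, retry
        }
        return fmt.Errorf("%s not reachable within %s", addr, deadline)
    }

    func main() {
        if err := waitForTCP("127.0.0.1:32788", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }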
	I0916 10:44:06.299158   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465
	
	I0916 10:44:06.299186   66415 ubuntu.go:169] provisioning hostname "ha-770465"
	I0916 10:44:06.299240   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:06.316285   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:06.316491   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 10:44:06.316513   66415 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-770465 && echo "ha-770465" | sudo tee /etc/hostname
	I0916 10:44:06.458736   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465
	
	I0916 10:44:06.458818   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:06.475716   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:06.475931   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0916 10:44:06.475948   66415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-770465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-770465/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-770465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:44:06.611976   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:44:06.612006   66415 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:44:06.612041   66415 ubuntu.go:177] setting up certificates
	I0916 10:44:06.612055   66415 provision.go:84] configureAuth start
	I0916 10:44:06.612119   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465
	I0916 10:44:06.629965   66415 provision.go:143] copyHostCerts
	I0916 10:44:06.630000   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:44:06.630031   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:44:06.630040   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:44:06.630104   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:44:06.630182   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:44:06.630200   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:44:06.630206   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:44:06.630229   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:44:06.630271   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:44:06.630289   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:44:06.630292   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:44:06.630312   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:44:06.630364   66415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.ha-770465 san=[127.0.0.1 192.168.49.2 ha-770465 localhost minikube]
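configureAuth issues a server certificate whose SANs cover every name a client might dial: the loopback address, the container IP, the hostname, localhost, and minikube. A standard-library sketch of minting such a SAN certificate (self-signed here for brevity; minikube actually signs with the ca.pem/ca-key.pem read earlier):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-770465"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SAN list from the log: [127.0.0.1 192.168.49.2 ha-770465 localhost minikube]
            DNSNames:    []string{"ha-770465", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }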
	I0916 10:44:07.000349   66415 provision.go:177] copyRemoteCerts
	I0916 10:44:07.000421   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:44:07.000454   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:07.016954   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:07.112204   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:44:07.112262   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:44:07.133619   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:44:07.133693   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 10:44:07.154592   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:44:07.154659   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:44:07.176443   66415 provision.go:87] duration metric: took 564.373064ms to configureAuth
	I0916 10:44:07.176469   66415 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:44:07.176636   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:44:07.176648   66415 machine.go:96] duration metric: took 4.037078889s to provisionDockerMachine
	I0916 10:44:07.176654   66415 client.go:171] duration metric: took 9.752302538s to LocalClient.Create
	I0916 10:44:07.176673   66415 start.go:167] duration metric: took 9.752388319s to libmachine.API.Create "ha-770465"
	I0916 10:44:07.176684   66415 start.go:293] postStartSetup for "ha-770465" (driver="docker")
	I0916 10:44:07.176697   66415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:44:07.176737   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:44:07.176783   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:07.193817   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:07.288395   66415 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:44:07.291547   66415 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:44:07.291585   66415 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:44:07.291593   66415 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:44:07.291600   66415 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:44:07.291610   66415 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:44:07.291661   66415 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:44:07.291787   66415 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:44:07.291800   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:44:07.291895   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:44:07.299886   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:44:07.321632   66415 start.go:296] duration metric: took 144.925404ms for postStartSetup
	I0916 10:44:07.321943   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465
	I0916 10:44:07.339383   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:44:07.339676   66415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:44:07.339718   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:07.356862   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:07.448433   66415 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:44:07.452607   66415 start.go:128] duration metric: took 10.030761291s to createHost
	I0916 10:44:07.452643   66415 start.go:83] releasing machines lock for "ha-770465", held for 10.030930716s
	I0916 10:44:07.452715   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465
	I0916 10:44:07.470126   66415 ssh_runner.go:195] Run: cat /version.json
	I0916 10:44:07.470159   66415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:44:07.470170   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:07.470211   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:07.487483   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:07.488822   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:07.579262   66415 ssh_runner.go:195] Run: systemctl --version
	I0916 10:44:07.583511   66415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:44:07.660764   66415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:44:07.684454   66415 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:44:07.684520   66415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:44:07.710536   66415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 10:44:07.710556   66415 start.go:495] detecting cgroup driver to use...
	I0916 10:44:07.710597   66415 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:44:07.710645   66415 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:44:07.721841   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:44:07.732369   66415 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:44:07.732417   66415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:44:07.744738   66415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:44:07.757954   66415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:44:07.830894   66415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:44:07.906423   66415 docker.go:233] disabling docker service ...
	I0916 10:44:07.906481   66415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:44:07.923643   66415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:44:07.933929   66415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:44:08.013748   66415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:44:08.085472   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:44:08.096207   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:44:08.111049   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:44:08.120105   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:44:08.129009   66415 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:44:08.129067   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:44:08.138760   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:44:08.147708   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:44:08.156735   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:44:08.165815   66415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:44:08.174496   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:44:08.183615   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:44:08.192589   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:44:08.201635   66415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:44:08.209166   66415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:44:08.216698   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:44:08.289661   66415 ssh_runner.go:195] Run: sudo systemctl restart containerd
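The sed pipeline above rewrites /etc/containerd/config.toml in place (sandbox image pinned to registry.k8s.io/pause:3.10, runc moved to io.containerd.runc.v2, SystemdCgroup forced to false because "cgroupfs" was detected on the host) and then reloads and restarts containerd. The SystemdCgroup rewrite, expressed as a Go regexp for illustration only:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true`
        // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
    }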
	I0916 10:44:08.399092   66415 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:44:08.399168   66415 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:44:08.402696   66415 start.go:563] Will wait 60s for crictl version
	I0916 10:44:08.402742   66415 ssh_runner.go:195] Run: which crictl
	I0916 10:44:08.405875   66415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:44:08.438290   66415 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:44:08.438384   66415 ssh_runner.go:195] Run: containerd --version
	I0916 10:44:08.459416   66415 ssh_runner.go:195] Run: containerd --version
	I0916 10:44:08.484059   66415 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:44:08.485880   66415 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:44:08.502600   66415 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:44:08.506126   66415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
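The bash one-liner keeps /etc/hosts idempotent: drop any existing host.minikube.internal line, append the fresh mapping, and copy the result back over the original. The same filter-and-append move in Go (illustrative; the real command runs remotely over SSH as root, and the file name here is a stand-in):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost rewrites an /etc/hosts-style file so exactly one line maps name.
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) { // drop any stale mapping
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := upsertHost("hosts.test", "192.168.49.1", "host.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }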
	I0916 10:44:08.516729   66415 kubeadm.go:883] updating cluster {Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:44:08.516867   66415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:44:08.516917   66415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:44:08.547534   66415 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:44:08.547554   66415 containerd.go:534] Images already preloaded, skipping extraction
	I0916 10:44:08.547603   66415 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:44:08.579979   66415 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:44:08.580000   66415 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:44:08.580007   66415 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0916 10:44:08.580095   66415 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-770465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:44:08.580150   66415 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:44:08.612440   66415 cni.go:84] Creating CNI manager for ""
	I0916 10:44:08.612464   66415 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:44:08.612476   66415 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:44:08.612503   66415 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-770465 NodeName:ha-770465 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:44:08.612664   66415 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-770465"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:44:08.612691   66415 kube-vip.go:115] generating kube-vip config ...
	I0916 10:44:08.612737   66415 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:44:08.623862   66415 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:44:08.623951   66415 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
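The generated static pod pins the VIP from the config (192.168.49.254) on eth0 and wires leader election through the plndr-cp-lock lease, so whichever control plane holds the lease answers on the shared address. Once the pod is running, reachability can be probed the obvious way (a verification sketch, not part of the test itself):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // The VIP fronts the apiserver port (8443) from the manifest above.
        conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 2*time.Second)
        if err != nil {
            fmt.Println("VIP not answering yet:", err)
            return
        }
        conn.Close()
        fmt.Println("kube-vip is holding 192.168.49.254")
    }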
	I0916 10:44:08.623996   66415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:44:08.631955   66415 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:44:08.632030   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 10:44:08.639833   66415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 10:44:08.655913   66415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:44:08.673288   66415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0916 10:44:08.690703   66415 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
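"scp memory" means the payload never touches the local disk: the rendered bytes (the kubelet drop-in, the systemd unit, kubeadm.yaml.new, the kube-vip manifest) are streamed over the SSH session into the destination path. A CLI-flavored sketch of the same move, assuming the key path and port from this run (minikube itself uses an SSH library rather than shelling out):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        manifest := []byte("apiVersion: v1\nkind: Pod\n") // rendered in memory
        cmd := exec.Command("ssh",
            "-i", "/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa",
            "-p", "32788", "docker@127.0.0.1",
            "sudo tee /etc/kubernetes/manifests/kube-vip.yaml >/dev/null")
        cmd.Stdin = bytes.NewReader(manifest)
        if out, err := cmd.CombinedOutput(); err != nil {
            fmt.Printf("push failed: %v\n%s", err, out)
        }
    }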
	I0916 10:44:08.707684   66415 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:44:08.710984   66415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:44:08.721537   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:44:08.797240   66415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:44:08.810193   66415 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465 for IP: 192.168.49.2
	I0916 10:44:08.810217   66415 certs.go:194] generating shared ca certs ...
	I0916 10:44:08.810235   66415 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:08.810405   66415 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:44:08.810474   66415 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:44:08.810489   66415 certs.go:256] generating profile certs ...
	I0916 10:44:08.810562   66415 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key
	I0916 10:44:08.810586   66415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt with IP's: []
	I0916 10:44:09.290023   66415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt ...
	I0916 10:44:09.290065   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt: {Name:mk3f167f76dda721d4d80ee048f18145ce2629ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:09.290248   66415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key ...
	I0916 10:44:09.290265   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key: {Name:mk6ced1c16707f60b003e2ae9bbcd7fda238e598 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:09.290343   66415 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.9688a104
	I0916 10:44:09.290357   66415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.9688a104 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0916 10:44:09.664203   66415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.9688a104 ...
	I0916 10:44:09.664242   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.9688a104: {Name:mkad1b6852c8388971568713edf6b18ce679ff85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:09.664435   66415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.9688a104 ...
	I0916 10:44:09.664455   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.9688a104: {Name:mk5087bee3d1e77d2ebdef457c71d782601e19c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:09.664555   66415 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.9688a104 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt
	I0916 10:44:09.664671   66415 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.9688a104 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key
	I0916 10:44:09.664757   66415 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key
	I0916 10:44:09.664778   66415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt with IP's: []
	I0916 10:44:09.828335   66415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt ...
	I0916 10:44:09.828371   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt: {Name:mk7f0ffcb83dc64ecaf281ed8f885cb7c5ec4cc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:09.828542   66415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key ...
	I0916 10:44:09.828554   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key: {Name:mk73142a656af1c1c1d3237c115a645da1705db4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:09.828625   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:44:09.828641   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:44:09.828654   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:44:09.828667   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:44:09.828680   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:44:09.828692   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:44:09.828704   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:44:09.828715   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:44:09.828764   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:44:09.828797   66415 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:44:09.828807   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:44:09.828830   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:44:09.828854   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:44:09.828874   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:44:09.828909   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:44:09.828938   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:44:09.828951   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:09.828963   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:44:09.829537   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:44:09.851991   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:44:09.874486   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:44:09.896556   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:44:09.917869   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:44:09.939801   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:44:09.962103   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:44:09.985070   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:44:10.007372   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:44:10.028974   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:44:10.050458   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:44:10.073083   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:44:10.089629   66415 ssh_runner.go:195] Run: openssl version
	I0916 10:44:10.094826   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:44:10.103639   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:44:10.106923   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:44:10.106980   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:44:10.113257   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:44:10.121955   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:44:10.130807   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:10.134190   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:10.134258   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:10.140652   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:44:10.149294   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:44:10.158265   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:44:10.161562   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:44:10.161623   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:44:10.167931   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
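Each CA dropped into /usr/share/ca-certificates is activated for OpenSSL by symlinking <subject-hash>.0 under /etc/ssl/certs, which is what the hash values above (3ec20f2e, b5213941, 51391683) are computed for. A compact sketch of the same loop (would need root; paths as in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pemPath := range []string{
            "/usr/share/ca-certificates/111892.pem",
            "/usr/share/ca-certificates/minikubeCA.pem",
            "/usr/share/ca-certificates/11189.pem",
        } {
            // openssl prints the subject hash used for CA lookup, e.g. b5213941.
            out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
            if err != nil {
                fmt.Println(err)
                continue
            }
            link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
            _ = os.Remove(link) // mirror "ln -fs": replace any existing link
            if err := os.Symlink(pemPath, link); err != nil {
                fmt.Println(err)
            }
        }
    }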
	I0916 10:44:10.176485   66415 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:44:10.179545   66415 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:44:10.179597   66415 kubeadm.go:392] StartCluster: {Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:44:10.179686   66415 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:44:10.179761   66415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:44:10.212206   66415 cri.go:89] found id: ""
	I0916 10:44:10.212258   66415 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:44:10.220696   66415 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:44:10.228773   66415 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:44:10.228835   66415 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:44:10.236821   66415 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:44:10.236839   66415 kubeadm.go:157] found existing configuration files:
	
	I0916 10:44:10.236876   66415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:44:10.244721   66415 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:44:10.244770   66415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:44:10.252531   66415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:44:10.260482   66415 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:44:10.260533   66415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:44:10.268539   66415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:44:10.276817   66415 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:44:10.276882   66415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:44:10.285223   66415 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:44:10.293714   66415 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:44:10.293780   66415 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
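The four blocks above are a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed so kubeadm regenerates it. The loop, condensed (an illustrative local equivalent; the real checks run over SSH):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            if err == nil && strings.Contains(string(data), endpoint) {
                continue // already points at the HA endpoint; keep it
            }
            if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
                fmt.Println(err)
            }
        }
    }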
	I0916 10:44:10.301921   66415 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:44:10.336843   66415 kubeadm.go:310] W0916 10:44:10.336221    1157 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:44:10.337371   66415 kubeadm.go:310] W0916 10:44:10.336810    1157 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:44:10.354303   66415 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:44:10.405538   66415 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:44:20.311517   66415 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:44:20.311605   66415 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:44:20.311693   66415 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:44:20.311810   66415 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:44:20.311882   66415 kubeadm.go:310] OS: Linux
	I0916 10:44:20.311940   66415 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:44:20.311981   66415 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:44:20.312046   66415 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:44:20.312118   66415 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:44:20.312193   66415 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:44:20.312273   66415 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:44:20.312334   66415 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:44:20.312377   66415 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:44:20.312417   66415 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:44:20.312481   66415 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:44:20.312563   66415 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:44:20.312673   66415 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:44:20.312768   66415 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:44:20.314390   66415 out.go:235]   - Generating certificates and keys ...
	I0916 10:44:20.314466   66415 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:44:20.314534   66415 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:44:20.314617   66415 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:44:20.314683   66415 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:44:20.314735   66415 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:44:20.314775   66415 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:44:20.314820   66415 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:44:20.314906   66415 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-770465 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:44:20.314953   66415 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:44:20.315060   66415 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-770465 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0916 10:44:20.315124   66415 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:44:20.315179   66415 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:44:20.315218   66415 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:44:20.315274   66415 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:44:20.315317   66415 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:44:20.315371   66415 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:44:20.315416   66415 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:44:20.315471   66415 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:44:20.315542   66415 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:44:20.315622   66415 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:44:20.315677   66415 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:44:20.317306   66415 out.go:235]   - Booting up control plane ...
	I0916 10:44:20.317397   66415 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:44:20.317490   66415 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:44:20.317569   66415 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:44:20.317697   66415 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:44:20.317800   66415 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:44:20.317868   66415 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:44:20.317994   66415 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:44:20.318090   66415 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:44:20.318142   66415 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.612816ms
	I0916 10:44:20.318210   66415 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:44:20.318260   66415 kubeadm.go:310] [api-check] The API server is healthy after 5.986593008s
	I0916 10:44:20.318352   66415 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:44:20.318465   66415 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:44:20.318520   66415 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:44:20.318655   66415 kubeadm.go:310] [mark-control-plane] Marking the node ha-770465 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:44:20.318704   66415 kubeadm.go:310] [bootstrap-token] Using token: sszzzq.es5jj49460nx8z5d
	I0916 10:44:20.320889   66415 out.go:235]   - Configuring RBAC rules ...
	I0916 10:44:20.320981   66415 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:44:20.321068   66415 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:44:20.321189   66415 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:44:20.321331   66415 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:44:20.321472   66415 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:44:20.321564   66415 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:44:20.321699   66415 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:44:20.321739   66415 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:44:20.321786   66415 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:44:20.321792   66415 kubeadm.go:310] 
	I0916 10:44:20.321847   66415 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:44:20.321853   66415 kubeadm.go:310] 
	I0916 10:44:20.321916   66415 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:44:20.321922   66415 kubeadm.go:310] 
	I0916 10:44:20.321947   66415 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:44:20.322004   66415 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:44:20.322052   66415 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:44:20.322057   66415 kubeadm.go:310] 
	I0916 10:44:20.322115   66415 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:44:20.322125   66415 kubeadm.go:310] 
	I0916 10:44:20.322179   66415 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:44:20.322186   66415 kubeadm.go:310] 
	I0916 10:44:20.322232   66415 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:44:20.322295   66415 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:44:20.322357   66415 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:44:20.322363   66415 kubeadm.go:310] 
	I0916 10:44:20.322431   66415 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:44:20.322499   66415 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:44:20.322505   66415 kubeadm.go:310] 
	I0916 10:44:20.322589   66415 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token sszzzq.es5jj49460nx8z5d \
	I0916 10:44:20.322679   66415 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 10:44:20.322699   66415 kubeadm.go:310] 	--control-plane 
	I0916 10:44:20.322704   66415 kubeadm.go:310] 
	I0916 10:44:20.322795   66415 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:44:20.322805   66415 kubeadm.go:310] 
	I0916 10:44:20.322911   66415 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token sszzzq.es5jj49460nx8z5d \
	I0916 10:44:20.323044   66415 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
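The join commands above rely on kubeadm's CA pinning: --discovery-token-ca-cert-hash is the SHA-256 of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate, which a joining node uses to verify it is talking to the right control plane. A sketch of computing that hash in Go (the ca.crt path is the conventional kubeadm location, assumed here):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Conventional kubeadm CA location; adjust if your PKI dir differs.
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER SubjectPublicKeyInfo, not the whole cert.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }

Run against the cluster CA, the output should match the sha256:98a702be... value printed in the join commands above.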
	I0916 10:44:20.323056   66415 cni.go:84] Creating CNI manager for ""
	I0916 10:44:20.323064   66415 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:44:20.324451   66415 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:44:20.325608   66415 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:44:20.329718   66415 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:44:20.329735   66415 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:44:20.346626   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:44:20.533947   66415 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:44:20.534024   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:20.534044   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-770465 minikube.k8s.io/updated_at=2024_09_16T10_44_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-770465 minikube.k8s.io/primary=true
	I0916 10:44:20.541024   66415 ops.go:34] apiserver oom_adj: -16
	I0916 10:44:20.648080   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:21.148715   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:21.648948   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:22.148928   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:22.649021   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:23.148765   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:23.648809   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:24.148424   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:44:24.238563   66415 kubeadm.go:1113] duration metric: took 3.704600298s to wait for elevateKubeSystemPrivileges
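The burst of `kubectl get sa default` runs above is a poll loop: after creating the minikube-rbac clusterrolebinding that grants cluster-admin to the kube-system default service account, minikube retries roughly every 500ms until the default ServiceAccount is visible, then records the elapsed time (the "elevateKubeSystemPrivileges" metric). A minimal sketch of that wait — interval and command line taken from the log, the overall timeout is an assumption for the sketch:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // assumed timeout for the sketch
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo",
                "/var/lib/minikube/binaries/v1.31.1/kubectl",
                "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig",
            )
            // Success means the ServiceAccount exists and RBAC is usable.
            if err := cmd.Run(); err == nil {
                fmt.Println("default ServiceAccount is ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
        panic("timed out waiting for default ServiceAccount")
    }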
	I0916 10:44:24.238597   66415 kubeadm.go:394] duration metric: took 14.059004214s to StartCluster
	I0916 10:44:24.238614   66415 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:24.238673   66415 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:44:24.239304   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:24.239525   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:44:24.239543   66415 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:44:24.239518   66415 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:44:24.239634   66415 addons.go:69] Setting default-storageclass=true in profile "ha-770465"
	I0916 10:44:24.239647   66415 start.go:241] waiting for startup goroutines ...
	I0916 10:44:24.239652   66415 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-770465"
	I0916 10:44:24.239624   66415 addons.go:69] Setting storage-provisioner=true in profile "ha-770465"
	I0916 10:44:24.239675   66415 addons.go:234] Setting addon storage-provisioner=true in "ha-770465"
	I0916 10:44:24.239717   66415 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:44:24.239758   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:44:24.239991   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:44:24.240253   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:44:24.260439   66415 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:44:24.260566   66415 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:44:24.260784   66415 kapi.go:59] client config for ha-770465: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:44:24.261175   66415 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:44:24.261377   66415 addons.go:234] Setting addon default-storageclass=true in "ha-770465"
	I0916 10:44:24.261414   66415 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:44:24.261756   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:44:24.261813   66415 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:44:24.261829   66415 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:44:24.261871   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:24.283034   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:24.283292   66415 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:44:24.283312   66415 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:44:24.283369   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:24.302992   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:24.438174   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:44:24.543471   66415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:44:24.548575   66415 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:44:25.027937   66415 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
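The sed pipeline a few lines up rewrites CoreDNS's ConfigMap in place: it inserts a hosts stanza resolving host.minikube.internal to the network gateway (with fallthrough for every other name) and adds the log plugin before errors. Reconstructed from the sed expressions — not captured from the cluster — the affected Corefile section should end up roughly like:

            log
            errors
            hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf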
	I0916 10:44:25.028051   66415 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:44:25.028071   66415 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:44:25.028144   66415 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 10:44:25.028155   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:25.028165   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:25.028170   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:25.038620   66415 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 10:44:25.039381   66415 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:44:25.039400   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:25.039411   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:25.039417   66415 round_trippers.go:473]     Content-Type: application/json
	I0916 10:44:25.039425   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:25.042229   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
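The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses marks the "standard" StorageClass as the cluster default. In client-go terms that is a read-modify-update setting the well-known is-default-class annotation; a hedged sketch (the annotation key is the standard Kubernetes one, kubeconfig loading is simplified to the default home file):

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes ~/.kube/config points at the target cluster.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        // The standard marker the scheduler/provisioners look for.
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
    }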
	I0916 10:44:25.269771   66415 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 10:44:25.271062   66415 addons.go:510] duration metric: took 1.031511426s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 10:44:25.271149   66415 start.go:246] waiting for cluster config update ...
	I0916 10:44:25.271190   66415 start.go:255] writing updated cluster config ...
	I0916 10:44:25.273007   66415 out.go:201] 
	I0916 10:44:25.274609   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:44:25.274679   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:44:25.276441   66415 out.go:177] * Starting "ha-770465-m02" control-plane node in "ha-770465" cluster
	I0916 10:44:25.278238   66415 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:44:25.279933   66415 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:44:25.281656   66415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:44:25.281682   66415 cache.go:56] Caching tarball of preloaded images
	I0916 10:44:25.281688   66415 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:44:25.281812   66415 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:44:25.281827   66415 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:44:25.281905   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	W0916 10:44:25.301883   66415 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:44:25.301902   66415 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:44:25.301994   66415 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:44:25.302009   66415 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:44:25.302015   66415 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:44:25.302023   66415 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:44:25.302030   66415 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:44:25.303169   66415 image.go:273] response: 
	I0916 10:44:25.356924   66415 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:44:25.356973   66415 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:44:25.357014   66415 start.go:360] acquireMachinesLock for ha-770465-m02: {Name:mk1ae0810eb0d80ca7ae9fe74f31de5324d2e214 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:44:25.357127   66415 start.go:364] duration metric: took 91.548µs to acquireMachinesLock for "ha-770465-m02"
	I0916 10:44:25.357157   66415 start.go:93] Provisioning new machine with config: &{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:44:25.357232   66415 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 10:44:25.358945   66415 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:44:25.359076   66415 start.go:159] libmachine.API.Create for "ha-770465" (driver="docker")
	I0916 10:44:25.359102   66415 client.go:168] LocalClient.Create starting
	I0916 10:44:25.359196   66415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:44:25.359231   66415 main.go:141] libmachine: Decoding PEM data...
	I0916 10:44:25.359248   66415 main.go:141] libmachine: Parsing certificate...
	I0916 10:44:25.359295   66415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:44:25.359313   66415 main.go:141] libmachine: Decoding PEM data...
	I0916 10:44:25.359328   66415 main.go:141] libmachine: Parsing certificate...
	I0916 10:44:25.359516   66415 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:44:25.376717   66415 network_create.go:77] Found existing network {name:ha-770465 subnet:0xc0019b8810 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 10:44:25.376751   66415 kic.go:121] calculated static IP "192.168.49.3" for the "ha-770465-m02" container
	I0916 10:44:25.376803   66415 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:44:25.394583   66415 cli_runner.go:164] Run: docker volume create ha-770465-m02 --label name.minikube.sigs.k8s.io=ha-770465-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:44:25.413245   66415 oci.go:103] Successfully created a docker volume ha-770465-m02
	I0916 10:44:25.413334   66415 cli_runner.go:164] Run: docker run --rm --name ha-770465-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-770465-m02 --entrypoint /usr/bin/test -v ha-770465-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:44:26.039602   66415 oci.go:107] Successfully prepared a docker volume ha-770465-m02
	I0916 10:44:26.039644   66415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:44:26.039694   66415 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:44:26.039810   66415 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-770465-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:44:30.342140   66415 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-770465-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.302273546s)
	I0916 10:44:30.342171   66415 kic.go:203] duration metric: took 4.302475081s to extract preloaded images to volume ...
	W0916 10:44:30.342298   66415 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:44:30.342384   66415 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:44:30.387993   66415 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-770465-m02 --name ha-770465-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-770465-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-770465-m02 --network ha-770465 --ip 192.168.49.3 --volume ha-770465-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:44:30.687266   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m02 --format={{.State.Running}}
	I0916 10:44:30.705239   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m02 --format={{.State.Status}}
	I0916 10:44:30.723829   66415 cli_runner.go:164] Run: docker exec ha-770465-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:44:30.766096   66415 oci.go:144] the created container "ha-770465-m02" has a running status.
	I0916 10:44:30.766123   66415 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa...
	I0916 10:44:30.971239   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:44:30.971311   66415 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:44:30.993690   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m02 --format={{.State.Status}}
	I0916 10:44:31.011874   66415 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:44:31.011895   66415 kic_runner.go:114] Args: [docker exec --privileged ha-770465-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:44:31.129780   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m02 --format={{.State.Status}}
	I0916 10:44:31.146757   66415 machine.go:93] provisionDockerMachine start ...
	I0916 10:44:31.146848   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:31.168557   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:31.168827   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 10:44:31.168846   66415 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:44:31.339063   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m02
	
	I0916 10:44:31.339097   66415 ubuntu.go:169] provisioning hostname "ha-770465-m02"
	I0916 10:44:31.339169   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:31.357687   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:31.357868   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 10:44:31.357881   66415 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-770465-m02 && echo "ha-770465-m02" | sudo tee /etc/hostname
	I0916 10:44:31.502584   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m02
	
	I0916 10:44:31.502667   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:31.519216   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:44:31.519395   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0916 10:44:31.519412   66415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-770465-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-770465-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-770465-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:44:31.651722   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:44:31.651778   66415 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:44:31.651799   66415 ubuntu.go:177] setting up certificates
	I0916 10:44:31.651808   66415 provision.go:84] configureAuth start
	I0916 10:44:31.651864   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m02
	I0916 10:44:31.668932   66415 provision.go:143] copyHostCerts
	I0916 10:44:31.668968   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:44:31.669004   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:44:31.669016   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:44:31.669089   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:44:31.669185   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:44:31.669211   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:44:31.669218   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:44:31.669263   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:44:31.669325   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:44:31.669354   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:44:31.669361   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:44:31.669395   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:44:31.669466   66415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.ha-770465-m02 san=[127.0.0.1 192.168.49.3 ha-770465-m02 localhost minikube]
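provision.go generates a docker-machine server certificate whose SANs cover every name the new node may be reached by: loopback, the static container IP, the hostname, localhost, and minikube. A condensed Go sketch of producing a certificate with that SAN set — real minikube signs with its CA (ca.pem/ca-key.pem above); self-signing here just keeps the example short:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-770465-m02"}},
            // SAN set copied from the provision.go line above.
            DNSNames:    []string{"ha-770465-m02", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
            NotBefore:   time.Now(),
            NotAfter:    time.Now().Add(26280 * time.Hour), // matches CertExpiration in the machine config
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed for brevity; minikube signs with its own CA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }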
	I0916 10:44:32.008664   66415 provision.go:177] copyRemoteCerts
	I0916 10:44:32.008736   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:44:32.008791   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:32.027573   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:44:32.124445   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:44:32.124511   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:44:32.146483   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:44:32.146552   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:44:32.169237   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:44:32.169301   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:44:32.192111   66415 provision.go:87] duration metric: took 540.289843ms to configureAuth
	I0916 10:44:32.192143   66415 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:44:32.192327   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:44:32.192339   66415 machine.go:96] duration metric: took 1.045560198s to provisionDockerMachine
	I0916 10:44:32.192345   66415 client.go:171] duration metric: took 6.833236368s to LocalClient.Create
	I0916 10:44:32.192364   66415 start.go:167] duration metric: took 6.833289798s to libmachine.API.Create "ha-770465"
	I0916 10:44:32.192372   66415 start.go:293] postStartSetup for "ha-770465-m02" (driver="docker")
	I0916 10:44:32.192380   66415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:44:32.192420   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:44:32.192452   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:32.209146   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:44:32.304496   66415 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:44:32.307418   66415 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:44:32.307446   66415 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:44:32.307454   66415 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:44:32.307460   66415 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:44:32.307470   66415 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:44:32.307519   66415 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:44:32.307592   66415 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:44:32.307602   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:44:32.307692   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:44:32.315440   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:44:32.336935   66415 start.go:296] duration metric: took 144.547197ms for postStartSetup
	I0916 10:44:32.337279   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m02
	I0916 10:44:32.353412   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:44:32.353669   66415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:44:32.353710   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:32.370893   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:44:32.460437   66415 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:44:32.464654   66415 start.go:128] duration metric: took 7.10740609s to createHost
	I0916 10:44:32.464678   66415 start.go:83] releasing machines lock for "ha-770465-m02", held for 7.107536685s
	I0916 10:44:32.464753   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m02
	I0916 10:44:32.484577   66415 out.go:177] * Found network options:
	I0916 10:44:32.486485   66415 out.go:177]   - NO_PROXY=192.168.49.2
	W0916 10:44:32.487930   66415 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:44:32.487974   66415 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:44:32.488043   66415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:44:32.488083   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:32.488151   66415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:44:32.488222   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:44:32.507086   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:44:32.507166   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:44:32.674952   66415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:44:32.698208   66415 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:44:32.698295   66415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:44:32.722654   66415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
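The find/sed pass above normalizes the node's loopback CNI config — adding a "name" field if missing and pinning cniVersion to 1.0.0 — and then sidelines any bridge/podman configs by renaming them *.mk_disabled so they cannot shadow the CNI minikube installs. Reconstructed from the sed expressions, the patched loopback file should end up roughly like:

    {
        "cniVersion": "1.0.0",
        "name": "loopback",
        "type": "loopback"
    }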
	I0916 10:44:32.722678   66415 start.go:495] detecting cgroup driver to use...
	I0916 10:44:32.722706   66415 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:44:32.722746   66415 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:44:32.733969   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:44:32.744574   66415 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:44:32.744626   66415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:44:32.756928   66415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:44:32.770398   66415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:44:32.848552   66415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:44:32.933680   66415 docker.go:233] disabling docker service ...
	I0916 10:44:32.933736   66415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:44:32.951795   66415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:44:32.962537   66415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:44:33.040640   66415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:44:33.117197   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:44:33.127550   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:44:33.142072   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:44:33.151064   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:44:33.159837   66415 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:44:33.159904   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:44:33.168895   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:44:33.177939   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:44:33.186896   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:44:33.195503   66415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:44:33.203773   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:44:33.212377   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:44:33.221013   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:44:33.229862   66415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:44:33.238263   66415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:44:33.245872   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:44:33.319936   66415 ssh_runner.go:195] Run: sudo systemctl restart containerd
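Taken together, the sed edits above leave /etc/containerd/config.toml using cgroupfs (not systemd) cgroup management, the runc v2 shim, the pause 3.10 sandbox image, the standard CNI conf dir, and unprivileged ports enabled; containerd is then restarted to pick the file up. The touched keys should read roughly as follows — a fragment reconstructed from the sed expressions, not dumped from the node:

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.10"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false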
	I0916 10:44:33.428668   66415 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:44:33.428730   66415 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:44:33.432287   66415 start.go:563] Will wait 60s for crictl version
	I0916 10:44:33.432354   66415 ssh_runner.go:195] Run: which crictl
	I0916 10:44:33.435480   66415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:44:33.467247   66415 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:44:33.467316   66415 ssh_runner.go:195] Run: containerd --version
	I0916 10:44:33.489656   66415 ssh_runner.go:195] Run: containerd --version
	I0916 10:44:33.512880   66415 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:44:33.514244   66415 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:44:33.515495   66415 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:44:33.532235   66415 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:44:33.535660   66415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:44:33.546674   66415 mustload.go:65] Loading cluster: ha-770465
	I0916 10:44:33.546842   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:44:33.547035   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:44:33.563709   66415 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:44:33.564100   66415 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465 for IP: 192.168.49.3
	I0916 10:44:33.564115   66415 certs.go:194] generating shared ca certs ...
	I0916 10:44:33.564130   66415 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:33.564264   66415 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:44:33.564313   66415 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:44:33.564323   66415 certs.go:256] generating profile certs ...
	I0916 10:44:33.564395   66415 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key
	I0916 10:44:33.564422   66415 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.64a8388d
	I0916 10:44:33.564433   66415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.64a8388d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 10:44:33.727218   66415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.64a8388d ...
	I0916 10:44:33.727252   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.64a8388d: {Name:mkc920debfcb3a99b73d5e7c12a59e767fd08f28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:33.727426   66415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.64a8388d ...
	I0916 10:44:33.727440   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.64a8388d: {Name:mkc04c70d0ba2d121f62899a67c94a0209c797d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:44:33.727513   66415 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.64a8388d -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt
	I0916 10:44:33.727643   66415 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.64a8388d -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key
	I0916 10:44:33.727790   66415 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key
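
certs.go:363 regenerates the apiserver serving certificate because its SAN list must now cover the new node: the service VIP 10.96.0.1, loopback, both control-plane IPs, and the kube-vip address 192.168.49.254, so the certificate verifies no matter which endpoint a client dials. A hedged sketch of issuing a certificate with those IP SANs using crypto/x509 (a fresh throwaway CA stands in for minikubeCA, which reuses an existing key pair; error handling elided for brevity):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for minikubeCA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving cert carrying the IP SANs listed in the log above.
    	key, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"), net.ParseIP("192.168.49.254"),
    		},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
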
	I0916 10:44:33.727805   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:44:33.727819   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:44:33.727832   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:44:33.727844   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:44:33.727856   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:44:33.727869   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:44:33.727880   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:44:33.727892   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:44:33.727941   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:44:33.727970   66415 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:44:33.727980   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:44:33.728004   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:44:33.728025   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:44:33.728047   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:44:33.728082   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:44:33.728110   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:44:33.728124   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:33.728136   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:44:33.728181   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:33.745014   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:33.832043   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:44:33.835502   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:44:33.846936   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:44:33.850017   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 10:44:33.861745   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:44:33.865357   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:44:33.877065   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:44:33.880343   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 10:44:33.891581   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:44:33.894689   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:44:33.905603   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:44:33.908571   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:44:33.919352   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:44:33.941883   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:44:33.964366   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:44:33.986170   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:44:34.008319   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0916 10:44:34.031142   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:44:34.053026   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:44:34.074843   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:44:34.096695   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:44:34.119039   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:44:34.140277   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:44:34.161465   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:44:34.177873   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 10:44:34.194185   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:44:34.209874   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 10:44:34.225730   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:44:34.241906   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:44:34.257709   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:44:34.273936   66415 ssh_runner.go:195] Run: openssl version
	I0916 10:44:34.279150   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:44:34.287998   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:44:34.291319   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:44:34.291369   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:44:34.297633   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:44:34.306391   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:44:34.315072   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:34.318294   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:34.318352   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:44:34.324456   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:44:34.333012   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:44:34.341622   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:44:34.345125   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:44:34.345171   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:44:34.351507   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
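
The ls / openssl x509 -hash / ln -fs triple, repeated for each of the three certificates above, populates an OpenSSL-style trust store: openssl x509 -hash -noout prints the subject-name hash (b5213941 for minikubeCA here), and a /etc/ssl/certs/<hash>.0 symlink lets OpenSSL locate the CA by hash at verification time. A sketch of the same sequence driven from Go (hypothetical wrapper around the openssl binary):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCACert links certPath into /etc/ssl/certs under its OpenSSL
    // subject hash, mirroring the openssl/ln commands in the log above.
    func installCACert(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	if _, err := os.Lstat(link); err == nil {
    		return nil // already linked (the `test -L` guard)
    	}
    	return os.Symlink(certPath, link)
    }

    func main() {
    	for _, c := range []string{
    		"/usr/share/ca-certificates/minikubeCA.pem",
    		"/usr/share/ca-certificates/11189.pem",
    		"/usr/share/ca-certificates/111892.pem",
    	} {
    		if err := installCACert(c); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    		}
    	}
    }
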
	I0916 10:44:34.360056   66415 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:44:34.363128   66415 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:44:34.363177   66415 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.1 containerd true true} ...
	I0916 10:44:34.363262   66415 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-770465-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:44:34.363292   66415 kube-vip.go:115] generating kube-vip config ...
	I0916 10:44:34.363334   66415 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:44:34.374326   66415 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:44:34.374400   66415 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
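
The static pod manifest above is generated on the fly: because the earlier lsmod probe found no ip_vs modules, kube-vip skips IPVS load-balancing and instead advertises the VIP 192.168.49.254 on eth0 via ARP (vip_arp=true), with ownership of the address coordinated through the plndr-cp-lock lease. A trimmed-down sketch of how such a manifest can be templated (hypothetical template, not minikube's kube-vip.go, which emits the full spec shown here):

    package main

    import (
    	"os"
    	"text/template"
    )

    // A reduced stand-in for the kube-vip manifest template; the real
    // generator fills in leader-election and capability settings too.
    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: ghcr.io/kube-vip/kube-vip:v0.8.0
        args: ["manager"]
        env:
        - name: vip_arp
          value: "true"
        - name: vip_interface
          value: {{ .Interface }}
        - name: address
          value: {{ .VIP }}
        - name: port
          value: "{{ .Port }}"
      hostNetwork: true
    `

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(manifest))
    	_ = t.Execute(os.Stdout, struct {
    		Interface, VIP string
    		Port           int
    	}{"eth0", "192.168.49.254", 8443})
    }
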
	I0916 10:44:34.374460   66415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:44:34.382536   66415 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:44:34.382606   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:44:34.390792   66415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:44:34.407366   66415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:44:34.424930   66415 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:44:34.441722   66415 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:44:34.445008   66415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:44:34.455194   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:44:34.535021   66415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:44:34.547499   66415 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:44:34.547796   66415 start.go:317] joinCluster: &{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:44:34.547925   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:44:34.547968   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:44:34.566237   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:44:34.708100   66415 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:44:34.708139   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2ux8zi.nv82uirjdh1l2nfj --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-770465-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0916 10:44:38.829931   66415 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2ux8zi.nv82uirjdh1l2nfj --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-770465-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (4.121756428s)
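
The join command pins the cluster CA with --discovery-token-ca-cert-hash: kubeadm on the joining node only trusts a discovered CA whose SHA-256 over the DER-encoded Subject Public Key Info matches the given digest, which defeats man-in-the-middle substitution during token-based discovery. The hash can be recomputed from ca.crt with the standard library alone:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // Recompute kubeadm's discovery-token-ca-cert-hash for a CA certificate:
    // SHA-256 over the DER-encoded Subject Public Key Info of the cert.
    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
    }
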
	I0916 10:44:38.829969   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:44:39.645693   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-770465-m02 minikube.k8s.io/updated_at=2024_09_16T10_44_39_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-770465 minikube.k8s.io/primary=false
	I0916 10:44:39.742050   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-770465-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 10:44:39.937226   66415 start.go:319] duration metric: took 5.389485167s to joinCluster
	I0916 10:44:39.937315   66415 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:44:39.937787   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:44:39.939301   66415 out.go:177] * Verifying Kubernetes components...
	I0916 10:44:39.940876   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:44:40.422539   66415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:44:40.439174   66415 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:44:40.439579   66415 kapi.go:59] client config for ha-770465: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:44:40.439680   66415 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 10:44:40.440006   66415 node_ready.go:35] waiting up to 6m0s for node "ha-770465-m02" to be "Ready" ...
	I0916 10:44:40.440125   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:40.440138   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.440152   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.440161   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.448886   66415 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0916 10:44:40.449675   66415 node_ready.go:49] node "ha-770465-m02" has status "Ready":"True"
	I0916 10:44:40.449701   66415 node_ready.go:38] duration metric: took 9.668969ms for node "ha-770465-m02" to be "Ready" ...
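
Everything that follows is a readiness poll: roughly every 500ms minikube GETs the pod and its node through 192.168.49.2:8443 directly (kubeadm.go:483 above swapped out the stale VIP host, since 192.168.49.254 may not answer until kube-vip elects a leader) until each Ready condition reports True. A hedged client-go sketch of the equivalent node check (assumes a kubeconfig at the default location; names taken from the log):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	// Poll the node every 500ms for up to 6m, like node_ready.go above.
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := client.CoreV1().Nodes().Get(ctx, "ha-770465-m02", metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient error; keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    	fmt.Println("ready:", err == nil)
    }
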
	I0916 10:44:40.449713   66415 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:44:40.449800   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:44:40.449814   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.449825   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.449833   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.454089   66415 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:44:40.463201   66415 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:40.463354   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:44:40.463368   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.463376   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.463386   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.466421   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:40.467104   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:40.467120   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.467130   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.467135   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.469522   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:40.964218   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:44:40.964239   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.964247   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.964252   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.967136   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:40.967850   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:40.967903   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.967919   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.967929   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.970268   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:40.970958   66415 pod_ready.go:93] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:40.970980   66415 pod_ready.go:82] duration metric: took 507.742956ms for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:40.970990   66415 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:40.971053   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:40.971061   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.971068   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.971071   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.974690   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:40.975248   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:40.975265   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:40.975274   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:40.975280   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:40.977441   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:41.471524   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:41.471546   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:41.471556   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:41.471561   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:41.474404   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:41.475254   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:41.475276   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:41.475287   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:41.475295   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:41.477551   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:41.972038   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:41.972060   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:41.972071   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:41.972089   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:41.974686   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:41.975507   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:41.975528   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:41.975538   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:41.975543   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:41.977837   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.471933   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:42.471960   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:42.471972   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.471977   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.474859   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.475561   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:42.475578   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:42.475586   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.475591   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.477822   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.971628   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:42.971655   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:42.971673   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.971680   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.974564   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.975358   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:42.975375   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:42.975388   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:42.975393   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:42.977642   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:42.978137   66415 pod_ready.go:103] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:43.471554   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:43.471576   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:43.471587   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:43.471593   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:43.474399   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:43.475010   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:43.475028   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:43.475038   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:43.475043   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:43.477419   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:43.971286   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:43.971306   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:43.971313   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:43.971318   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:43.973702   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:43.974286   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:43.974301   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:43.974308   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:43.974313   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:43.976360   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:44.471321   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:44.471343   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:44.471350   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:44.471354   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:44.473967   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:44.474610   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:44.474627   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:44.474637   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:44.474642   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:44.476820   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:44.971236   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:44.971257   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:44.971268   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:44.971277   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:44.973932   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:44.974727   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:44.974744   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:44.974751   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:44.974756   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:44.976923   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:45.471849   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:45.471873   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:45.471884   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:45.471888   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:45.474566   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:45.475187   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:45.475205   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:45.475212   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:45.475217   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:45.477291   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:45.477713   66415 pod_ready.go:103] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:45.971915   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:45.971935   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:45.971943   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:45.971946   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:45.974615   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:45.975214   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:45.975228   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:45.975237   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:45.975240   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:45.977499   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:46.471338   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:46.471361   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:46.471369   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:46.471375   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:46.474271   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:46.474975   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:46.474990   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:46.474998   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:46.475004   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:46.477142   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:46.971971   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:46.971992   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:46.972000   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:46.972003   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:46.974645   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:46.975233   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:46.975249   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:46.975256   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:46.975260   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:46.977325   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:47.471418   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:47.471443   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:47.471453   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:47.471458   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:47.474346   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:47.475089   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:47.475111   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:47.475121   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:47.475129   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:47.477434   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:47.477948   66415 pod_ready.go:103] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:47.972220   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:47.972241   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:47.972248   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:47.972253   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:47.974454   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:47.975031   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:47.975048   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:47.975056   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:47.975060   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:47.977117   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:48.471961   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:48.471983   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:48.471992   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:48.471995   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:48.474686   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:48.475339   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:48.475357   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:48.475366   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:48.475372   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:48.477680   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:48.971495   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:48.971516   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:48.971524   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:48.971530   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:48.974253   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:48.974944   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:48.974964   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:48.974975   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:48.974979   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:48.977164   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:49.472031   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:49.472049   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:49.472055   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:49.472058   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:49.474664   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:49.475283   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:49.475299   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:49.475307   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:49.475313   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:49.477626   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:49.478046   66415 pod_ready.go:103] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:49.971482   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:49.971504   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:49.971512   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:49.971515   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:49.977019   66415 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:44:49.977646   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:49.977664   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:49.977669   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:49.977673   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:49.979911   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:50.471907   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:50.471933   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.471944   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.471950   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.474692   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:50.475399   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:50.475415   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.475425   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.475430   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.477580   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:50.971347   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:44:50.971368   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.971376   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.971380   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.974251   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:50.975002   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:50.975020   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.975028   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.975032   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.977288   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:50.977740   66415 pod_ready.go:93] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:50.977758   66415 pod_ready.go:82] duration metric: took 10.00676272s for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:50.977769   66415 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:50.977830   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465
	I0916 10:44:50.977838   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.977845   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.977849   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.979915   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:50.980392   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:50.980406   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.980413   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.980416   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.982289   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:50.982729   66415 pod_ready.go:93] pod "etcd-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:50.982748   66415 pod_ready.go:82] duration metric: took 4.970311ms for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:50.982757   66415 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:50.982808   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:50.982816   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.982822   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.982827   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.984719   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:50.985276   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:50.985292   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:50.985299   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:50.985304   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:50.987264   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:51.483906   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:51.483928   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:51.483936   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.483941   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.486635   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:51.487196   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:51.487213   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:51.487221   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.487225   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.489524   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:51.983390   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:51.983413   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:51.983421   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.983424   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.986343   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:51.986909   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:51.986927   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:51.986934   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:51.986938   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:51.989448   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:52.483469   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:52.483492   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:52.483500   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:52.483504   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:52.486301   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:52.486911   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:52.486927   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:52.486932   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:52.486935   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:52.489214   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:52.983022   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:52.983045   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:52.983055   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:52.983061   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:52.985856   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:52.986473   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:52.986492   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:52.986502   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:52.986510   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:52.988919   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:52.989377   66415 pod_ready.go:103] pod "etcd-ha-770465-m02" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:53.483836   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:53.483863   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:53.483871   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:53.483875   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:53.486813   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:53.487481   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:53.487500   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:53.487510   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:53.487517   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:53.490185   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:53.983005   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:53.983026   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:53.983033   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:53.983037   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:53.985814   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:53.986395   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:53.986414   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:53.986425   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:53.986431   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:53.988667   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:54.483764   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:54.483790   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:54.483807   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:54.483816   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:54.486551   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:54.487231   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:54.487252   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:54.487264   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:54.487269   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:54.489916   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:54.983912   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:54.983933   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:54.983941   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:54.983946   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:54.986591   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:54.987196   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:54.987212   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:54.987222   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:54.987226   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:54.989780   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:54.990316   66415 pod_ready.go:103] pod "etcd-ha-770465-m02" in "kube-system" namespace has status "Ready":"False"
	I0916 10:44:55.483155   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:55.483177   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:55.483187   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:55.483191   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:55.485960   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:55.486554   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:55.486573   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:55.486581   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:55.486586   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:55.489033   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:55.982908   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:55.982929   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:55.982937   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:55.982941   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:55.985669   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:55.986372   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:55.986389   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:55.986396   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:55.986401   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:55.988702   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:56.483520   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:56.483540   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:56.483547   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:56.483552   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:56.486337   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:56.486960   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:56.486978   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:56.486986   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:56.486991   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:56.489646   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:56.983120   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:56.983141   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:56.983148   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:56.983152   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:56.985997   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:56.986804   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:56.986822   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:56.986832   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:56.986837   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:56.989239   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.483539   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:44:57.483563   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.483571   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.483575   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.486475   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.487046   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:57.487062   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.487072   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.487078   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.489747   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.490211   66415 pod_ready.go:93] pod "etcd-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:57.490231   66415 pod_ready.go:82] duration metric: took 6.507468086s for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
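
The polling loop above is minikube's pod_ready check: roughly every 500 ms it GETs the pod, inspects its Ready condition, then GETs the owning node. A minimal client-go sketch of the same pattern (the helper name, interval, and kubeconfig handling here are illustrative, not minikube's actual code):

// A minimal sketch, assuming a standard client-go setup; minikube's real
// pod_ready helper differs in detail (it also checks the node).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "etcd-ha-770465-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
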
	I0916 10:44:57.490247   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.490304   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465
	I0916 10:44:57.490309   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.490318   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.490322   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.492582   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.493154   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:57.493172   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.493179   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.493184   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.495245   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.495696   66415 pod_ready.go:93] pod "kube-apiserver-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:57.495714   66415 pod_ready.go:82] duration metric: took 5.461087ms for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.495726   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.495865   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m02
	I0916 10:44:57.495878   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.495888   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.495894   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.498354   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.498981   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:57.498994   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.499002   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.499007   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.501125   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.501560   66415 pod_ready.go:93] pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:57.501580   66415 pod_ready.go:82] duration metric: took 5.847741ms for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.501590   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.501644   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465
	I0916 10:44:57.501655   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.501663   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.501669   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.503690   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.504409   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:57.504425   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.504436   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.504444   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.506188   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:57.506577   66415 pod_ready.go:93] pod "kube-controller-manager-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:57.506596   66415 pod_ready.go:82] duration metric: took 4.999332ms for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.506605   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.506653   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m02
	I0916 10:44:57.506661   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.506667   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.506675   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.508471   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:57.509039   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:57.509055   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.509061   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.509066   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.510842   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:44:57.511253   66415 pod_ready.go:93] pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:57.511279   66415 pod_ready.go:82] duration metric: took 4.665305ms for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.511290   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:57.683630   66415 request.go:632] Waited for 172.264763ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:44:57.683690   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:44:57.683695   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.683701   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.683706   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.686543   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.884519   66415 request.go:632] Waited for 197.380218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:57.884599   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:44:57.884611   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:57.884621   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:57.884633   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:57.887441   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:57.887923   66415 pod_ready.go:93] pod "kube-proxy-4qgcs" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:57.887942   66415 pod_ready.go:82] duration metric: took 376.646228ms for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
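
The "Waited for … due to client-side throttling" lines are emitted by client-go itself: its default rate limiter (QPS 5, burst 10 when a rest.Config leaves those fields zero) queues these back-to-back GETs, and as the message notes this is not API-server priority and fairness. A sketch of raising the client-side limits (values are illustrative):

// A minimal sketch: lift client-go's default limiter so bursts of GETs like
// the polling above aren't queued client-side.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // sustained requests per second before queueing starts
	cfg.Burst = 100 // short bursts allowed above QPS
	_ = kubernetes.NewForConfigOrDie(cfg)
}
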
	I0916 10:44:57.887951   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:58.084019   66415 request.go:632] Waited for 196.003042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:44:58.084110   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:44:58.084117   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:58.084124   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:58.084133   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:58.086867   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:58.283716   66415 request.go:632] Waited for 196.276486ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:58.283804   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:58.283822   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:58.283832   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:58.283838   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:58.286294   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:58.286746   66415 pod_ready.go:93] pod "kube-proxy-gd2mt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:58.286764   66415 pod_ready.go:82] duration metric: took 398.806827ms for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:58.286775   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:58.483879   66415 request.go:632] Waited for 197.025817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:44:58.483931   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:44:58.483936   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:58.483943   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:58.483947   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:58.486667   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:58.684569   66415 request.go:632] Waited for 197.3405ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:58.684662   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:44:58.684672   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:58.684680   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:58.684683   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:58.687093   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:58.687525   66415 pod_ready.go:93] pod "kube-scheduler-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:44:58.687542   66415 pod_ready.go:82] duration metric: took 400.759791ms for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:44:58.687555   66415 pod_ready.go:39] duration metric: took 18.237829446s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:44:58.687576   66415 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:44:58.687634   66415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:44:58.698573   66415 api_server.go:72] duration metric: took 18.761215592s to wait for apiserver process to appear ...
	I0916 10:44:58.698608   66415 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:44:58.698628   66415 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:44:58.702854   66415 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:44:58.702934   66415 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0916 10:44:58.702942   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:58.702950   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:58.702955   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:58.703681   66415 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:44:58.703843   66415 api_server.go:141] control plane version: v1.31.1
	I0916 10:44:58.703867   66415 api_server.go:131] duration metric: took 5.250776ms to wait for apiserver health ...
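
Once the pods settle, minikube probes the raw /healthz endpoint and then reads /version to record the control-plane build. A sketch of those two probes with client-go (assuming a reachable kubeconfig; error handling abbreviated):

// A minimal sketch of the healthz and version probes seen above.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET /healthz: the apiserver answers with a literal "ok" body when healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version: the control-plane build, v1.31.1 in the run above.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
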
	I0916 10:44:58.703874   66415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:44:58.884320   66415 request.go:632] Waited for 180.346886ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:44:58.884395   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:44:58.884404   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:58.884415   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:58.884425   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:58.888635   66415 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:44:58.892728   66415 system_pods.go:59] 17 kube-system pods found
	I0916 10:44:58.892780   66415 system_pods.go:61] "coredns-7c65d6cfc9-9lw9q" [4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8] Running
	I0916 10:44:58.892791   66415 system_pods.go:61] "coredns-7c65d6cfc9-sbs22" [89925692-76b4-481f-bac7-16f06bea792a] Running
	I0916 10:44:58.892797   66415 system_pods.go:61] "etcd-ha-770465" [041e8a27-b6a3-4201-990f-c367da8c5eb5] Running
	I0916 10:44:58.892803   66415 system_pods.go:61] "etcd-ha-770465-m02" [1f922c60-bf68-44df-aa8c-42dce50e4dac] Running
	I0916 10:44:58.892808   66415 system_pods.go:61] "kindnet-grjh8" [98658171-1486-44a9-8a20-0d77ea019206] Running
	I0916 10:44:58.892814   66415 system_pods.go:61] "kindnet-kht59" [3124d723-6fc6-4dce-9ecf-bb40b4665208] Running
	I0916 10:44:58.892820   66415 system_pods.go:61] "kube-apiserver-ha-770465" [aa8ba784-661f-4074-b0d3-f98cddf8ce8c] Running
	I0916 10:44:58.892826   66415 system_pods.go:61] "kube-apiserver-ha-770465-m02" [7acdd845-890e-449a-b112-2efbc19a7ef5] Running
	I0916 10:44:58.892835   66415 system_pods.go:61] "kube-controller-manager-ha-770465" [f0a59ee1-bf5e-432d-a185-f9497c75ada4] Running
	I0916 10:44:58.892841   66415 system_pods.go:61] "kube-controller-manager-ha-770465-m02" [55d7421b-5679-416d-a6ae-c36738c1ec32] Running
	I0916 10:44:58.892846   66415 system_pods.go:61] "kube-proxy-4qgcs" [0dbed632-c44c-4a17-8486-b328e2cc41d3] Running
	I0916 10:44:58.892853   66415 system_pods.go:61] "kube-proxy-gd2mt" [fc3bf04d-635a-4264-883b-2fd72cac2e24] Running
	I0916 10:44:58.892859   66415 system_pods.go:61] "kube-scheduler-ha-770465" [85f95d4d-a2d4-45cf-8afe-be446333b9d0] Running
	I0916 10:44:58.892865   66415 system_pods.go:61] "kube-scheduler-ha-770465-m02" [ec894fe5-a992-49f2-a96f-caab3848f7dd] Running
	I0916 10:44:58.892873   66415 system_pods.go:61] "kube-vip-ha-770465" [8fe17ef3-e8ea-42a9-bfd4-5f556d0a3f77] Running
	I0916 10:44:58.892878   66415 system_pods.go:61] "kube-vip-ha-770465-m02" [ffd7ee7b-efe1-4d2d-b2f8-5fd898de7dbf] Running
	I0916 10:44:58.892883   66415 system_pods.go:61] "storage-provisioner" [cf470925-4874-4744-8015-700e93ab924f] Running
	I0916 10:44:58.892892   66415 system_pods.go:74] duration metric: took 189.008696ms to wait for pod list to return data ...
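
The 17-pod inventory above is a plain namespaced list; shortly afterwards the same list is fetched again to confirm every pod reports Running before k8s-apps are declared healthy. A sketch of that check (illustrative):

// A minimal sketch: list kube-system pods and flag any that aren't Running,
// mirroring the system_pods passes in the log.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			fmt.Printf("%q is %s, not Running\n", p.Name, p.Status.Phase)
		}
	}
}
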
	I0916 10:44:58.892904   66415 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:44:59.084361   66415 request.go:632] Waited for 191.360753ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:44:59.084413   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:44:59.084418   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:59.084432   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:59.084440   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:59.087222   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:44:59.087471   66415 default_sa.go:45] found service account: "default"
	I0916 10:44:59.087489   66415 default_sa.go:55] duration metric: took 194.578547ms for default service account to be created ...
	I0916 10:44:59.087497   66415 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:44:59.283908   66415 request.go:632] Waited for 196.345018ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:44:59.283997   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:44:59.284011   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:59.284024   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:59.284035   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:59.287894   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:59.291907   66415 system_pods.go:86] 17 kube-system pods found
	I0916 10:44:59.291934   66415 system_pods.go:89] "coredns-7c65d6cfc9-9lw9q" [4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8] Running
	I0916 10:44:59.291940   66415 system_pods.go:89] "coredns-7c65d6cfc9-sbs22" [89925692-76b4-481f-bac7-16f06bea792a] Running
	I0916 10:44:59.291944   66415 system_pods.go:89] "etcd-ha-770465" [041e8a27-b6a3-4201-990f-c367da8c5eb5] Running
	I0916 10:44:59.291948   66415 system_pods.go:89] "etcd-ha-770465-m02" [1f922c60-bf68-44df-aa8c-42dce50e4dac] Running
	I0916 10:44:59.291952   66415 system_pods.go:89] "kindnet-grjh8" [98658171-1486-44a9-8a20-0d77ea019206] Running
	I0916 10:44:59.291958   66415 system_pods.go:89] "kindnet-kht59" [3124d723-6fc6-4dce-9ecf-bb40b4665208] Running
	I0916 10:44:59.291964   66415 system_pods.go:89] "kube-apiserver-ha-770465" [aa8ba784-661f-4074-b0d3-f98cddf8ce8c] Running
	I0916 10:44:59.291970   66415 system_pods.go:89] "kube-apiserver-ha-770465-m02" [7acdd845-890e-449a-b112-2efbc19a7ef5] Running
	I0916 10:44:59.291978   66415 system_pods.go:89] "kube-controller-manager-ha-770465" [f0a59ee1-bf5e-432d-a185-f9497c75ada4] Running
	I0916 10:44:59.291988   66415 system_pods.go:89] "kube-controller-manager-ha-770465-m02" [55d7421b-5679-416d-a6ae-c36738c1ec32] Running
	I0916 10:44:59.291996   66415 system_pods.go:89] "kube-proxy-4qgcs" [0dbed632-c44c-4a17-8486-b328e2cc41d3] Running
	I0916 10:44:59.292003   66415 system_pods.go:89] "kube-proxy-gd2mt" [fc3bf04d-635a-4264-883b-2fd72cac2e24] Running
	I0916 10:44:59.292007   66415 system_pods.go:89] "kube-scheduler-ha-770465" [85f95d4d-a2d4-45cf-8afe-be446333b9d0] Running
	I0916 10:44:59.292013   66415 system_pods.go:89] "kube-scheduler-ha-770465-m02" [ec894fe5-a992-49f2-a96f-caab3848f7dd] Running
	I0916 10:44:59.292017   66415 system_pods.go:89] "kube-vip-ha-770465" [8fe17ef3-e8ea-42a9-bfd4-5f556d0a3f77] Running
	I0916 10:44:59.292022   66415 system_pods.go:89] "kube-vip-ha-770465-m02" [ffd7ee7b-efe1-4d2d-b2f8-5fd898de7dbf] Running
	I0916 10:44:59.292025   66415 system_pods.go:89] "storage-provisioner" [cf470925-4874-4744-8015-700e93ab924f] Running
	I0916 10:44:59.292032   66415 system_pods.go:126] duration metric: took 204.529072ms to wait for k8s-apps to be running ...
	I0916 10:44:59.292040   66415 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:44:59.292098   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:44:59.302720   66415 system_svc.go:56] duration metric: took 10.671731ms WaitForService to wait for kubelet
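
system_svc verifies kubelet by running systemctl is-active --quiet over SSH and treating a zero exit status as "active". A sketch of that run-and-check-exit-status pattern with golang.org/x/crypto/ssh (host, port, user, and key path are placeholders lifted from the log, not minikube's ssh_runner):

// A minimal sketch of the ssh_runner pattern: the remote command's exit
// status is the liveness signal.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/ha-770465-m03/id_rsa") // placeholder path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32798", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable on a test rig; verify keys elsewhere
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Run returns an *ssh.ExitError for non-zero exit codes.
	if err := sess.Run("sudo systemctl is-active --quiet kubelet"); err != nil {
		fmt.Println("kubelet not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
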
	I0916 10:44:59.302745   66415 kubeadm.go:582] duration metric: took 19.365391948s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:44:59.302761   66415 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:44:59.484174   66415 request.go:632] Waited for 181.324017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:44:59.484220   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:44:59.484225   66415 round_trippers.go:469] Request Headers:
	I0916 10:44:59.484234   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:44:59.484241   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:44:59.487361   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:44:59.488071   66415 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:44:59.488097   66415 node_conditions.go:123] node cpu capacity is 8
	I0916 10:44:59.488111   66415 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:44:59.488116   66415 node_conditions.go:123] node cpu capacity is 8
	I0916 10:44:59.488121   66415 node_conditions.go:105] duration metric: took 185.35596ms to run NodePressure ...
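
The NodePressure pass reads each node's capacity from the nodes list (here 8 CPUs and 304681132Ki of ephemeral storage on both nodes). A sketch of pulling those fields (illustrative):

// A minimal sketch: print CPU and ephemeral-storage capacity per node, as
// node_conditions.go logs above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}
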
	I0916 10:44:59.488134   66415 start.go:241] waiting for startup goroutines ...
	I0916 10:44:59.488187   66415 start.go:255] writing updated cluster config ...
	I0916 10:44:59.490522   66415 out.go:201] 
	I0916 10:44:59.491848   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:44:59.491958   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:44:59.493433   66415 out.go:177] * Starting "ha-770465-m03" control-plane node in "ha-770465" cluster
	I0916 10:44:59.494431   66415 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:44:59.495519   66415 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:44:59.496566   66415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:44:59.496592   66415 cache.go:56] Caching tarball of preloaded images
	I0916 10:44:59.496593   66415 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:44:59.496681   66415 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:44:59.496694   66415 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:44:59.496800   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	W0916 10:44:59.514737   66415 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:44:59.514756   66415 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:44:59.514845   66415 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:44:59.514862   66415 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:44:59.514869   66415 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:44:59.514880   66415 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:44:59.514891   66415 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:44:59.515886   66415 image.go:273] response: 
	I0916 10:44:59.564683   66415 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:44:59.564725   66415 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:44:59.564763   66415 start.go:360] acquireMachinesLock for ha-770465-m03: {Name:mk5962b775140909e26682052ad5dc2dfc9dc910 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:44:59.564857   66415 start.go:364] duration metric: took 76.168µs to acquireMachinesLock for "ha-770465-m03"
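
acquireMachinesLock serializes machine creation across concurrent minikube processes; the logged Spec (Name, Delay:500ms, Timeout:10m0s) points at a named-mutex library. The sketch below is an illustrative stand-in using a plain flock(2) file lock with the same retry delay and timeout, not minikube's implementation:

// A minimal flock-based sketch of a machines lock (Linux).
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

func acquire(path string, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil // lock held until f is closed
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // the Delay between retries in the log's Spec
	}
}

func main() {
	f, err := acquire("/tmp/minikube-machines.lock", 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	fmt.Println("acquired machines lock")
}
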
	I0916 10:44:59.564881   66415 start.go:93] Provisioning new machine with config: &{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:44:59.564979   66415 start.go:125] createHost starting for "m03" (driver="docker")
	I0916 10:44:59.566542   66415 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:44:59.566644   66415 start.go:159] libmachine.API.Create for "ha-770465" (driver="docker")
	I0916 10:44:59.566676   66415 client.go:168] LocalClient.Create starting
	I0916 10:44:59.566751   66415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:44:59.566779   66415 main.go:141] libmachine: Decoding PEM data...
	I0916 10:44:59.566794   66415 main.go:141] libmachine: Parsing certificate...
	I0916 10:44:59.566842   66415 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:44:59.566860   66415 main.go:141] libmachine: Decoding PEM data...
	I0916 10:44:59.566870   66415 main.go:141] libmachine: Parsing certificate...
	I0916 10:44:59.567053   66415 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:44:59.582958   66415 network_create.go:77] Found existing network {name:ha-770465 subnet:0xc001b6ede0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0916 10:44:59.583001   66415 kic.go:121] calculated static IP "192.168.49.4" for the "ha-770465-m03" container
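
kic.go derives each node's static IP arithmetically from the existing docker network: gateway .1, first node .2, m02 .3, so m03 lands on 192.168.49.4. A sketch of that calculation (the helper name is hypothetical):

// A minimal sketch of the static-IP derivation seen above.
package main

import (
	"fmt"
	"net"
)

func nodeIP(subnet string, nodeIndex int) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(subnet)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("IPv4 subnet expected: %s", subnet)
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3] += byte(1 + nodeIndex) // .1 is the gateway; node 1 gets .2, node 2 gets .3, ...
	return out, nil
}

func main() {
	ip, err := nodeIP("192.168.49.0/24", 3) // third node → 192.168.49.4
	if err != nil {
		panic(err)
	}
	fmt.Println(ip)
}
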
	I0916 10:44:59.583055   66415 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:44:59.598626   66415 cli_runner.go:164] Run: docker volume create ha-770465-m03 --label name.minikube.sigs.k8s.io=ha-770465-m03 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:44:59.615801   66415 oci.go:103] Successfully created a docker volume ha-770465-m03
	I0916 10:44:59.615876   66415 cli_runner.go:164] Run: docker run --rm --name ha-770465-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-770465-m03 --entrypoint /usr/bin/test -v ha-770465-m03:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:45:00.190475   66415 oci.go:107] Successfully prepared a docker volume ha-770465-m03
	I0916 10:45:00.190519   66415 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:45:00.190543   66415 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:45:00.190614   66415 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-770465-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:45:04.534280   66415 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-770465-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.343625941s)
	I0916 10:45:04.534312   66415 kic.go:203] duration metric: took 4.343765248s to extract preloaded images to volume ...
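
The docker runs above implement the preload sidecar: a throwaway container bind-mounts the lz4 tarball read-only, mounts the node's named volume, and untars the preloaded images into it with tar -I lz4. A sketch of issuing that command from Go with os/exec (paths and image are taken from the log, digest elided):

// A minimal sketch of the preload-extraction sidecar invocation.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4"
	volume := "ha-770465-m03"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644"

	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("extract failed: %v\n%s", err, out))
	}
	fmt.Println("preloaded images extracted into volume", volume)
}
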
	W0916 10:45:04.534449   66415 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:45:04.534558   66415 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:45:04.580679   66415 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-770465-m03 --name ha-770465-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-770465-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-770465-m03 --network ha-770465 --ip 192.168.49.4 --volume ha-770465-m03:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:45:04.869227   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m03 --format={{.State.Running}}
	I0916 10:45:04.887147   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m03 --format={{.State.Status}}
	I0916 10:45:04.906000   66415 cli_runner.go:164] Run: docker exec ha-770465-m03 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:45:04.948553   66415 oci.go:144] the created container "ha-770465-m03" has a running status.
	I0916 10:45:04.948587   66415 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa...
	I0916 10:45:05.207508   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:45:05.207553   66415 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:45:05.231999   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m03 --format={{.State.Status}}
	I0916 10:45:05.261630   66415 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:45:05.261651   66415 kic_runner.go:114] Args: [docker exec --privileged ha-770465-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:45:05.334531   66415 cli_runner.go:164] Run: docker container inspect ha-770465-m03 --format={{.State.Status}}
	I0916 10:45:05.357202   66415 machine.go:93] provisionDockerMachine start ...
	I0916 10:45:05.357327   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:05.380706   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:45:05.380963   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0916 10:45:05.380981   66415 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:45:05.575184   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m03
	
	I0916 10:45:05.575210   66415 ubuntu.go:169] provisioning hostname "ha-770465-m03"
	I0916 10:45:05.575277   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:05.593397   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:45:05.593595   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0916 10:45:05.593610   66415 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-770465-m03 && echo "ha-770465-m03" | sudo tee /etc/hostname
	I0916 10:45:05.742858   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m03
	
	I0916 10:45:05.742938   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:05.760321   66415 main.go:141] libmachine: Using SSH client type: native
	I0916 10:45:05.760542   66415 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32798 <nil> <nil>}
	I0916 10:45:05.760562   66415 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-770465-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-770465-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-770465-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:45:05.895802   66415 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:45:05.895834   66415 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:45:05.895889   66415 ubuntu.go:177] setting up certificates
	I0916 10:45:05.895906   66415 provision.go:84] configureAuth start
	I0916 10:45:05.895985   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m03
	I0916 10:45:05.911809   66415 provision.go:143] copyHostCerts
	I0916 10:45:05.911848   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:45:05.911876   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:45:05.911884   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:45:05.911946   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:45:05.912022   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:45:05.912039   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:45:05.912045   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:45:05.912076   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:45:05.912150   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:45:05.912173   66415 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:45:05.912183   66415 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:45:05.912216   66415 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:45:05.912291   66415 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.ha-770465-m03 san=[127.0.0.1 192.168.49.4 ha-770465-m03 localhost minikube]
	I0916 10:45:06.068789   66415 provision.go:177] copyRemoteCerts
	I0916 10:45:06.068869   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:45:06.068904   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:06.085761   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:45:06.184583   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:45:06.184648   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:45:06.207594   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:45:06.207661   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:45:06.231109   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:45:06.231182   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:45:06.253831   66415 provision.go:87] duration metric: took 357.907291ms to configureAuth
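
configureAuth issues a per-machine server certificate signed by the minikube CA, with the SANs shown in the san=[…] line above (127.0.0.1, the node IP, the hostname, localhost, minikube) and the 26280h expiry from the cluster config. A compressed crypto/x509 sketch of that step; the throwaway CA below stands in for the real ca.pem/ca-key.pem, and none of this is minikube's actual provision code:

// A minimal sketch: issue a CA-signed server cert with the logged SANs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-770465-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-770465-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	return os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
}

func main() {
	// Throwaway CA for the sketch; minikube loads its existing ca.pem/ca-key.pem.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	ca, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}
	if err := issueServerCert(ca, caKey); err != nil {
		panic(err)
	}
}
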
	I0916 10:45:06.253858   66415 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:45:06.254076   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:45:06.254088   66415 machine.go:96] duration metric: took 896.863995ms to provisionDockerMachine
	I0916 10:45:06.254094   66415 client.go:171] duration metric: took 6.687407939s to LocalClient.Create
	I0916 10:45:06.254111   66415 start.go:167] duration metric: took 6.68746971s to libmachine.API.Create "ha-770465"
	I0916 10:45:06.254121   66415 start.go:293] postStartSetup for "ha-770465-m03" (driver="docker")
	I0916 10:45:06.254129   66415 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:45:06.254170   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:45:06.254205   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:06.271529   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:45:06.369004   66415 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:45:06.372170   66415 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:45:06.372213   66415 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:45:06.372224   66415 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:45:06.372232   66415 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:45:06.372245   66415 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:45:06.372305   66415 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:45:06.372405   66415 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:45:06.372419   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:45:06.372527   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:45:06.381102   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:45:06.405113   66415 start.go:296] duration metric: took 150.97696ms for postStartSetup
	I0916 10:45:06.405529   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m03
	I0916 10:45:06.424234   66415 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:45:06.424580   66415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:45:06.424633   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:06.442721   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:45:06.536953   66415 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:45:06.541227   66415 start.go:128] duration metric: took 6.976233835s to createHost
	I0916 10:45:06.541247   66415 start.go:83] releasing machines lock for "ha-770465-m03", held for 6.976380181s
	I0916 10:45:06.541308   66415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m03
	I0916 10:45:06.560689   66415 out.go:177] * Found network options:
	I0916 10:45:06.562367   66415 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 10:45:06.563605   66415 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:45:06.563625   66415 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:45:06.563649   66415 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:45:06.563660   66415 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:45:06.563765   66415 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:45:06.563815   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:06.563856   66415 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:45:06.563917   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:45:06.582285   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:45:06.582354   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:45:06.672545   66415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:45:06.755905   66415 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:45:06.755987   66415 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:45:06.783569   66415 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
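The two steps above first patch the loopback CNI config in place, then park any conflicting bridge/podman configs by renaming them with a .mk_disabled suffix, so the CNI loader stops picking them up while the files stay recoverable. A minimal Go sketch of that rename-to-disable pattern, assuming the paths and suffix from the log (the helper name is illustrative, not minikube's):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // disableConfs renames every file in dir matching one of the glob patterns
    // by appending suffix, so CNI stops loading it but it stays recoverable.
    func disableConfs(dir, suffix string, patterns ...string) ([]string, error) {
    	var disabled []string
    	for _, pat := range patterns {
    		matches, err := filepath.Glob(filepath.Join(dir, pat))
    		if err != nil {
    			return nil, err
    		}
    		for _, m := range matches {
    			if filepath.Ext(m) == suffix { // already disabled
    				continue
    			}
    			if err := os.Rename(m, m+suffix); err != nil {
    				return nil, err
    			}
    			disabled = append(disabled, m)
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	got, err := disableConfs("/etc/cni/net.d", ".mk_disabled", "*bridge*", "*podman*")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("disabled %v\n", got)
    }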
	I0916 10:45:06.783590   66415 start.go:495] detecting cgroup driver to use...
	I0916 10:45:06.783619   66415 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:45:06.783661   66415 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:45:06.795082   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:45:06.805528   66415 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:45:06.805583   66415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:45:06.818406   66415 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:45:06.831869   66415 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:45:06.911232   66415 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:45:06.991504   66415 docker.go:233] disabling docker service ...
	I0916 10:45:06.991558   66415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:45:07.009613   66415 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:45:07.019917   66415 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:45:07.096709   66415 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:45:07.183239   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:45:07.193849   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:45:07.208791   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:45:07.218040   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:45:07.227010   66415 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:45:07.227070   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:45:07.235760   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:45:07.244619   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:45:07.253413   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:45:07.262188   66415 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:45:07.270742   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:45:07.280436   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:45:07.289512   66415 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:45:07.299610   66415 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:45:07.307608   66415 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:45:07.315452   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:45:07.392303   66415 ssh_runner.go:195] Run: sudo systemctl restart containerd
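The run of sed commands above rewrites /etc/containerd/config.toml in place (sandbox image, cgroup driver, runtime type, CNI conf dir) rather than going through a TOML parser, then reloads systemd and restarts containerd. A hedged Go equivalent of the single SystemdCgroup substitution, with the regexp mirroring the sed expression and error handling trimmed to panics for brevity:

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	// Same substitution as the sed above: keep the indentation, force the value.
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, out, 0o644); err != nil {
    		panic(err)
    	}
    }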
	I0916 10:45:07.492075   66415 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:45:07.492156   66415 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:45:07.495997   66415 start.go:563] Will wait 60s for crictl version
	I0916 10:45:07.496058   66415 ssh_runner.go:195] Run: which crictl
	I0916 10:45:07.499621   66415 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:45:07.530979   66415 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:45:07.531037   66415 ssh_runner.go:195] Run: containerd --version
	I0916 10:45:07.553670   66415 ssh_runner.go:195] Run: containerd --version
	I0916 10:45:07.577751   66415 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:45:07.578947   66415 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:45:07.580384   66415 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 10:45:07.581546   66415 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:45:07.599279   66415 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:45:07.602751   66415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
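The grep/echo/cp pipeline above pins host.minikube.internal in /etc/hosts idempotently: drop any stale line for that name, append the fresh mapping, and replace the file via a temp copy. A Go sketch of the same idea, assuming root privileges (pinHost is an invented name):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pinHost rewrites hostsPath so exactly one line maps name to ip,
    // mirroring the grep -v / echo / cp pipeline in the log.
    func pinHost(hostsPath, ip, name string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) { // drop any stale mapping
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	tmp := hostsPath + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, hostsPath)
    }

    func main() {
    	if err := pinHost("/etc/hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }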
	I0916 10:45:07.613228   66415 mustload.go:65] Loading cluster: ha-770465
	I0916 10:45:07.613453   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:45:07.613660   66415 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:45:07.631284   66415 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:45:07.631559   66415 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465 for IP: 192.168.49.4
	I0916 10:45:07.631571   66415 certs.go:194] generating shared ca certs ...
	I0916 10:45:07.631585   66415 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:45:07.631691   66415 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:45:07.631726   66415 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:45:07.631732   66415 certs.go:256] generating profile certs ...
	I0916 10:45:07.631852   66415 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key
	I0916 10:45:07.631878   66415 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.0a02bdd9
	I0916 10:45:07.631890   66415 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.0a02bdd9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 10:45:07.870795   66415 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.0a02bdd9 ...
	I0916 10:45:07.870830   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.0a02bdd9: {Name:mka449d4a69b81e5b7f938f495ca4fdede03c234 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:45:07.871041   66415 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.0a02bdd9 ...
	I0916 10:45:07.871058   66415 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.0a02bdd9: {Name:mkc376f567171135c13f12509ad123c34cd9ac74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:45:07.871130   66415 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.0a02bdd9 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt
	I0916 10:45:07.871273   66415 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.0a02bdd9 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key
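The apiserver cert generated above must carry every address clients may dial as IP SANs: the in-cluster service IP, loopback, each control-plane node IP, and the kube-vip HA VIP, exactly the list crypto.go logs. A compact crypto/x509 sketch of issuing such a cert; it is self-signed for brevity, whereas minikube signs with its cluster CA:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN list from the log: service IP, loopback, node IPs, HA VIP.
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"),
    			net.ParseIP("192.168.49.4"), net.ParseIP("192.168.49.254"),
    		},
    	}
    	// Self-signed here for brevity; minikube signs with its cluster CA instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }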
	I0916 10:45:07.871404   66415 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key
	I0916 10:45:07.871418   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:45:07.871431   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:45:07.871447   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:45:07.871460   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:45:07.871473   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:45:07.871487   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:45:07.871499   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:45:07.871514   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:45:07.871567   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:45:07.871593   66415 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:45:07.871602   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:45:07.871626   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:45:07.871649   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:45:07.871669   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:45:07.871704   66415 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:45:07.871729   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:07.871759   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:45:07.871769   66415 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:45:07.871812   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:45:07.888778   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:45:07.980075   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:45:07.984043   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:45:07.997143   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:45:08.000621   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 10:45:08.012919   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:45:08.016032   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:45:08.027457   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:45:08.030609   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 10:45:08.041790   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:45:08.044902   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:45:08.056504   66415 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:45:08.059865   66415 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:45:08.071499   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:45:08.094547   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:45:08.116716   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:45:08.138495   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:45:08.160346   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0916 10:45:08.182469   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:45:08.204661   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:45:08.226629   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:45:08.250352   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:45:08.272717   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:45:08.296202   66415 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:45:08.320615   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:45:08.336913   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 10:45:08.352727   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:45:08.369340   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 10:45:08.386394   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:45:08.404496   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:45:08.422422   66415 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:45:08.440137   66415 ssh_runner.go:195] Run: openssl version
	I0916 10:45:08.445569   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:45:08.454324   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:45:08.457572   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:45:08.457621   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:45:08.463846   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 10:45:08.473094   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:45:08.482669   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:45:08.486051   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:45:08.486121   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:45:08.492744   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:45:08.501979   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:45:08.510762   66415 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:08.513979   66415 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:08.514041   66415 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:45:08.521011   66415 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:45:08.530448   66415 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:45:08.533627   66415 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:45:08.533677   66415 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.31.1 containerd true true} ...
	I0916 10:45:08.533755   66415 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-770465-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:45:08.533783   66415 kube-vip.go:115] generating kube-vip config ...
	I0916 10:45:08.533820   66415 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:45:08.545954   66415 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
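kube-vip.go gives up on IPVS control-plane load balancing here because `lsmod | grep ip_vs` exits non-zero. A dependency-free way to make the same check from Go is to scan /proc/modules, which is the file lsmod itself reads; a sketch (ipvsAvailable is an invented name, and IPVS compiled into the kernel rather than as a module would not appear here, so such probes are best-effort):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ipvsAvailable reports whether the ip_vs kernel module is loaded,
    // by scanning /proc/modules (the same source lsmod prints).
    func ipvsAvailable() (bool, error) {
    	data, err := os.ReadFile("/proc/modules")
    	if err != nil {
    		return false, err
    	}
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasPrefix(line, "ip_vs ") {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ok, err := ipvsAvailable()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("ip_vs loaded:", ok)
    }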
	I0916 10:45:08.546042   66415 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
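The kube-vip static-pod manifest above is rendered from profile parameters (image, interface, VIP address, API port). A minimal text/template sketch of that generation step, with the env list trimmed for brevity; this is an illustration of the pattern, not kube-vip.go's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    const manifest = `apiVersion: v1
    kind: Pod
    metadata:
      name: kube-vip
      namespace: kube-system
    spec:
      containers:
      - name: kube-vip
        image: {{.Image}}
        args: ["manager"]
        env:
        - name: port
          value: "{{.Port}}"
        - name: vip_interface
          value: {{.Interface}}
        - name: address
          value: {{.VIP}}
      hostNetwork: true
    `

    type params struct {
    	Image, Interface, VIP string
    	Port                  int
    }

    func main() {
    	t := template.Must(template.New("kube-vip").Parse(manifest))
    	err := t.Execute(os.Stdout, params{
    		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.0",
    		Interface: "eth0",
    		VIP:       "192.168.49.254",
    		Port:      8443,
    	})
    	if err != nil {
    		panic(err)
    	}
    }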
	I0916 10:45:08.546098   66415 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:45:08.554713   66415 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:45:08.554780   66415 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:45:08.563181   66415 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:45:08.579611   66415 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:45:08.596526   66415 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:45:08.612985   66415 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:45:08.616443   66415 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:45:08.626212   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:45:08.705912   66415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:45:08.718211   66415 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:45:08.718482   66415 start.go:317] joinCluster: &{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0916 10:45:08.718627   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:45:08.718682   66415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:45:08.737887   66415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:45:08.880844   66415 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:45:08.880899   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token npdktg.b5hiz94b3qw4i8jd --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-770465-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0916 10:45:13.725094   66415 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token npdktg.b5hiz94b3qw4i8jd --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=ha-770465-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (4.844166666s)
	I0916 10:45:13.725176   66415 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:45:14.542159   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-770465-m03 minikube.k8s.io/updated_at=2024_09_16T10_45_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=ha-770465 minikube.k8s.io/primary=false
	I0916 10:45:14.615336   66415 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-770465-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0916 10:45:14.711922   66415 start.go:319] duration metric: took 5.993439292s to joinCluster
	I0916 10:45:14.712001   66415 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:45:14.712310   66415 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:45:14.713916   66415 out.go:177] * Verifying Kubernetes components...
	I0916 10:45:14.715231   66415 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:45:15.139449   66415 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:45:15.225571   66415 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:45:15.225922   66415 kapi.go:59] client config for ha-770465: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:45:15.226013   66415 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
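The warning above shows the standard recovery when the HA VIP (192.168.49.254) may not be serving yet: build the rest.Config from the kubeconfig, then override Host with a concrete, known-good apiserver endpoint. A hedged client-go sketch of the same move (the kubeconfig path is illustrative):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path illustrative
    	if err != nil {
    		panic(err)
    	}
    	// Same idea as the "Overriding stale ClientConfig host" line: talk to a
    	// known-good apiserver instead of the HA VIP while the VIP settles.
    	cfg.Host = "https://192.168.49.2:8443"

    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-770465-m03", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("reached apiserver, got node", node.Name)
    }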
	I0916 10:45:15.226288   66415 node_ready.go:35] waiting up to 6m0s for node "ha-770465-m03" to be "Ready" ...
	I0916 10:45:15.226394   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:15.226406   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.226415   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.226425   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.229541   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:45:15.230483   66415 node_ready.go:49] node "ha-770465-m03" has status "Ready":"True"
	I0916 10:45:15.230502   66415 node_ready.go:38] duration metric: took 4.188874ms for node "ha-770465-m03" to be "Ready" ...
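The wait that just completed is the standard "poll the node until its NodeReady condition is True" pattern that the raw GETs below also follow for each system pod. A compact version using client-go's wait helpers; a sketch under the assumption of a kubeconfig on disk, not minikube's actual node_ready.go code:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls until the node's NodeReady condition is True.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // tolerate transient apiserver errors, keep polling
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path illustrative
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitNodeReady(context.Background(), cs, "ha-770465-m03", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("node Ready")
    }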
	I0916 10:45:15.230513   66415 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:45:15.230599   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:45:15.230615   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.230626   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.230632   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.237022   66415 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:45:15.246945   66415 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.247073   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:45:15.247085   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.247095   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.247104   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.250134   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:45:15.250989   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:15.251008   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.251019   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.251028   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.253409   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.253985   66415 pod_ready.go:93] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:15.254008   66415 pod_ready.go:82] duration metric: took 7.029652ms for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.254020   66415 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.254109   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:45:15.254118   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.254127   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.254134   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.256650   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.257327   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:15.257343   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.257350   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.257354   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.259587   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.260200   66415 pod_ready.go:93] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:15.260224   66415 pod_ready.go:82] duration metric: took 6.194306ms for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.260238   66415 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.260308   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465
	I0916 10:45:15.260317   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.260327   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.260334   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.262540   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.263070   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:15.263083   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.263090   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.263094   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.265480   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.265966   66415 pod_ready.go:93] pod "etcd-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:15.265986   66415 pod_ready.go:82] duration metric: took 5.740232ms for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.265996   66415 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.266050   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:45:15.266057   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.266064   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.266070   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.268454   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.268978   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:45:15.268990   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.268996   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.268999   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.271198   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.271640   66415 pod_ready.go:93] pod "etcd-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:15.271658   66415 pod_ready.go:82] duration metric: took 5.655922ms for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.271667   66415 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:15.426937   66415 request.go:632] Waited for 155.196467ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:15.427080   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:15.427108   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.427122   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.427129   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.430137   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:15.627010   66415 request.go:632] Waited for 196.158788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:15.627088   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:15.627098   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.627109   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.627117   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.630187   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:45:15.826933   66415 request.go:632] Waited for 54.206651ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:15.826999   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:15.827012   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:15.827022   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:15.827029   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:15.830062   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:45:16.027142   66415 request.go:632] Waited for 196.329602ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:16.027217   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:16.027225   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:16.027235   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:16.027243   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:16.030149   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
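The "Waited ... due to client-side throttling, not priority and fairness" lines interleaved above come from client-go's default client-side rate limiter (QPS 5, burst 10), which this tight poll loop exhausts. Callers that need to hammer the API either tolerate the delay, as here, or raise the limits on the rest.Config; a minimal sketch of the latter, with the kubeconfig path illustrative:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // path illustrative
    	if err != nil {
    		panic(err)
    	}
    	// client-go defaults to QPS 5 / Burst 10; raising them trades apiserver
    	// load for lower client-side latency in polling loops like the one above.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	if _, err := kubernetes.NewForConfig(cfg); err != nil {
    		panic(err)
    	}
    	fmt.Println("client configured with QPS", cfg.QPS, "burst", cfg.Burst)
    }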
	I0916 10:45:16.272870   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:16.272894   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:16.272906   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:16.272911   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:16.275317   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:16.427393   66415 request.go:632] Waited for 151.30038ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:16.427480   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:16.427489   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:16.427500   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:16.427512   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:16.430200   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:16.771917   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:16.771950   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:16.771963   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:16.771972   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:16.774755   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:16.826490   66415 request.go:632] Waited for 51.134782ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:16.826565   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:16.826576   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:16.826585   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:16.826591   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:16.829568   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:17.272851   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:17.272872   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:17.272880   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:17.272885   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:17.275345   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:17.275923   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:17.275941   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:17.275951   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:17.275958   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:17.278128   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:17.278630   66415 pod_ready.go:103] pod "etcd-ha-770465-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:45:17.771978   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:17.772001   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:17.772008   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:17.772013   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:17.774837   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:17.775571   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:17.775591   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:17.775603   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:17.775608   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:17.777858   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:18.272690   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:18.272712   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:18.272724   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:18.272729   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:18.275206   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:18.275856   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:18.275870   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:18.275877   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:18.275881   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:18.278015   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:18.771887   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:18.771909   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:18.771918   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:18.771922   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:18.774768   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:18.775310   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:18.775329   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:18.775339   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:18.775346   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:18.777552   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:19.272522   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:19.272551   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:19.272564   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:19.272570   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:19.275540   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:19.276295   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:19.276316   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:19.276324   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:19.276333   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:19.278572   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:19.279103   66415 pod_ready.go:103] pod "etcd-ha-770465-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:45:19.772499   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:19.772517   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:19.772523   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:19.772535   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:19.775612   66415 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:45:19.776430   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:19.776453   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:19.776463   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:19.776470   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:19.779064   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:20.271907   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:20.271930   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:20.271938   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:20.271943   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:20.274632   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:20.275259   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:20.275276   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:20.275283   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:20.275289   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:20.277589   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:20.772052   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:20.772074   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:20.772082   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:20.772087   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:20.774878   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:20.775464   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:20.775480   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:20.775487   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:20.775492   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:20.778228   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:21.271926   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:21.271950   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:21.271959   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:21.271965   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:21.274684   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:21.275255   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:21.275271   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:21.275279   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:21.275285   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:21.277593   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:21.772465   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:21.772485   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:21.772493   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:21.772497   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:21.775269   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:21.775942   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:21.775959   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:21.775973   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:21.775979   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:21.778399   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:21.778887   66415 pod_ready.go:103] pod "etcd-ha-770465-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:45:22.272405   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:22.272426   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:22.272433   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:22.272438   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:22.275089   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:22.275678   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:22.275694   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:22.275701   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:22.275705   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:22.277906   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:22.772807   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:22.772828   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:22.772836   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:22.772841   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:22.775792   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:22.777011   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:22.777038   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:22.777049   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:22.777056   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:22.780076   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:23.271941   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:23.271964   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:23.271975   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:23.271981   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:23.274664   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:23.275231   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:23.275248   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:23.275258   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:23.275268   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:23.277763   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:23.772654   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:23.772676   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:23.772684   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:23.772689   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:23.775526   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:23.776159   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:23.776181   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:23.776191   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:23.776195   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:23.778660   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:23.779120   66415 pod_ready.go:103] pod "etcd-ha-770465-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:45:24.272092   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:24.272114   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:24.272121   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:24.272126   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:24.274925   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:24.275482   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:24.275499   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:24.275507   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:24.275510   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:24.277858   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:24.772790   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:24.772817   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:24.772827   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:24.772831   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:24.775707   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:24.776499   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:24.776522   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:24.776533   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:24.776540   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:24.779240   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:25.272689   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:25.272714   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:25.272726   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:25.272733   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:25.275511   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:25.276148   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:25.276165   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:25.276172   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:25.276176   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:25.278596   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:25.772446   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:25.772466   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:25.772474   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:25.772486   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:25.775323   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:25.776034   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:25.776052   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:25.776060   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:25.776065   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:25.778529   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:26.272443   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:45:26.272463   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.272470   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.272475   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.275036   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:26.275595   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:26.275610   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.275617   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.275620   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.277833   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:26.278251   66415 pod_ready.go:93] pod "etcd-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:26.278269   66415 pod_ready.go:82] duration metric: took 11.006595583s for pod "etcd-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:26.278286   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:26.278342   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465
	I0916 10:45:26.278350   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.278356   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.278359   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.280281   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:26.280784   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:26.280797   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.280804   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.280808   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.282725   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:26.283250   66415 pod_ready.go:93] pod "kube-apiserver-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:26.283271   66415 pod_ready.go:82] duration metric: took 4.977851ms for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:26.283284   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:26.283357   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m02
	I0916 10:45:26.283366   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.283374   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.283377   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.285562   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:26.286101   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:45:26.286113   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.286120   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.286124   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.288170   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:26.288650   66415 pod_ready.go:93] pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:26.288665   66415 pod_ready.go:82] duration metric: took 5.373681ms for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:26.288673   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:26.288719   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:26.288726   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.288733   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.288738   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.290631   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:26.291287   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:26.291306   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.291313   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.291316   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.293057   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:26.788914   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:26.788935   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.788942   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.788947   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.791615   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:26.792370   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:26.792390   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:26.792401   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:26.792406   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:26.794640   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:27.289498   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:27.289516   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:27.289524   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:27.289528   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:27.292303   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:27.293030   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:27.293049   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:27.293059   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:27.293064   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:27.295163   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:27.789322   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:27.789358   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:27.789368   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:27.789374   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:27.791953   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:27.792673   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:27.792689   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:27.792697   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:27.792702   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:27.794817   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:28.289565   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:28.289588   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:28.289598   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:28.289603   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:28.292575   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:28.293195   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:28.293211   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:28.293219   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:28.293227   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:28.295508   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:28.295922   66415 pod_ready.go:103] pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:45:28.788946   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:28.788971   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:28.788982   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:28.788987   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:28.791615   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:28.792241   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:28.792257   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:28.792264   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:28.792269   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:28.794388   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:29.289233   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:29.289253   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:29.289276   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:29.289281   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:29.292248   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:29.292926   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:29.292943   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:29.292949   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:29.292954   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:29.294987   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:29.789890   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:29.789915   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:29.789927   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:29.789935   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:29.792751   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:29.793493   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:29.793509   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:29.793527   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:29.793529   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:29.795945   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:30.289433   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:30.289455   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:30.289464   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:30.289469   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:30.292290   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:30.292866   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:30.292880   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:30.292887   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:30.292891   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:30.294819   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:30.789677   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:30.789698   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:30.789706   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:30.789710   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:30.792471   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:30.793161   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:30.793177   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:30.793185   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:30.793188   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:30.795638   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:30.796173   66415 pod_ready.go:103] pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace has status "Ready":"False"
	I0916 10:45:31.289587   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:31.289612   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.289622   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.289626   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.292485   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.293147   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:31.293160   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.293166   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.293172   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.295303   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.789031   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:45:31.789055   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.789067   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.789072   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.791647   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.792326   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:31.792343   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.792350   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.792353   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.794506   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.794877   66415 pod_ready.go:93] pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:31.794896   66415 pod_ready.go:82] duration metric: took 5.506214149s for pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.794905   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.794961   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465
	I0916 10:45:31.794969   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.794979   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.794986   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.797071   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.797642   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:31.797656   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.797663   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.797666   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.799614   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:31.800062   66415 pod_ready.go:93] pod "kube-controller-manager-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:31.800079   66415 pod_ready.go:82] duration metric: took 5.168498ms for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.800089   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.800139   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m02
	I0916 10:45:31.800148   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.800158   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.800165   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.802091   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:31.802812   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:45:31.802825   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.802832   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.802836   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.807180   66415 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:45:31.807666   66415 pod_ready.go:93] pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:31.807682   66415 pod_ready.go:82] duration metric: took 7.588075ms for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.807692   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.807799   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m03
	I0916 10:45:31.807810   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.807820   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.807831   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.809946   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.810500   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:31.810517   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.810526   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.810531   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.812555   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.813045   66415 pod_ready.go:93] pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:31.813063   66415 pod_ready.go:82] duration metric: took 5.364715ms for pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.813073   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.813125   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:45:31.813132   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.813139   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.813146   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.815060   66415 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:45:31.872914   66415 request.go:632] Waited for 57.265145ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:45:31.872977   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:45:31.872984   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:31.872998   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:31.873006   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:31.875763   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:31.876209   66415 pod_ready.go:93] pod "kube-proxy-4qgcs" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:31.876228   66415 pod_ready.go:82] duration metric: took 63.14631ms for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:31.876238   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:32.072597   66415 request.go:632] Waited for 196.279835ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:45:32.072669   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:45:32.072676   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:32.072685   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:32.072693   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:32.075391   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:32.272945   66415 request.go:632] Waited for 196.824277ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:32.273006   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:32.273013   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:32.273024   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:32.273034   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:32.275911   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:32.276399   66415 pod_ready.go:93] pod "kube-proxy-gd2mt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:32.276417   66415 pod_ready.go:82] duration metric: took 400.172027ms for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:32.276428   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qlspc" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:32.473481   66415 request.go:632] Waited for 196.973475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlspc
	I0916 10:45:32.473557   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlspc
	I0916 10:45:32.473564   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:32.473575   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:32.473589   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:32.476400   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:32.673476   66415 request.go:632] Waited for 196.380614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:32.673535   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:32.673540   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:32.673547   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:32.673554   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:32.676386   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:32.676834   66415 pod_ready.go:93] pod "kube-proxy-qlspc" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:32.676854   66415 pod_ready.go:82] duration metric: took 400.419202ms for pod "kube-proxy-qlspc" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:32.676863   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:32.873017   66415 request.go:632] Waited for 196.092276ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:45:32.873081   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:45:32.873088   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:32.873096   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:32.873106   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:32.875939   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:33.072916   66415 request.go:632] Waited for 196.185471ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:33.072974   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:45:33.072979   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:33.072986   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:33.072993   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:33.075478   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:33.076046   66415 pod_ready.go:93] pod "kube-scheduler-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:33.076068   66415 pod_ready.go:82] duration metric: took 399.198084ms for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:33.076082   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:33.272926   66415 request.go:632] Waited for 196.751102ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m02
	I0916 10:45:33.272985   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m02
	I0916 10:45:33.272991   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:33.273000   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:33.273007   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:33.275772   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:33.472542   66415 request.go:632] Waited for 196.275401ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:45:33.472618   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:45:33.472624   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:33.472631   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:33.472635   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:33.475553   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:33.476139   66415 pod_ready.go:93] pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:33.476159   66415 pod_ready.go:82] duration metric: took 400.066183ms for pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:33.476170   66415 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:33.673295   66415 request.go:632] Waited for 197.05213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m03
	I0916 10:45:33.673387   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m03
	I0916 10:45:33.673394   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:33.673401   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:33.673407   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:33.676250   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:33.872868   66415 request.go:632] Waited for 196.005771ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:33.872919   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:45:33.872924   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:33.872931   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:33.872935   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:33.875690   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:33.876144   66415 pod_ready.go:93] pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:45:33.876162   66415 pod_ready.go:82] duration metric: took 399.984234ms for pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:45:33.876172   66415 pod_ready.go:39] duration metric: took 18.645648206s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:45:33.876184   66415 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:45:33.876239   66415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:45:33.886742   66415 api_server.go:72] duration metric: took 19.17471158s to wait for apiserver process to appear ...
	I0916 10:45:33.886763   66415 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:45:33.886784   66415 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:45:33.890485   66415 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:45:33.890550   66415 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0916 10:45:33.890558   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:33.890566   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:33.890573   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:33.891303   66415 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:45:33.891376   66415 api_server.go:141] control plane version: v1.31.1
	I0916 10:45:33.891394   66415 api_server.go:131] duration metric: took 4.624477ms to wait for apiserver health ...
	I0916 10:45:33.891407   66415 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:45:34.072867   66415 request.go:632] Waited for 181.36247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:45:34.072931   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:45:34.072938   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:34.072949   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:34.072959   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:34.078891   66415 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:45:34.085918   66415 system_pods.go:59] 24 kube-system pods found
	I0916 10:45:34.085947   66415 system_pods.go:61] "coredns-7c65d6cfc9-9lw9q" [4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8] Running
	I0916 10:45:34.085952   66415 system_pods.go:61] "coredns-7c65d6cfc9-sbs22" [89925692-76b4-481f-bac7-16f06bea792a] Running
	I0916 10:45:34.085957   66415 system_pods.go:61] "etcd-ha-770465" [041e8a27-b6a3-4201-990f-c367da8c5eb5] Running
	I0916 10:45:34.085963   66415 system_pods.go:61] "etcd-ha-770465-m02" [1f922c60-bf68-44df-aa8c-42dce50e4dac] Running
	I0916 10:45:34.085969   66415 system_pods.go:61] "etcd-ha-770465-m03" [876ca26f-ca18-47df-9bfb-302321d66136] Running
	I0916 10:45:34.085974   66415 system_pods.go:61] "kindnet-66kfj" [9cd2abf1-a30e-40f4-af5c-59d50e367b36] Running
	I0916 10:45:34.085978   66415 system_pods.go:61] "kindnet-grjh8" [98658171-1486-44a9-8a20-0d77ea019206] Running
	I0916 10:45:34.085983   66415 system_pods.go:61] "kindnet-kht59" [3124d723-6fc6-4dce-9ecf-bb40b4665208] Running
	I0916 10:45:34.085988   66415 system_pods.go:61] "kube-apiserver-ha-770465" [aa8ba784-661f-4074-b0d3-f98cddf8ce8c] Running
	I0916 10:45:34.085995   66415 system_pods.go:61] "kube-apiserver-ha-770465-m02" [7acdd845-890e-449a-b112-2efbc19a7ef5] Running
	I0916 10:45:34.086000   66415 system_pods.go:61] "kube-apiserver-ha-770465-m03" [8ab3c55a-d726-456c-803b-e13c1032b13a] Running
	I0916 10:45:34.086003   66415 system_pods.go:61] "kube-controller-manager-ha-770465" [f0a59ee1-bf5e-432d-a185-f9497c75ada4] Running
	I0916 10:45:34.086013   66415 system_pods.go:61] "kube-controller-manager-ha-770465-m02" [55d7421b-5679-416d-a6ae-c36738c1ec32] Running
	I0916 10:45:34.086016   66415 system_pods.go:61] "kube-controller-manager-ha-770465-m03" [37613674-2ba4-4425-824a-d00d16ea7b62] Running
	I0916 10:45:34.086020   66415 system_pods.go:61] "kube-proxy-4qgcs" [0dbed632-c44c-4a17-8486-b328e2cc41d3] Running
	I0916 10:45:34.086023   66415 system_pods.go:61] "kube-proxy-gd2mt" [fc3bf04d-635a-4264-883b-2fd72cac2e24] Running
	I0916 10:45:34.086026   66415 system_pods.go:61] "kube-proxy-qlspc" [703b3f60-1294-49fd-aab1-35474b771351] Running
	I0916 10:45:34.086029   66415 system_pods.go:61] "kube-scheduler-ha-770465" [85f95d4d-a2d4-45cf-8afe-be446333b9d0] Running
	I0916 10:45:34.086032   66415 system_pods.go:61] "kube-scheduler-ha-770465-m02" [ec894fe5-a992-49f2-a96f-caab3848f7dd] Running
	I0916 10:45:34.086035   66415 system_pods.go:61] "kube-scheduler-ha-770465-m03" [0c6e312a-4eca-4ca3-b0d9-d70f7bf5087c] Running
	I0916 10:45:34.086038   66415 system_pods.go:61] "kube-vip-ha-770465" [8fe17ef3-e8ea-42a9-bfd4-5f556d0a3f77] Running
	I0916 10:45:34.086041   66415 system_pods.go:61] "kube-vip-ha-770465-m02" [ffd7ee7b-efe1-4d2d-b2f8-5fd898de7dbf] Running
	I0916 10:45:34.086044   66415 system_pods.go:61] "kube-vip-ha-770465-m03" [408226b8-b640-4b83-bd7e-a61b37724197] Running
	I0916 10:45:34.086047   66415 system_pods.go:61] "storage-provisioner" [cf470925-4874-4744-8015-700e93ab924f] Running
	I0916 10:45:34.086052   66415 system_pods.go:74] duration metric: took 194.637339ms to wait for pod list to return data ...
	I0916 10:45:34.086061   66415 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:45:34.273409   66415 request.go:632] Waited for 187.276734ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:45:34.273465   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:45:34.273470   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:34.273479   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:34.273483   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:34.276479   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:34.276588   66415 default_sa.go:45] found service account: "default"
	I0916 10:45:34.276602   66415 default_sa.go:55] duration metric: took 190.535855ms for default service account to be created ...
	I0916 10:45:34.276611   66415 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:45:34.472907   66415 request.go:632] Waited for 196.233603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:45:34.472963   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:45:34.472968   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:34.472976   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:34.472983   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:34.478381   66415 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:45:34.484510   66415 system_pods.go:86] 24 kube-system pods found
	I0916 10:45:34.484539   66415 system_pods.go:89] "coredns-7c65d6cfc9-9lw9q" [4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8] Running
	I0916 10:45:34.484545   66415 system_pods.go:89] "coredns-7c65d6cfc9-sbs22" [89925692-76b4-481f-bac7-16f06bea792a] Running
	I0916 10:45:34.484549   66415 system_pods.go:89] "etcd-ha-770465" [041e8a27-b6a3-4201-990f-c367da8c5eb5] Running
	I0916 10:45:34.484553   66415 system_pods.go:89] "etcd-ha-770465-m02" [1f922c60-bf68-44df-aa8c-42dce50e4dac] Running
	I0916 10:45:34.484557   66415 system_pods.go:89] "etcd-ha-770465-m03" [876ca26f-ca18-47df-9bfb-302321d66136] Running
	I0916 10:45:34.484560   66415 system_pods.go:89] "kindnet-66kfj" [9cd2abf1-a30e-40f4-af5c-59d50e367b36] Running
	I0916 10:45:34.484564   66415 system_pods.go:89] "kindnet-grjh8" [98658171-1486-44a9-8a20-0d77ea019206] Running
	I0916 10:45:34.484567   66415 system_pods.go:89] "kindnet-kht59" [3124d723-6fc6-4dce-9ecf-bb40b4665208] Running
	I0916 10:45:34.484571   66415 system_pods.go:89] "kube-apiserver-ha-770465" [aa8ba784-661f-4074-b0d3-f98cddf8ce8c] Running
	I0916 10:45:34.484576   66415 system_pods.go:89] "kube-apiserver-ha-770465-m02" [7acdd845-890e-449a-b112-2efbc19a7ef5] Running
	I0916 10:45:34.484583   66415 system_pods.go:89] "kube-apiserver-ha-770465-m03" [8ab3c55a-d726-456c-803b-e13c1032b13a] Running
	I0916 10:45:34.484587   66415 system_pods.go:89] "kube-controller-manager-ha-770465" [f0a59ee1-bf5e-432d-a185-f9497c75ada4] Running
	I0916 10:45:34.484594   66415 system_pods.go:89] "kube-controller-manager-ha-770465-m02" [55d7421b-5679-416d-a6ae-c36738c1ec32] Running
	I0916 10:45:34.484597   66415 system_pods.go:89] "kube-controller-manager-ha-770465-m03" [37613674-2ba4-4425-824a-d00d16ea7b62] Running
	I0916 10:45:34.484604   66415 system_pods.go:89] "kube-proxy-4qgcs" [0dbed632-c44c-4a17-8486-b328e2cc41d3] Running
	I0916 10:45:34.484608   66415 system_pods.go:89] "kube-proxy-gd2mt" [fc3bf04d-635a-4264-883b-2fd72cac2e24] Running
	I0916 10:45:34.484613   66415 system_pods.go:89] "kube-proxy-qlspc" [703b3f60-1294-49fd-aab1-35474b771351] Running
	I0916 10:45:34.484617   66415 system_pods.go:89] "kube-scheduler-ha-770465" [85f95d4d-a2d4-45cf-8afe-be446333b9d0] Running
	I0916 10:45:34.484623   66415 system_pods.go:89] "kube-scheduler-ha-770465-m02" [ec894fe5-a992-49f2-a96f-caab3848f7dd] Running
	I0916 10:45:34.484627   66415 system_pods.go:89] "kube-scheduler-ha-770465-m03" [0c6e312a-4eca-4ca3-b0d9-d70f7bf5087c] Running
	I0916 10:45:34.484630   66415 system_pods.go:89] "kube-vip-ha-770465" [8fe17ef3-e8ea-42a9-bfd4-5f556d0a3f77] Running
	I0916 10:45:34.484633   66415 system_pods.go:89] "kube-vip-ha-770465-m02" [ffd7ee7b-efe1-4d2d-b2f8-5fd898de7dbf] Running
	I0916 10:45:34.484638   66415 system_pods.go:89] "kube-vip-ha-770465-m03" [408226b8-b640-4b83-bd7e-a61b37724197] Running
	I0916 10:45:34.484641   66415 system_pods.go:89] "storage-provisioner" [cf470925-4874-4744-8015-700e93ab924f] Running
	I0916 10:45:34.484647   66415 system_pods.go:126] duration metric: took 208.029152ms to wait for k8s-apps to be running ...
	I0916 10:45:34.484655   66415 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:45:34.484697   66415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:45:34.495613   66415 system_svc.go:56] duration metric: took 10.94482ms WaitForService to wait for kubelet
	I0916 10:45:34.495647   66415 kubeadm.go:582] duration metric: took 19.78361955s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:45:34.495666   66415 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:45:34.672922   66415 request.go:632] Waited for 177.188345ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:45:34.672994   66415 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:45:34.673000   66415 round_trippers.go:469] Request Headers:
	I0916 10:45:34.673007   66415 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:45:34.673014   66415 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:45:34.675880   66415 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:45:34.676738   66415 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:45:34.676757   66415 node_conditions.go:123] node cpu capacity is 8
	I0916 10:45:34.676769   66415 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:45:34.676775   66415 node_conditions.go:123] node cpu capacity is 8
	I0916 10:45:34.676780   66415 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:45:34.676785   66415 node_conditions.go:123] node cpu capacity is 8
	I0916 10:45:34.676793   66415 node_conditions.go:105] duration metric: took 181.121718ms to run NodePressure ...
	I0916 10:45:34.676807   66415 start.go:241] waiting for startup goroutines ...
	I0916 10:45:34.676830   66415 start.go:255] writing updated cluster config ...
	I0916 10:45:34.677124   66415 ssh_runner.go:195] Run: rm -f paused
	I0916 10:45:34.683263   66415 out.go:177] * Done! kubectl is now configured to use "ha-770465" cluster and "default" namespace by default
	E0916 10:45:34.684495   66415 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
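	(The wait loop traced above issues a GET for each pod, reads its Ready condition, and retries at roughly 500ms intervals until the condition is True. As a rough illustration only — this sketch is not part of the report or of minikube's source — the same readiness poll could be written with client-go as below, assuming a standard kubeconfig and reusing the pod/namespace names from this log purely as placeholders.)

	// Hypothetical sketch: poll a pod's Ready condition the way the log above does.
	// Assumes k8s.io/client-go is available and ~/.kube/config points at the cluster.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		for {
			// GET the pod, as in the round_trippers traces above.
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-770465-m03", metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // the log polls at ~500ms intervals
		}
	}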
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	e01ca3a0115c5       8c811b4aec35f       About a minute ago   Running             busybox                   0                   55f666e26fe6c       busybox-7dff88458-845rc
	505568793f357       c69fa2e9cbf5f       2 minutes ago        Running             coredns                   0                   1fd35ed82463b       coredns-7c65d6cfc9-sbs22
	120ff8a81efa1       c69fa2e9cbf5f       2 minutes ago        Running             coredns                   0                   be59c99f1c75f       coredns-7c65d6cfc9-9lw9q
	ec0de017ccfa5       6e38f40d628db       2 minutes ago        Running             storage-provisioner       0                   f2ec4aec1e0b2       storage-provisioner
	b31c2d77265e3       12968670680f4       2 minutes ago        Running             kindnet-cni               0                   3fc06a79ff69e       kindnet-grjh8
	15571e99ab074       60c005f310ff3       2 minutes ago        Running             kube-proxy                0                   21353a9cca68d       kube-proxy-gd2mt
	75391807e9839       38af8ddebf499       2 minutes ago        Running             kube-vip                  0                   bbeb0c20f3069       kube-vip-ha-770465
	8b022d1d91205       2e96e5913fc06       3 minutes ago        Running             etcd                      0                   1e24ae4d4e2d8       etcd-ha-770465
	fc07020cd4841       9aa1fad941575       3 minutes ago        Running             kube-scheduler            0                   d47515013434a       kube-scheduler-ha-770465
	780f65ad6abab       175ffd71cce3d       3 minutes ago        Running             kube-controller-manager   0                   51746ddbcbea1       kube-controller-manager-ha-770465
	535bd4e938e3a       6bab7719df100       3 minutes ago        Running             kube-apiserver            0                   53fe88679ccf5       kube-apiserver-ha-770465
	
	
	==> containerd <==
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.662101077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.662117526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.662203637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.708041585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sbs22,Uid:89925692-76b4-481f-bac7-16f06bea792a,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fd35ed82463bdeaed95b6c537cfd734fd3f5a191985667470b39c1feb3c143b\""
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.710652827Z" level=info msg="CreateContainer within sandbox \"1fd35ed82463bdeaed95b6c537cfd734fd3f5a191985667470b39c1feb3c143b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.722332652Z" level=info msg="CreateContainer within sandbox \"1fd35ed82463bdeaed95b6c537cfd734fd3f5a191985667470b39c1feb3c143b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"505568793f3574e96c0799007e1921a7d91c4ad3c8aeba5624a5d0c4a02e46d5\""
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.722879183Z" level=info msg="StartContainer for \"505568793f3574e96c0799007e1921a7d91c4ad3c8aeba5624a5d0c4a02e46d5\""
	Sep 16 10:44:49 ha-770465 containerd[863]: time="2024-09-16T10:44:49.766136140Z" level=info msg="StartContainer for \"505568793f3574e96c0799007e1921a7d91c4ad3c8aeba5624a5d0c4a02e46d5\" returns successfully"
	Sep 16 10:45:36 ha-770465 containerd[863]: time="2024-09-16T10:45:36.214606138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7dff88458-845rc,Uid:d5a45010-f551-4f0c-bb3e-d70e2eed9df0,Namespace:default,Attempt:0,}"
	Sep 16 10:45:36 ha-770465 containerd[863]: time="2024-09-16T10:45:36.250670965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:45:36 ha-770465 containerd[863]: time="2024-09-16T10:45:36.250739768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:45:36 ha-770465 containerd[863]: time="2024-09-16T10:45:36.250751691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:45:36 ha-770465 containerd[863]: time="2024-09-16T10:45:36.250856205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:45:36 ha-770465 containerd[863]: time="2024-09-16T10:45:36.296227347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7dff88458-845rc,Uid:d5a45010-f551-4f0c-bb3e-d70e2eed9df0,Namespace:default,Attempt:0,} returns sandbox id \"55f666e26fe6ce338a9bd6c1802eafd533c1692af41e714ab63be449b882ad5b\""
	Sep 16 10:45:36 ha-770465 containerd[863]: time="2024-09-16T10:45:36.299315327Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.220119236Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.221190631Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.222543840Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.224559402Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.224888789Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 1.92552792s"
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.224923141Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.227034844Z" level=info msg="CreateContainer within sandbox \"55f666e26fe6ce338a9bd6c1802eafd533c1692af41e714ab63be449b882ad5b\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.239551777Z" level=info msg="CreateContainer within sandbox \"55f666e26fe6ce338a9bd6c1802eafd533c1692af41e714ab63be449b882ad5b\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"e01ca3a0115c593fb62c91c1fe233bb2dcacc8fba6d38a7be8e09dc401933a28\""
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.240116822Z" level=info msg="StartContainer for \"e01ca3a0115c593fb62c91c1fe233bb2dcacc8fba6d38a7be8e09dc401933a28\""
	Sep 16 10:45:38 ha-770465 containerd[863]: time="2024-09-16T10:45:38.296790736Z" level=info msg="StartContainer for \"e01ca3a0115c593fb62c91c1fe233bb2dcacc8fba6d38a7be8e09dc401933a28\" returns successfully"
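	
	The lines above show the CRI plugin pulling gcr.io/k8s-minikube/busybox:1.28 and starting the busybox container over /run/containerd/containerd.sock. Below is a minimal, illustrative Go sketch of the same pull through the containerd client — not part of the test suite; it assumes that socket path and the "k8s.io" namespace the CRI plugin stores Kubernetes-managed images under.
	
	package main
	
	import (
		"context"
		"fmt"
	
		"github.com/containerd/containerd"
		"github.com/containerd/containerd/namespaces"
	)
	
	func main() {
		// Socket path taken from the containerd annotations above.
		client, err := containerd.New("/run/containerd/containerd.sock")
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		// Kubernetes-managed images live in the "k8s.io" namespace.
		ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	
		// Pull and unpack, mirroring the PullImage/ImageCreate events logged above.
		img, err := client.Pull(ctx, "gcr.io/k8s-minikube/busybox:1.28", containerd.WithPullUnpack)
		if err != nil {
			panic(err)
		}
		fmt.Println("pulled", img.Name())
	}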
	
	
	==> coredns [120ff8a81efa1183e1409d1cdb8fa5e1e7c675ebb3d0f165783c5512f48e07ce] <==
	[INFO] 127.0.0.1:47401 - 6102 "HINFO IN 7552043894687877427.7409354771220060933. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009655762s
	[INFO] 10.244.2.2:41874 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000284968s
	[INFO] 10.244.2.2:43872 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000938288s
	[INFO] 10.244.1.2:52261 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161563s
	[INFO] 10.244.1.2:56357 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001567449s
	[INFO] 10.244.1.2:42838 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000111184s
	[INFO] 10.244.1.2:53654 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001745214s
	[INFO] 10.244.0.4:53747 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.011812399s
	[INFO] 10.244.2.2:58497 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001353637s
	[INFO] 10.244.2.2:44119 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158419s
	[INFO] 10.244.1.2:54873 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164329s
	[INFO] 10.244.1.2:44900 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001619482s
	[INFO] 10.244.1.2:52029 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070813s
	[INFO] 10.244.0.4:56319 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144654s
	[INFO] 10.244.0.4:58425 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002097s
	[INFO] 10.244.2.2:50531 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233084s
	[INFO] 10.244.1.2:57721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200098s
	[INFO] 10.244.1.2:47494 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147603s
	[INFO] 10.244.1.2:55948 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104458s
	[INFO] 10.244.1.2:41737 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105046s
	[INFO] 10.244.0.4:56889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184697s
	[INFO] 10.244.0.4:58113 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000142403s
	[INFO] 10.244.2.2:46838 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183592s
	[INFO] 10.244.2.2:57080 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106185s
	[INFO] 10.244.1.2:47643 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156174s
	
	
	==> coredns [505568793f3574e96c0799007e1921a7d91c4ad3c8aeba5624a5d0c4a02e46d5] <==
	[INFO] 10.244.0.4:52021 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136338s
	[INFO] 10.244.0.4:55747 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112985s
	[INFO] 10.244.2.2:51737 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184765s
	[INFO] 10.244.2.2:53734 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001929846s
	[INFO] 10.244.2.2:48077 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125445s
	[INFO] 10.244.2.2:56941 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093993s
	[INFO] 10.244.2.2:53593 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010639s
	[INFO] 10.244.2.2:53291 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000714s
	[INFO] 10.244.1.2:54655 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185177s
	[INFO] 10.244.1.2:48932 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002062451s
	[INFO] 10.244.1.2:41866 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103063s
	[INFO] 10.244.1.2:51846 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082591s
	[INFO] 10.244.1.2:55756 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087775s
	[INFO] 10.244.0.4:55553 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098067s
	[INFO] 10.244.0.4:54433 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008689s
	[INFO] 10.244.2.2:46677 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019083s
	[INFO] 10.244.2.2:33741 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073821s
	[INFO] 10.244.2.2:54300 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115863s
	[INFO] 10.244.0.4:41373 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000182332s
	[INFO] 10.244.0.4:46249 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174562s
	[INFO] 10.244.2.2:53722 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107299s
	[INFO] 10.244.2.2:37649 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141192s
	[INFO] 10.244.1.2:47658 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179545s
	[INFO] 10.244.1.2:40089 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124796s
	[INFO] 10.244.1.2:58475 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130146s
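	
	The queries above are routine in-cluster lookups against the kube-dns service IP 10.96.0.10 (the 10.0.96.10.in-addr.arpa PTR entries are its reverse name; the allocation itself appears in the kube-apiserver log later in this section). A minimal Go sketch issuing the same lookup directly against that IP follows; it is illustrative only and resolves only from inside the cluster network.
	
	package main
	
	import (
		"context"
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				// Force queries to the cluster DNS service instead of the host resolver.
				return d.DialContext(ctx, "udp", "10.96.0.10:53")
			},
		}
		// The same name queried repeatedly in the CoreDNS log above.
		addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
		if err != nil {
			panic(err)
		}
		fmt.Println(addrs)
	}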
	
	
	==> describe nodes <==
	Name:               ha-770465
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-770465
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-770465
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_44_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:44:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-770465
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:47:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:45:52 +0000   Mon, 16 Sep 2024 10:44:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:45:52 +0000   Mon, 16 Sep 2024 10:44:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:45:52 +0000   Mon, 16 Sep 2024 10:44:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:45:52 +0000   Mon, 16 Sep 2024 10:44:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-770465
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 0ba1e4cf0f2047a2ba0924f2c23df268
	  System UUID:                f3656390-934b-423a-8190-9f78053eddee
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-845rc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-7c65d6cfc9-9lw9q             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m51s
	  kube-system                 coredns-7c65d6cfc9-sbs22             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m51s
	  kube-system                 etcd-ha-770465                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m58s
	  kube-system                 kindnet-grjh8                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m51s
	  kube-system                 kube-apiserver-ha-770465             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                 kube-controller-manager-ha-770465    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 kube-proxy-gd2mt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-scheduler-ha-770465             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m56s
	  kube-system                 kube-vip-ha-770465                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m50s  kube-proxy       
	  Normal   Starting                 2m56s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m56s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  2m56s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m56s  kubelet          Node ha-770465 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m56s  kubelet          Node ha-770465 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m56s  kubelet          Node ha-770465 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m52s  node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           2m30s  node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           115s   node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           8s     node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	
	
	Name:               ha-770465-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-770465-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-770465
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_44_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:44:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-770465-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:47:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:47:13 +0000   Mon, 16 Sep 2024 10:44:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:47:13 +0000   Mon, 16 Sep 2024 10:44:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:47:13 +0000   Mon, 16 Sep 2024 10:44:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:47:13 +0000   Mon, 16 Sep 2024 10:44:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-770465-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 211cc65cbed447e5a5ec82cc38a4ae4b
	  System UUID:                0ec75a9b-7a96-466a-872e-476404dc1e5d
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-klfw4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 etcd-ha-770465-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m36s
	  kube-system                 kindnet-kht59                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m38s
	  kube-system                 kube-apiserver-ha-770465-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-controller-manager-ha-770465-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-proxy-4qgcs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-scheduler-ha-770465-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-vip-ha-770465-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m34s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node ha-770465-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m38s (x7 over 2m38s)  kubelet          Node ha-770465-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m38s (x7 over 2m38s)  kubelet          Node ha-770465-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m37s                  node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   RegisteredNode           2m30s                  node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   RegisteredNode           115s                   node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   Starting                 15s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 15s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  15s (x8 over 15s)      kubelet          Node ha-770465-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x7 over 15s)      kubelet          Node ha-770465-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x7 over 15s)      kubelet          Node ha-770465-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  15s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           8s                     node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	
	
	Name:               ha-770465-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-770465-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-770465
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_45_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:45:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-770465-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:47:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:46:13 +0000   Mon, 16 Sep 2024 10:45:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:46:13 +0000   Mon, 16 Sep 2024 10:45:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:46:13 +0000   Mon, 16 Sep 2024 10:45:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:46:13 +0000   Mon, 16 Sep 2024 10:45:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-770465-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 d21ac7bc08cb49e1a337ea803b228e0a
	  System UUID:                e87efbe7-d110-423f-ad1f-3d6b898d752e
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-dlndh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 etcd-ha-770465-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-66kfj                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m4s
	  kube-system                 kube-apiserver-ha-770465-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-ha-770465-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-qlspc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-scheduler-ha-770465-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-vip-ha-770465-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node ha-770465-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x7 over 2m4s)  kubelet          Node ha-770465-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)  kubelet          Node ha-770465-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m2s                 node-controller  Node ha-770465-m03 event: Registered Node ha-770465-m03 in Controller
	  Normal  RegisteredNode           2m                   node-controller  Node ha-770465-m03 event: Registered Node ha-770465-m03 in Controller
	  Normal  RegisteredNode           115s                 node-controller  Node ha-770465-m03 event: Registered Node ha-770465-m03 in Controller
	  Normal  RegisteredNode           8s                   node-controller  Node ha-770465-m03 event: Registered Node ha-770465-m03 in Controller
	
	
	Name:               ha-770465-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-770465-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-770465
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_46_20_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:46:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-770465-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:47:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:46:20 +0000   Mon, 16 Sep 2024 10:46:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:46:20 +0000   Mon, 16 Sep 2024 10:46:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:46:20 +0000   Mon, 16 Sep 2024 10:46:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:46:20 +0000   Mon, 16 Sep 2024 10:46:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-770465-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 77bf457e10f94471aaa4387428b4961a
	  System UUID:                82d9765a-9474-4a2c-ae78-19bbbf1ab150
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-bflwn       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-proxy-78l2l    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  56s (x2 over 56s)  kubelet          Node ha-770465-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x2 over 56s)  kubelet          Node ha-770465-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x2 over 56s)  kubelet          Node ha-770465-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  56s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           55s                node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal  RegisteredNode           55s                node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal  NodeReady                55s                kubelet          Node ha-770465-m04 status is now: NodeReady
	  Normal  RegisteredNode           52s                node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal  RegisteredNode           8s                 node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
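	
	The four node descriptions above carry the state the suite asserts on (Ready conditions, labels, pod CIDRs). A minimal client-go sketch that summarizes the same state follows; it is illustrative only and assumes a kubeconfig at the default ~/.kube/config location.
	
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumption: kubeconfig at the default location (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			ready := false
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			// Mirrors the fields shown by "describe nodes" above.
			fmt.Printf("%s ready=%v podCIDRs=%v labels=%d\n", n.Name, ready, n.Spec.PodCIDRs, len(n.Labels))
		}
	}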
	
	
	==> dmesg <==
	[Sep16 10:17]  #2
	[  +0.001391]  #3
	[  +0.000000]  #4
	[  +0.003060] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.003238] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.002037] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.002135]  #5
	[  +0.000696]  #6
	[  +0.003195]  #7
	[  +0.058540] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +0.423486] i8042: Warning: Keylock active
	[  +0.007424] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.002994] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.000654] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000622] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000638] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000657] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000725] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000607] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000608] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.588189] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.251152] kauditd_printk_skb: 46 callbacks suppressed
	
	
	==> etcd [8b022d1d912058b6aec308a7f6777b3f8fcb7b0b8c051be8ff2b7c53dc37450c] <==
	{"level":"info","ts":"2024-09-16T10:45:12.954316Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:45:13.020302Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:45:13.424152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(1750190452317010141 12593026477526642892 17455162631699035958)"}
	{"level":"info","ts":"2024-09-16T10:45:13.424314Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-09-16T10:45:13.424388Z","caller":"etcdserver/server.go:1996","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"1849ecf187a2b8dd"}
	{"level":"warn","ts":"2024-09-16T10:46:46.456383Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736","error":"unexpected EOF"}
	{"level":"warn","ts":"2024-09-16T10:46:46.456473Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"f23d31ee9f17f736","error":"failed to read f23d31ee9f17f736 on stream Message (unexpected EOF)"}
	{"level":"warn","ts":"2024-09-16T10:46:46.456393Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736","error":"unexpected EOF"}
	{"level":"warn","ts":"2024-09-16T10:46:46.488106Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"f23d31ee9f17f736","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:46:46.488171Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f23d31ee9f17f736","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:46:46.645950Z","caller":"rafthttp/stream.go:223","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736"}
	{"level":"warn","ts":"2024-09-16T10:46:50.489304Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"f23d31ee9f17f736","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:46:50.489362Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f23d31ee9f17f736","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:46:50.964777Z","caller":"rafthttp/stream.go:194","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736"}
	{"level":"warn","ts":"2024-09-16T10:46:54.490752Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"f23d31ee9f17f736","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:46:54.490809Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f23d31ee9f17f736","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:46:59.500032Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"f23d31ee9f17f736","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:46:59.500089Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f23d31ee9f17f736","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-16T10:47:01.741007Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"f23d31ee9f17f736"}
	{"level":"info","ts":"2024-09-16T10:47:01.741475Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736"}
	{"level":"info","ts":"2024-09-16T10:47:01.743867Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736"}
	{"level":"info","ts":"2024-09-16T10:47:01.750769Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"f23d31ee9f17f736","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-16T10:47:01.750833Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736"}
	{"level":"info","ts":"2024-09-16T10:47:01.753898Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"f23d31ee9f17f736","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-16T10:47:01.753939Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736"}
	
	
	==> kernel <==
	 10:47:15 up 29 min,  0 users,  load average: 1.15, 0.95, 0.67
	Linux ha-770465 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b31c2d77265e3a87517539fba911addc87dcfa7cd4932f3fa5cfa6b294afd8aa] <==
	I0916 10:46:35.757632       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:46:45.759849       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:46:45.759887       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:46:45.760059       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:46:45.760075       1 main.go:322] Node ha-770465-m03 has CIDR [10.244.2.0/24] 
	I0916 10:46:45.760144       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:46:45.760158       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:46:45.760220       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:46:45.760232       1 main.go:299] handling current node
	I0916 10:46:55.754295       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:46:55.754333       1 main.go:299] handling current node
	I0916 10:46:55.754348       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:46:55.754353       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:46:55.754504       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:46:55.754514       1 main.go:322] Node ha-770465-m03 has CIDR [10.244.2.0/24] 
	I0916 10:46:55.754569       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:46:55.754591       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:47:05.756725       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:47:05.756759       1 main.go:299] handling current node
	I0916 10:47:05.756775       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:47:05.756779       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:05.756919       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:47:05.756932       1 main.go:322] Node ha-770465-m03 has CIDR [10.244.2.0/24] 
	I0916 10:47:05.756992       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:47:05.757000       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [535bd4e938e3aeb6ecfbd02d81bf8fc060b9bb649a67b3f28d6b43d2c199e4ba] <==
	I0916 10:44:17.977097       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:44:17.981779       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:44:18.429026       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:44:19.732485       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:44:19.743980       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:44:19.753201       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:44:24.080680       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 10:44:24.180774       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0916 10:46:04.259087       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54782: use of closed network connection
	E0916 10:46:04.412401       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54796: use of closed network connection
	E0916 10:46:04.568563       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54808: use of closed network connection
	E0916 10:46:04.740761       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54822: use of closed network connection
	E0916 10:46:04.905896       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54836: use of closed network connection
	E0916 10:46:05.060982       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54858: use of closed network connection
	E0916 10:46:05.228361       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54878: use of closed network connection
	E0916 10:46:05.380406       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54894: use of closed network connection
	E0916 10:46:05.547512       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54904: use of closed network connection
	E0916 10:46:05.822889       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54930: use of closed network connection
	E0916 10:46:05.978196       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54942: use of closed network connection
	E0916 10:46:06.125590       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54966: use of closed network connection
	E0916 10:46:06.271367       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54990: use of closed network connection
	E0916 10:46:06.417557       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:55004: use of closed network connection
	E0916 10:46:06.561545       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:55012: use of closed network connection
	W0916 10:46:57.980256       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	
	
	==> kube-controller-manager [780f65ad6abab29bdde89c430c29bcd890f45aa17487c1bfd744c963df712f3d] <==
	I0916 10:45:38.289841       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="26.154709ms"
	I0916 10:45:38.289939       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="51.595µs"
	I0916 10:45:38.814556       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="7.74076ms"
	I0916 10:45:38.814650       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.1µs"
	I0916 10:45:41.523632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.768µs"
	I0916 10:45:42.638341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m03"
	I0916 10:45:52.041669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465"
	I0916 10:46:03.827357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.858169ms"
	I0916 10:46:03.827464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.444µs"
	I0916 10:46:08.674888       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m02"
	I0916 10:46:13.096265       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m03"
	E0916 10:46:19.274244       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8wfr5 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8wfr5\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 10:46:19.399508       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-770465-m04\" does not exist"
	I0916 10:46:19.440331       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-770465-m04" podCIDRs=["10.244.3.0/24"]
	I0916 10:46:19.440377       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:19.440419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:19.874388       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:20.121826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:20.188037       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:20.657563       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-770465-m04"
	I0916 10:46:20.657871       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:20.671968       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:23.179728       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-770465-m04"
	I0916 10:46:23.180106       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:47:13.362215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m02"
	
	
	==> kube-proxy [15571e99ab074e3b158931e74a462086cc1bc9b84b6b39d511e64dbebca8dac3] <==
	I0916 10:44:25.058145       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:44:25.228881       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:44:25.228958       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:44:25.251975       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:44:25.252031       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:44:25.255017       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:44:25.255521       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:44:25.255550       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:44:25.256997       1 config.go:199] "Starting service config controller"
	I0916 10:44:25.257209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:44:25.257043       1 config.go:328] "Starting node config controller"
	I0916 10:44:25.257490       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:44:25.257086       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:44:25.257634       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:44:25.357729       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:44:25.357756       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:44:25.360110       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [fc07020cd48414dd7978cd32b7fffa3b3bd5d7f72b79b3aa49e4082dffedf8e3] <==
	W0916 10:44:17.534480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:44:17.534534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:44:17.605947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:44:17.605995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:44:17.659989       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:44:17.660035       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:44:17.672435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:44:17.672475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:44:20.730788       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:45:11.758548       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sb96x\": pod kube-proxy-sb96x is already assigned to node \"ha-770465-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sb96x" node="ha-770465-m03"
	E0916 10:45:11.758691       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sb96x\": pod kube-proxy-sb96x is already assigned to node \"ha-770465-m03\"" pod="kube-system/kube-proxy-sb96x"
	E0916 10:45:35.573275       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-klfw4\": pod busybox-7dff88458-klfw4 is already assigned to node \"ha-770465-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-klfw4" node="ha-770465-m02"
	E0916 10:45:35.573342       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1f91390f-bdef-4a3b-a8bc-e717d87dee4b(default/busybox-7dff88458-klfw4) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-klfw4"
	E0916 10:45:35.573361       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-klfw4\": pod busybox-7dff88458-klfw4 is already assigned to node \"ha-770465-m02\"" pod="default/busybox-7dff88458-klfw4"
	I0916 10:45:35.573394       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-klfw4" node="ha-770465-m02"
	E0916 10:46:21.389563       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tw9dw\": pod kindnet-tw9dw is already assigned to node \"ha-770465-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tw9dw" node="ha-770465-m04"
	E0916 10:46:21.389661       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 211d67ad-c4dc-498b-9ce1-aa4f469a1a54(kube-system/kindnet-tw9dw) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tw9dw"
	E0916 10:46:21.389685       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tw9dw\": pod kindnet-tw9dw is already assigned to node \"ha-770465-m04\"" pod="kube-system/kindnet-tw9dw"
	I0916 10:46:21.389710       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tw9dw" node="ha-770465-m04"
	E0916 10:46:21.390586       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bflwn\": pod kindnet-bflwn is already assigned to node \"ha-770465-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bflwn" node="ha-770465-m04"
	E0916 10:46:21.390625       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 59d75712-5683-4b1c-a6ef-2a669d75da7a(kube-system/kindnet-bflwn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-bflwn"
	E0916 10:46:21.390641       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bflwn\": pod kindnet-bflwn is already assigned to node \"ha-770465-m04\"" pod="kube-system/kindnet-bflwn"
	I0916 10:46:21.390663       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-bflwn" node="ha-770465-m04"
	E0916 10:46:21.422131       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vkdfk\": pod kindnet-vkdfk is already assigned to node \"ha-770465-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vkdfk" node="ha-770465-m04"
	E0916 10:46:21.422653       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vkdfk\": pod kindnet-vkdfk is already assigned to node \"ha-770465-m04\"" pod="kube-system/kindnet-vkdfk"
	
	
	==> kubelet <==
	Sep 16 10:44:24 ha-770465 kubelet[1704]: E0916 10:44:24.825617    1704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df3b02d9cb86350ba85f30c87a530adb921da52a208fc5e705a2f6d395882486\": failed to find network info for sandbox \"df3b02d9cb86350ba85f30c87a530adb921da52a208fc5e705a2f6d395882486\"" pod="kube-system/coredns-7c65d6cfc9-9lw9q"
	Sep 16 10:44:24 ha-770465 kubelet[1704]: E0916 10:44:24.825650    1704 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df3b02d9cb86350ba85f30c87a530adb921da52a208fc5e705a2f6d395882486\": failed to find network info for sandbox \"df3b02d9cb86350ba85f30c87a530adb921da52a208fc5e705a2f6d395882486\"" pod="kube-system/coredns-7c65d6cfc9-9lw9q"
	Sep 16 10:44:24 ha-770465 kubelet[1704]: E0916 10:44:24.825717    1704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9lw9q_kube-system(4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9lw9q_kube-system(4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df3b02d9cb86350ba85f30c87a530adb921da52a208fc5e705a2f6d395882486\\\": failed to find network info for sandbox \\\"df3b02d9cb86350ba85f30c87a530adb921da52a208fc5e705a2f6d395882486\\\"\"" pod="kube-system/coredns-7c65d6cfc9-9lw9q" podUID="4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8"
	Sep 16 10:44:25 ha-770465 kubelet[1704]: I0916 10:44:25.333152    1704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cf470925-4874-4744-8015-700e93ab924f-tmp\") pod \"storage-provisioner\" (UID: \"cf470925-4874-4744-8015-700e93ab924f\") " pod="kube-system/storage-provisioner"
	Sep 16 10:44:25 ha-770465 kubelet[1704]: I0916 10:44:25.333218    1704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-454b8\" (UniqueName: \"kubernetes.io/projected/cf470925-4874-4744-8015-700e93ab924f-kube-api-access-454b8\") pod \"storage-provisioner\" (UID: \"cf470925-4874-4744-8015-700e93ab924f\") " pod="kube-system/storage-provisioner"
	Sep 16 10:44:25 ha-770465 kubelet[1704]: I0916 10:44:25.663438    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gd2mt" podStartSLOduration=1.663397327 podStartE2EDuration="1.663397327s" podCreationTimestamp="2024-09-16 10:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:44:25.663397383 +0000 UTC m=+6.164732829" watchObservedRunningTime="2024-09-16 10:44:25.663397327 +0000 UTC m=+6.164732774"
	Sep 16 10:44:25 ha-770465 kubelet[1704]: I0916 10:44:25.690523    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-grjh8" podStartSLOduration=1.690501142 podStartE2EDuration="1.690501142s" podCreationTimestamp="2024-09-16 10:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:44:25.690388847 +0000 UTC m=+6.191724292" watchObservedRunningTime="2024-09-16 10:44:25.690501142 +0000 UTC m=+6.191836589"
	Sep 16 10:44:26 ha-770465 kubelet[1704]: I0916 10:44:26.665088    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.665066696 podStartE2EDuration="1.665066696s" podCreationTimestamp="2024-09-16 10:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:44:26.664635761 +0000 UTC m=+7.165971208" watchObservedRunningTime="2024-09-16 10:44:26.665066696 +0000 UTC m=+7.166402143"
	Sep 16 10:44:29 ha-770465 kubelet[1704]: I0916 10:44:29.936500    1704 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:44:29 ha-770465 kubelet[1704]: I0916 10:44:29.937326    1704 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:44:35 ha-770465 kubelet[1704]: E0916 10:44:35.674383    1704 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\": failed to find network info for sandbox \"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\""
	Sep 16 10:44:35 ha-770465 kubelet[1704]: E0916 10:44:35.674448    1704 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\": failed to find network info for sandbox \"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\"" pod="kube-system/coredns-7c65d6cfc9-sbs22"
	Sep 16 10:44:35 ha-770465 kubelet[1704]: E0916 10:44:35.674473    1704 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\": failed to find network info for sandbox \"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\"" pod="kube-system/coredns-7c65d6cfc9-sbs22"
	Sep 16 10:44:35 ha-770465 kubelet[1704]: E0916 10:44:35.674519    1704 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-sbs22_kube-system(89925692-76b4-481f-bac7-16f06bea792a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-sbs22_kube-system(89925692-76b4-481f-bac7-16f06bea792a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\\\": failed to find network info for sandbox \\\"fcc9bac8074b52d743f59693715a647784ed10dbbb42086888fe28ee8e71f6b5\\\"\"" pod="kube-system/coredns-7c65d6cfc9-sbs22" podUID="89925692-76b4-481f-bac7-16f06bea792a"
	Sep 16 10:44:40 ha-770465 kubelet[1704]: I0916 10:44:40.724936    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9lw9q" podStartSLOduration=16.724911472 podStartE2EDuration="16.724911472s" podCreationTimestamp="2024-09-16 10:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:44:40.724155516 +0000 UTC m=+21.225490962" watchObservedRunningTime="2024-09-16 10:44:40.724911472 +0000 UTC m=+21.226246917"
	Sep 16 10:44:50 ha-770465 kubelet[1704]: I0916 10:44:50.714495    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sbs22" podStartSLOduration=26.714472953 podStartE2EDuration="26.714472953s" podCreationTimestamp="2024-09-16 10:44:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:44:50.713735376 +0000 UTC m=+31.215070822" watchObservedRunningTime="2024-09-16 10:44:50.714472953 +0000 UTC m=+31.215808398"
	Sep 16 10:45:35 ha-770465 kubelet[1704]: E0916 10:45:35.668078    1704 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-jwdp5], unattached volumes=[], failed to process volumes=[kube-api-access-jwdp5]: context canceled" pod="default/busybox-7dff88458-lrb95" podUID="b2be2502-120d-4678-8b3d-8a6be089d9f1"
	Sep 16 10:45:35 ha-770465 kubelet[1704]: I0916 10:45:35.820115    1704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6kx6\" (UniqueName: \"kubernetes.io/projected/d5a45010-f551-4f0c-bb3e-d70e2eed9df0-kube-api-access-s6kx6\") pod \"busybox-7dff88458-845rc\" (UID: \"d5a45010-f551-4f0c-bb3e-d70e2eed9df0\") " pod="default/busybox-7dff88458-845rc"
	Sep 16 10:45:35 ha-770465 kubelet[1704]: I0916 10:45:35.820393    1704 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwdp5\" (UniqueName: \"kubernetes.io/projected/b2be2502-120d-4678-8b3d-8a6be089d9f1-kube-api-access-jwdp5\") pod \"busybox-7dff88458-lrb95\" (UID: \"b2be2502-120d-4678-8b3d-8a6be089d9f1\") " pod="default/busybox-7dff88458-lrb95"
	Sep 16 10:45:36 ha-770465 kubelet[1704]: I0916 10:45:36.022087    1704 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwdp5\" (UniqueName: \"kubernetes.io/projected/b2be2502-120d-4678-8b3d-8a6be089d9f1-kube-api-access-jwdp5\") pod \"b2be2502-120d-4678-8b3d-8a6be089d9f1\" (UID: \"b2be2502-120d-4678-8b3d-8a6be089d9f1\") "
	Sep 16 10:45:36 ha-770465 kubelet[1704]: I0916 10:45:36.023981    1704 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2be2502-120d-4678-8b3d-8a6be089d9f1-kube-api-access-jwdp5" (OuterVolumeSpecName: "kube-api-access-jwdp5") pod "b2be2502-120d-4678-8b3d-8a6be089d9f1" (UID: "b2be2502-120d-4678-8b3d-8a6be089d9f1"). InnerVolumeSpecName "kube-api-access-jwdp5". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:45:36 ha-770465 kubelet[1704]: I0916 10:45:36.122817    1704 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jwdp5\" (UniqueName: \"kubernetes.io/projected/b2be2502-120d-4678-8b3d-8a6be089d9f1-kube-api-access-jwdp5\") on node \"ha-770465\" DevicePath \"\""
	Sep 16 10:45:37 ha-770465 kubelet[1704]: I0916 10:45:37.626360    1704 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2be2502-120d-4678-8b3d-8a6be089d9f1" path="/var/lib/kubelet/pods/b2be2502-120d-4678-8b3d-8a6be089d9f1/volumes"
	Sep 16 10:45:38 ha-770465 kubelet[1704]: I0916 10:45:38.806899    1704 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-845rc" podStartSLOduration=1.878632565 podStartE2EDuration="3.806873189s" podCreationTimestamp="2024-09-16 10:45:35 +0000 UTC" firstStartedPulling="2024-09-16 10:45:36.297526121 +0000 UTC m=+76.798861550" lastFinishedPulling="2024-09-16 10:45:38.225766737 +0000 UTC m=+78.727102174" observedRunningTime="2024-09-16 10:45:38.80675608 +0000 UTC m=+79.308091526" watchObservedRunningTime="2024-09-16 10:45:38.806873189 +0000 UTC m=+79.308208635"
	Sep 16 10:46:05 ha-770465 kubelet[1704]: E0916 10:46:05.822846    1704 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 192.168.49.2:51478->192.168.49.2:10010: write tcp 192.168.49.2:51478->192.168.49.2:10010: write: broken pipe
	

-- /stdout --
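One actionable item in the kube-proxy log above: nodePortAddresses is unset, so NodePort connections are accepted on every local IP. A minimal sketch of the remedy, assuming the kubeadm-style layout minikube uses, where kube-proxy reads its KubeProxyConfiguration from the kube-proxy ConfigMap in kube-system (the ConfigMap name and the config-file spelling of "primary" are assumptions here; the flag form --nodeport-addresses primary is quoted from the warning itself):

	# inspect the embedded KubeProxyConfiguration (ConfigMap name assumed)
	kubectl -n kube-system get configmap kube-proxy -o yaml
	# edit it to restrict NodePorts to the node's primary IPs:
	#   nodePortAddresses:
	#   - primary
	# then restart kube-proxy so the change is picked up
	kubectl -n kube-system rollout restart daemonset kube-proxy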
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-770465 -n ha-770465
helpers_test.go:261: (dbg) Run:  kubectl --context ha-770465 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context ha-770465 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (531.291µs)
helpers_test.go:263: kubectl --context ha-770465 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (17.38s)
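Note that the terminal failure above is on the test host, not in the cluster: "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel refused to execute the kubectl binary at all, which typically indicates an architecture mismatch or a truncated/corrupt binary. A quick diagnostic sketch, using only standard tools and the path taken from the log:

	# what kind of binary is actually installed?
	file /usr/local/bin/kubectl
	# compare against the host architecture (expected x86_64 on this amd64 agent)
	uname -m

If the two disagree, every kubectl-driven assertion fails the same way, which is consistent with the repeated "exec format error" lines throughout this report.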

x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (11.42s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-770465 node delete m03 -v=7 --alsologtostderr: (8.378615192s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:511: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (568.615µs)
ha_test.go:513: failed to run kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
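This is the same host-side kubectl failure as in the previous test; no cluster state is ever examined. If a wrong-architecture binary is confirmed, a hedged recovery sketch (the download URL follows the documented dl.k8s.io pattern, with the version matched to the cluster's v1.31.1 seen in the logs):

	curl -LO "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	sudo install -m 0755 kubectl /usr/local/bin/kubectl
	kubectl version --client   # should print a client version instead of failing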
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-770465
helpers_test.go:235: (dbg) docker inspect ha-770465:

-- stdout --
	[
	    {
	        "Id": "c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf",
	        "Created": "2024-09-16T10:44:02.535590959Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 92586,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:47:43.24863302Z",
	            "FinishedAt": "2024-09-16T10:47:42.564880298Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/hosts",
	        "LogPath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf-json.log",
	        "Name": "/ha-770465",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-770465:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-770465",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-770465",
	                "Source": "/var/lib/docker/volumes/ha-770465/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-770465",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-770465",
	                "name.minikube.sigs.k8s.io": "ha-770465",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b3ce02b29e52777585464b6a8d2a46093438fdf1c190ca8dca87b4e393a92d2",
	            "SandboxKey": "/var/run/docker/netns/8b3ce02b29e5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-770465": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c95c64bb41bdebd7017cdb4d495e3e500618752ab547ea09aa27d1cdaf23b64d",
	                    "EndpointID": "944cd996982f25c55d706f837301b97c4d0783ce0506dae578e064f456e28e74",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-770465",
	                        "c7d04b23d2ab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
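A convenience for reading inspect dumps like the one above: the host-side port bindings live under NetworkSettings.Ports, and docker's Go-template output can extract a single mapping instead of scanning the JSON. For example, the host port mapped to the API server's 8443/tcp (prints 32816 for the state captured above):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' ha-770465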
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-770465 -n ha-770465
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-770465 logs -n 25: (1.532808524s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n ha-770465-m02 sudo cat                                          | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /home/docker/cp-test_ha-770465-m03_ha-770465-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m03:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04:/home/docker/cp-test_ha-770465-m03_ha-770465-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n ha-770465-m04 sudo cat                                          | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /home/docker/cp-test_ha-770465-m03_ha-770465-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-770465 cp testdata/cp-test.txt                                                | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1340522930/001/cp-test_ha-770465-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465:/home/docker/cp-test_ha-770465-m04_ha-770465.txt                       |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n ha-770465 sudo cat                                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /home/docker/cp-test_ha-770465-m04_ha-770465.txt                                 |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m02:/home/docker/cp-test_ha-770465-m04_ha-770465-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n ha-770465-m02 sudo cat                                          | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /home/docker/cp-test_ha-770465-m04_ha-770465-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m03:/home/docker/cp-test_ha-770465-m04_ha-770465-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n ha-770465-m03 sudo cat                                          | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /home/docker/cp-test_ha-770465-m04_ha-770465-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-770465 node stop m02 -v=7                                                     | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-770465 node start m02 -v=7                                                    | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:47 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-770465 -v=7                                                           | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-770465 -v=7                                                                | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:47 UTC | 16 Sep 24 10:47 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-770465 --wait=true -v=7                                                    | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:47 UTC | 16 Sep 24 10:49 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-770465                                                                | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC |                     |
	| node    | ha-770465 node delete m03 -v=7                                                   | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:47:42
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:47:42.907651   92290 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:47:42.907821   92290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:47:42.907832   92290 out.go:358] Setting ErrFile to fd 2...
	I0916 10:47:42.907839   92290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:47:42.908028   92290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:47:42.908596   92290 out.go:352] Setting JSON to false
	I0916 10:47:42.909595   92290 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1807,"bootTime":1726481856,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:47:42.909705   92290 start.go:139] virtualization: kvm guest
	I0916 10:47:42.912077   92290 out.go:177] * [ha-770465] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:47:42.913364   92290 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:47:42.913378   92290 notify.go:220] Checking for updates...
	I0916 10:47:42.915704   92290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:47:42.916927   92290 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:47:42.918034   92290 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:47:42.919054   92290 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:47:42.920058   92290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:47:42.921616   92290 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:47:42.921732   92290 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:47:42.945684   92290 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:47:42.945825   92290 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:47:42.994916   92290 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:47:42.985254045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:47:42.995049   92290 docker.go:318] overlay module found
	I0916 10:47:42.996962   92290 out.go:177] * Using the docker driver based on existing profile
	I0916 10:47:42.998188   92290 start.go:297] selected driver: docker
	I0916 10:47:42.998203   92290 start.go:901] validating driver "docker" against &{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:47:42.998373   92290 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:47:42.998473   92290 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:47:43.046149   92290 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:47:43.037304809 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:47:43.046741   92290 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:47:43.046769   92290 cni.go:84] Creating CNI manager for ""
	I0916 10:47:43.046815   92290 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 10:47:43.046869   92290 start.go:340] cluster config:
	{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:47:43.048626   92290 out.go:177] * Starting "ha-770465" primary control-plane node in "ha-770465" cluster
	I0916 10:47:43.049863   92290 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:47:43.051121   92290 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:47:43.052349   92290 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:47:43.052378   92290 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:47:43.052415   92290 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:47:43.052431   92290 cache.go:56] Caching tarball of preloaded images
	I0916 10:47:43.052520   92290 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:47:43.052536   92290 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:47:43.052666   92290 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	W0916 10:47:43.071808   92290 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:47:43.071827   92290 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:47:43.071898   92290 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:47:43.071915   92290 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:47:43.071921   92290 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:47:43.071928   92290 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:47:43.071935   92290 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:47:43.073093   92290 image.go:273] response: 
	I0916 10:47:43.124414   92290 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:47:43.124451   92290 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:47:43.124486   92290 start.go:360] acquireMachinesLock for ha-770465: {Name:mk79463d2cf034afd16e2c9f41174a568f4314aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:47:43.124562   92290 start.go:364] duration metric: took 50.027µs to acquireMachinesLock for "ha-770465"
	I0916 10:47:43.124584   92290 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:47:43.124590   92290 fix.go:54] fixHost starting: 
	I0916 10:47:43.124819   92290 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:47:43.142982   92290 fix.go:112] recreateIfNeeded on ha-770465: state=Stopped err=<nil>
	W0916 10:47:43.143023   92290 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:47:43.145234   92290 out.go:177] * Restarting existing docker container for "ha-770465" ...
	I0916 10:47:43.146506   92290 cli_runner.go:164] Run: docker start ha-770465
	I0916 10:47:43.417385   92290 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:47:43.434773   92290 kic.go:430] container "ha-770465" state is running.
	I0916 10:47:43.435221   92290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465
	I0916 10:47:43.453305   92290 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:47:43.453577   92290 machine.go:93] provisionDockerMachine start ...
	I0916 10:47:43.453645   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:47:43.472429   92290 main.go:141] libmachine: Using SSH client type: native
	I0916 10:47:43.472650   92290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0916 10:47:43.472663   92290 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:47:43.473265   92290 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42060->127.0.0.1:32813: read: connection reset by peer
	I0916 10:47:46.607169   92290 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465
	
	I0916 10:47:46.607201   92290 ubuntu.go:169] provisioning hostname "ha-770465"
	I0916 10:47:46.607247   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:47:46.625189   92290 main.go:141] libmachine: Using SSH client type: native
	I0916 10:47:46.625387   92290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0916 10:47:46.625401   92290 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-770465 && echo "ha-770465" | sudo tee /etc/hostname
	I0916 10:47:46.766728   92290 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465
	
	I0916 10:47:46.766793   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:47:46.784631   92290 main.go:141] libmachine: Using SSH client type: native
	I0916 10:47:46.784809   92290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0916 10:47:46.784825   92290 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-770465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-770465/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-770465' | sudo tee -a /etc/hosts; 
				fi
			fi
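
The fragment above is the idempotent /etc/hosts patch minikube pushes over SSH: it rewrites an existing 127.0.1.1 entry if one is present, otherwise appends one. The same pattern as a standalone sketch (NODE is a placeholder for the target hostname):

    NODE=ha-770465   # placeholder: substitute the node's hostname
    if ! grep -q "\s${NODE}$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${NODE}/" /etc/hosts   # rewrite in place
      else
        echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts                 # append a new entry
      fi
    fi
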
	I0916 10:47:46.915700   92290 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:47:46.915725   92290 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:47:46.915764   92290 ubuntu.go:177] setting up certificates
	I0916 10:47:46.915776   92290 provision.go:84] configureAuth start
	I0916 10:47:46.915838   92290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465
	I0916 10:47:46.932957   92290 provision.go:143] copyHostCerts
	I0916 10:47:46.932998   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:47:46.933035   92290 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:47:46.933047   92290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:47:46.933121   92290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:47:46.933218   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:47:46.933244   92290 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:47:46.933253   92290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:47:46.933295   92290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:47:46.933357   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:47:46.933379   92290 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:47:46.933385   92290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:47:46.933417   92290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:47:46.933485   92290 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.ha-770465 san=[127.0.0.1 192.168.49.2 ha-770465 localhost minikube]
	I0916 10:47:47.125945   92290 provision.go:177] copyRemoteCerts
	I0916 10:47:47.126012   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:47:47.126093   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:47:47.143031   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:47:47.236126   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:47:47.236192   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:47:47.257911   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:47:47.257982   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0916 10:47:47.279829   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:47:47.279902   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:47:47.301662   92290 provision.go:87] duration metric: took 385.871985ms to configureAuth
	I0916 10:47:47.301690   92290 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:47:47.301898   92290 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:47:47.301909   92290 machine.go:96] duration metric: took 3.848317324s to provisionDockerMachine
	I0916 10:47:47.301917   92290 start.go:293] postStartSetup for "ha-770465" (driver="docker")
	I0916 10:47:47.301925   92290 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:47:47.301966   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:47:47.302000   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:47:47.319275   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:47:47.413117   92290 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:47:47.416722   92290 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:47:47.416765   92290 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:47:47.416777   92290 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:47:47.416785   92290 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:47:47.416798   92290 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:47:47.416941   92290 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:47:47.417094   92290 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:47:47.417112   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:47:47.417227   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:47:47.425575   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:47:47.447869   92290 start.go:296] duration metric: took 145.936568ms for postStartSetup
	I0916 10:47:47.447946   92290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:47:47.447991   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:47:47.466037   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:47:47.556541   92290 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:47:47.560741   92290 fix.go:56] duration metric: took 4.436142777s for fixHost
	I0916 10:47:47.560770   92290 start.go:83] releasing machines lock for "ha-770465", held for 4.43619478s
	I0916 10:47:47.560841   92290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465
	I0916 10:47:47.577885   92290 ssh_runner.go:195] Run: cat /version.json
	I0916 10:47:47.577927   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:47:47.577957   92290 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:47:47.578022   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:47:47.596008   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:47:47.597059   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:47:47.760435   92290 ssh_runner.go:195] Run: systemctl --version
	I0916 10:47:47.764683   92290 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:47:47.768738   92290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:47:47.785154   92290 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:47:47.785232   92290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:47:47.793288   92290 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
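
The two find/sed passes above first patch the loopback CNI config in place (injecting a "name" field if missing and pinning "cniVersion" to 1.0.0), then rename any bridge/podman configs to *.mk_disabled so the CNI minikube selects (kindnet here) owns the pod network. A sketch of what the patched loopback file is expected to contain, assuming the stock file shipped in the kic base image:

    cat /etc/cni/net.d/*loopback.conf*
    # illustrative expected output after patching:
    # {
    #   "cniVersion": "1.0.0",
    #   "name": "loopback",
    #   "type": "loopback"
    # }
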
	I0916 10:47:47.793313   92290 start.go:495] detecting cgroup driver to use...
	I0916 10:47:47.793348   92290 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:47:47.793395   92290 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:47:47.806000   92290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:47:47.817616   92290 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:47:47.817676   92290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:47:47.829387   92290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:47:47.839835   92290 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:47:47.912792   92290 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:47:47.989105   92290 docker.go:233] disabling docker service ...
	I0916 10:47:47.989175   92290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:47:48.000762   92290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:47:48.011199   92290 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:47:48.084486   92290 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:47:48.164217   92290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:47:48.174380   92290 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:47:48.189229   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:47:48.197881   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:47:48.206778   92290 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:47:48.206830   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:47:48.215561   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:47:48.224160   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:47:48.232731   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:47:48.241173   92290 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:47:48.249094   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:47:48.258062   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:47:48.266929   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:47:48.276116   92290 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:47:48.284317   92290 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:47:48.291982   92290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:47:48.368197   92290 ssh_runner.go:195] Run: sudo systemctl restart containerd
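
The sed pipeline above edits /etc/containerd/config.toml in place before this restart: it pins the sandbox image to pause:3.10, selects the cgroupfs driver (SystemdCgroup = false), normalizes the runc runtime to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d. A quick sketch for confirming the rewrite landed (key names taken from the sed expressions above):

    sudo grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # expected (illustrative):
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
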
	I0916 10:47:48.469023   92290 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:47:48.469085   92290 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:47:48.472579   92290 start.go:563] Will wait 60s for crictl version
	I0916 10:47:48.472634   92290 ssh_runner.go:195] Run: which crictl
	I0916 10:47:48.475710   92290 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:47:48.507120   92290 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:47:48.507188   92290 ssh_runner.go:195] Run: containerd --version
	I0916 10:47:48.528397   92290 ssh_runner.go:195] Run: containerd --version
	I0916 10:47:48.552009   92290 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:47:48.553189   92290 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:47:48.569329   92290 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:47:48.572833   92290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:47:48.583226   92290 kubeadm.go:883] updating cluster {Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:47:48.583367   92290 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:47:48.583411   92290 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:47:48.614654   92290 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:47:48.614693   92290 containerd.go:534] Images already preloaded, skipping extraction
	I0916 10:47:48.614755   92290 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:47:48.645783   92290 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:47:48.645808   92290 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:47:48.645820   92290 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0916 10:47:48.645921   92290 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-770465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:47:48.645970   92290 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:47:48.678309   92290 cni.go:84] Creating CNI manager for ""
	I0916 10:47:48.678333   92290 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0916 10:47:48.678349   92290 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:47:48.678369   92290 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-770465 NodeName:ha-770465 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:47:48.678497   92290 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-770465"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
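
This generated config is later copied to /var/tmp/minikube/kubeadm.yaml.new (see the scp step further down). A sketch for sanity-checking it by hand with the bundled kubeadm binary, assuming the `kubeadm config validate` subcommand is available in this v1.31 release:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new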
	
	I0916 10:47:48.678516   92290 kube-vip.go:115] generating kube-vip config ...
	I0916 10:47:48.678552   92290 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:47:48.690266   92290 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
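
The failed `lsmod | grep ip_vs` probe (exit status 1, empty output) is what drives the fallback here: without the ip_vs modules kube-vip cannot do IPVS-based control-plane load balancing, so minikube configures the VIP with ARP only. A hypothetical remediation sketch, noting that under the docker driver the container shares the host kernel, so the modules must be loaded on the host:

    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh   # load the IPVS scheduler modules
    lsmod | grep ip_vs                                    # should now list them
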
	I0916 10:47:48.690386   92290 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
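
Once this static pod is up, the VIP 192.168.49.254 from the manifest should be bound to eth0 on whichever control-plane node currently holds the plndr-cp-lock lease. A verification sketch, assuming kubectl is pointed at the cluster:

    ip addr show eth0 | grep 192.168.49.254               # VIP present on the leader
    kubectl -n kube-system get lease plndr-cp-lock \
      -o jsonpath='{.spec.holderIdentity}'                # which node holds the lock
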
	I0916 10:47:48.690436   92290 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:47:48.698607   92290 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:47:48.698682   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 10:47:48.706707   92290 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 10:47:48.722858   92290 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:47:48.738741   92290 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0916 10:47:48.754529   92290 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:47:48.770735   92290 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:47:48.773886   92290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:47:48.783802   92290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:47:48.864914   92290 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:47:48.877655   92290 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465 for IP: 192.168.49.2
	I0916 10:47:48.877672   92290 certs.go:194] generating shared ca certs ...
	I0916 10:47:48.877686   92290 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:47:48.877826   92290 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:47:48.877872   92290 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:47:48.877881   92290 certs.go:256] generating profile certs ...
	I0916 10:47:48.877949   92290 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key
	I0916 10:47:48.877972   92290 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.49c54ac7
	I0916 10:47:48.877990   92290 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.49c54ac7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0916 10:47:49.002617   92290 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.49c54ac7 ...
	I0916 10:47:49.002650   92290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.49c54ac7: {Name:mk426d337ce57c0b9434970ee7f05a78c6770187 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:47:49.002842   92290 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.49c54ac7 ...
	I0916 10:47:49.002862   92290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.49c54ac7: {Name:mk422aa9ddf59c32ee33825b3a719298afc1e7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:47:49.002956   92290 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.49c54ac7 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt
	I0916 10:47:49.003187   92290 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.49c54ac7 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key
	I0916 10:47:49.003366   92290 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key
	I0916 10:47:49.003385   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:47:49.003402   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:47:49.003422   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:47:49.003439   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:47:49.003459   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:47:49.003480   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:47:49.003498   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:47:49.003515   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:47:49.003608   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:47:49.003655   92290 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:47:49.003669   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:47:49.003699   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:47:49.003753   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:47:49.003830   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:47:49.003885   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:47:49.003924   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:47:49.003942   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:47:49.003958   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:47:49.004536   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:47:49.027229   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:47:49.049076   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:47:49.074049   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:47:49.096575   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 10:47:49.119276   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:47:49.143175   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:47:49.166158   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:47:49.188643   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:47:49.211102   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:47:49.233166   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:47:49.255034   92290 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:47:49.271405   92290 ssh_runner.go:195] Run: openssl version
	I0916 10:47:49.276425   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:47:49.285222   92290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:47:49.288481   92290 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:47:49.288530   92290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:47:49.294788   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:47:49.302772   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:47:49.311233   92290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:47:49.314398   92290 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:47:49.314456   92290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:47:49.320545   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:47:49.328450   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:47:49.336853   92290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:47:49.339861   92290 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:47:49.339921   92290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:47:49.346053   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
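
The `test -L ... || ln -fs ...` commands above create the hash-named symlinks OpenSSL uses for CA lookup; the eight-hex-digit names (3ec20f2e.0, b5213941.0, 51391683.0) are the subject hashes printed by `openssl x509 -hash`. The pattern for one certificate, as a sketch:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here
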
	I0916 10:47:49.354095   92290 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:47:49.357390   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:47:49.363952   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:47:49.370457   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:47:49.376559   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:47:49.382757   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:47:49.389041   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
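
Each `-checkend 86400` call exits non-zero if the certificate expires within the next 24 hours, which is how minikube decides whether certs need regenerating before the restart. The same check as a loop over the certs probed above (a sketch):

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/${c}.crt" \
        && echo "${c}: valid for >24h" || echo "${c}: expires within 24h"
    done
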
	I0916 10:47:49.395319   92290 kubeadm.go:392] StartCluster: {Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:47:49.395471   92290 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:47:49.395520   92290 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:47:49.428015   92290 cri.go:89] found id: "505568793f3574e96c0799007e1921a7d91c4ad3c8aeba5624a5d0c4a02e46d5"
	I0916 10:47:49.428033   92290 cri.go:89] found id: "120ff8a81efa1183e1409d1cdb8fa5e1e7c675ebb3d0f165783c5512f48e07ce"
	I0916 10:47:49.428037   92290 cri.go:89] found id: "ec0de017ccfa5917b48a621ba0257c01fb46d96654a8d2e3f173a41e811e0f0e"
	I0916 10:47:49.428040   92290 cri.go:89] found id: "b31c2d77265e3a87517539fba911addc87dcfa7cd4932f3fa5cfa6b294afd8aa"
	I0916 10:47:49.428043   92290 cri.go:89] found id: "15571e99ab074e3b158931e74a462086cc1bc9b84b6b39d511e64dbebca8dac3"
	I0916 10:47:49.428046   92290 cri.go:89] found id: "75391807e98390e5055c12f632996e1dc188ba32700573915b99ed477d23fb36"
	I0916 10:47:49.428049   92290 cri.go:89] found id: "8b022d1d912058b6aec308a7f6777b3f8fcb7b0b8c051be8ff2b7c53dc37450c"
	I0916 10:47:49.428051   92290 cri.go:89] found id: "fc07020cd48414dd7978cd32b7fffa3b3bd5d7f72b79b3aa49e4082dffedf8e3"
	I0916 10:47:49.428054   92290 cri.go:89] found id: "780f65ad6abab29bdde89c430c29bcd890f45aa17487c1bfd744c963df712f3d"
	I0916 10:47:49.428060   92290 cri.go:89] found id: "535bd4e938e3aeb6ecfbd02d81bf8fc060b9bb649a67b3f28d6b43d2c199e4ba"
	I0916 10:47:49.428065   92290 cri.go:89] found id: ""
	I0916 10:47:49.428105   92290 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0916 10:47:49.439764   92290 cri.go:116] JSON = null
	W0916 10:47:49.439808   92290 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 10
	I0916 10:47:49.439852   92290 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:47:49.447623   92290 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:47:49.447644   92290 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:47:49.447683   92290 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:47:49.455166   92290 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:47:49.455711   92290 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-770465" does not appear in /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:47:49.455881   92290 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3687/kubeconfig needs updating (will repair): [kubeconfig missing "ha-770465" cluster setting kubeconfig missing "ha-770465" context setting]
	I0916 10:47:49.456228   92290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:47:49.456708   92290 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:47:49.457025   92290 kapi.go:59] client config for ha-770465: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:47:49.457501   92290 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:47:49.457776   92290 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:47:49.465883   92290 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0916 10:47:49.465908   92290 kubeadm.go:597] duration metric: took 18.257804ms to restartPrimaryControlPlane
	I0916 10:47:49.465916   92290 kubeadm.go:394] duration metric: took 70.64013ms to StartCluster
	I0916 10:47:49.465932   92290 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:47:49.465991   92290 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:47:49.466514   92290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:47:49.466717   92290 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:47:49.466736   92290 start.go:241] waiting for startup goroutines ...
	I0916 10:47:49.466742   92290 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:47:49.467040   92290 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:47:49.469971   92290 out.go:177] * Enabled addons: 
	I0916 10:47:49.471155   92290 addons.go:510] duration metric: took 4.410097ms for enable addons: enabled=[]
	I0916 10:47:49.471190   92290 start.go:246] waiting for cluster config update ...
	I0916 10:47:49.471206   92290 start.go:255] writing updated cluster config ...
	I0916 10:47:49.472854   92290 out.go:201] 
	I0916 10:47:49.474148   92290 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:47:49.474260   92290 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:47:49.475611   92290 out.go:177] * Starting "ha-770465-m02" control-plane node in "ha-770465" cluster
	I0916 10:47:49.476613   92290 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:47:49.477986   92290 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:47:49.479390   92290 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:47:49.479414   92290 cache.go:56] Caching tarball of preloaded images
	I0916 10:47:49.479415   92290 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:47:49.479503   92290 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:47:49.479519   92290 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:47:49.479659   92290 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	W0916 10:47:49.499190   92290 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:47:49.499207   92290 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:47:49.499267   92290 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:47:49.499280   92290 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:47:49.499284   92290 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:47:49.499291   92290 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:47:49.499298   92290 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:47:49.500342   92290 image.go:273] response: 
	I0916 10:47:49.548217   92290 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:47:49.548255   92290 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:47:49.548294   92290 start.go:360] acquireMachinesLock for ha-770465-m02: {Name:mk1ae0810eb0d80ca7ae9fe74f31de5324d2e214 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:47:49.548354   92290 start.go:364] duration metric: took 43.367µs to acquireMachinesLock for "ha-770465-m02"
	I0916 10:47:49.548373   92290 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:47:49.548381   92290 fix.go:54] fixHost starting: m02
	I0916 10:47:49.548593   92290 cli_runner.go:164] Run: docker container inspect ha-770465-m02 --format={{.State.Status}}
	I0916 10:47:49.565101   92290 fix.go:112] recreateIfNeeded on ha-770465-m02: state=Stopped err=<nil>
	W0916 10:47:49.565126   92290 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:47:49.566987   92290 out.go:177] * Restarting existing docker container for "ha-770465-m02" ...
	I0916 10:47:49.568403   92290 cli_runner.go:164] Run: docker start ha-770465-m02
	I0916 10:47:49.842764   92290 cli_runner.go:164] Run: docker container inspect ha-770465-m02 --format={{.State.Status}}
	I0916 10:47:49.862187   92290 kic.go:430] container "ha-770465-m02" state is running.
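
The fix path above is an inspect-then-start round trip through the Docker CLI: read {{.State.Status}}, and if the container is stopped, `docker start` it. A minimal Go sketch of that pattern (illustrative only, not minikube's fix.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState shells out to `docker container inspect`, matching the
    // cli_runner lines in the log, and returns e.g. "running" or "exited".
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        name := "ha-770465-m02" // the node container from this run
        state, err := containerState(name)
        if err != nil {
            panic(err)
        }
        if state != "running" {
            // mirrors `Run: docker start ha-770465-m02` above
            if err := exec.Command("docker", "start", name).Run(); err != nil {
                panic(err)
            }
        }
        fmt.Println("container", name, "state:", state)
    }
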
	I0916 10:47:49.862608   92290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m02
	I0916 10:47:49.881829   92290 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:47:49.882049   92290 machine.go:93] provisionDockerMachine start ...
	I0916 10:47:49.882098   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:47:49.900549   92290 main.go:141] libmachine: Using SSH client type: native
	I0916 10:47:49.900747   92290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0916 10:47:49.900761   92290 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:47:49.901401   92290 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41262->127.0.0.1:32818: read: connection reset by peer
	I0916 10:47:53.043209   92290 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m02
	
	I0916 10:47:53.043239   92290 ubuntu.go:169] provisioning hostname "ha-770465-m02"
	I0916 10:47:53.043301   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:47:53.060993   92290 main.go:141] libmachine: Using SSH client type: native
	I0916 10:47:53.061181   92290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0916 10:47:53.061200   92290 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-770465-m02 && echo "ha-770465-m02" | sudo tee /etc/hostname
	I0916 10:47:53.231127   92290 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m02
	
	I0916 10:47:53.231220   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:47:53.248570   92290 main.go:141] libmachine: Using SSH client type: native
	I0916 10:47:53.248747   92290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0916 10:47:53.248765   92290 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-770465-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-770465-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-770465-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:47:53.404032   92290 main.go:141] libmachine: SSH cmd err, output: <nil>: 
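
The shell block above is an idempotent hostname pin: rewrite any existing 127.0.1.1 entry to the new hostname, otherwise append one. The same logic sketched in Go, with the path and hostname taken from this run (this is not minikube's provisioner):

    package main

    import (
        "os"
        "strings"
    )

    // patchHosts mirrors the shell block above: rewrite an existing
    // 127.0.1.1 line to the node hostname, or append one if absent.
    func patchHosts(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        replaced := false
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + hostname
                replaced = true
            }
        }
        if !replaced {
            lines = append(lines, "127.0.1.1 "+hostname)
        }
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
        if err := patchHosts("/etc/hosts", "ha-770465-m02"); err != nil {
            panic(err)
        }
    }
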
	I0916 10:47:53.404069   92290 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:47:53.404100   92290 ubuntu.go:177] setting up certificates
	I0916 10:47:53.404114   92290 provision.go:84] configureAuth start
	I0916 10:47:53.404166   92290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m02
	I0916 10:47:53.420931   92290 provision.go:143] copyHostCerts
	I0916 10:47:53.420974   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:47:53.421020   92290 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:47:53.421031   92290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:47:53.421111   92290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:47:53.421266   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:47:53.421302   92290 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:47:53.421314   92290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:47:53.421380   92290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:47:53.421489   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:47:53.421513   92290 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:47:53.421520   92290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:47:53.421560   92290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:47:53.421644   92290 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.ha-770465-m02 san=[127.0.0.1 192.168.49.3 ha-770465-m02 localhost minikube]
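
provision.go signs a per-node server certificate against the shared CA with exactly the SAN set logged above (127.0.0.1, 192.168.49.3, ha-770465-m02, localhost, minikube). A self-contained Go sketch of that signing step; for brevity it generates a throwaway CA in-process rather than parsing ca.pem/ca-key.pem from the logged paths:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA so the sketch is self-contained; the real flow would
        // load ca.pem and ca-key.pem from the paths in the log instead.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SAN set from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-770465-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-770465-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
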
	I0916 10:47:53.483276   92290 provision.go:177] copyRemoteCerts
	I0916 10:47:53.483341   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:47:53.483380   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:47:53.500989   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:47:53.595970   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:47:53.596021   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:47:53.617764   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:47:53.617818   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:47:53.639110   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:47:53.639173   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:47:53.660461   92290 provision.go:87] duration metric: took 256.332364ms to configureAuth
	I0916 10:47:53.660488   92290 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:47:53.660729   92290 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:47:53.660744   92290 machine.go:96] duration metric: took 3.77868237s to provisionDockerMachine
	I0916 10:47:53.660753   92290 start.go:293] postStartSetup for "ha-770465-m02" (driver="docker")
	I0916 10:47:53.660765   92290 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:47:53.660816   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:47:53.660858   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:47:53.678890   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:47:53.777491   92290 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:47:53.780556   92290 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:47:53.780584   92290 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:47:53.780592   92290 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:47:53.780600   92290 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:47:53.780609   92290 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:47:53.780661   92290 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:47:53.780735   92290 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:47:53.780747   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:47:53.780825   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:47:53.788464   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:47:53.809184   92290 start.go:296] duration metric: took 148.41286ms for postStartSetup
	I0916 10:47:53.809279   92290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:47:53.809325   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:47:53.825705   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:47:53.916521   92290 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:47:53.920624   92290 fix.go:56] duration metric: took 4.372236561s for fixHost
	I0916 10:47:53.920653   92290 start.go:83] releasing machines lock for "ha-770465-m02", held for 4.372286028s
	I0916 10:47:53.920720   92290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m02
	I0916 10:47:53.938883   92290 out.go:177] * Found network options:
	I0916 10:47:53.940335   92290 out.go:177]   - NO_PROXY=192.168.49.2
	W0916 10:47:53.941685   92290 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:47:53.941719   92290 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:47:53.941780   92290 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:47:53.941815   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:47:53.941859   92290 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:47:53.941920   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:47:53.960457   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:47:53.961064   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:47:54.131721   92290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:47:54.149434   92290 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:47:54.149510   92290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:47:54.157855   92290 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:47:54.157882   92290 start.go:495] detecting cgroup driver to use...
	I0916 10:47:54.157919   92290 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:47:54.157976   92290 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:47:54.169931   92290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:47:54.180276   92290 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:47:54.180331   92290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:47:54.191782   92290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:47:54.201747   92290 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:47:54.287128   92290 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:47:54.371388   92290 docker.go:233] disabling docker service ...
	I0916 10:47:54.371457   92290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:47:54.382897   92290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:47:54.393477   92290 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:47:54.480500   92290 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:47:54.566791   92290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:47:54.577108   92290 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:47:54.591539   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:47:54.600257   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:47:54.609242   92290 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:47:54.609315   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:47:54.618267   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:47:54.627031   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:47:54.636029   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:47:54.644813   92290 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:47:54.653215   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:47:54.662607   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:47:54.672682   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:47:54.682449   92290 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:47:54.691231   92290 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:47:54.700272   92290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:47:54.795281   92290 ssh_runner.go:195] Run: sudo systemctl restart containerd
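
Each `sed -i -r` above edits one key of /etc/containerd/config.toml in place before the restart: the pause image, the OOM-score restriction, the cgroup driver (SystemdCgroup = false, since the host uses cgroupfs), and the CNI conf dir. The same rewrites sketched as Go regexps, purely as an illustration of what the commands do:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        path := "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        s := string(data)
        // Each pattern/replacement pair mirrors one `sed -i -r` from the log.
        for pat, rep := range map[string]string{
            `(?m)^( *)sandbox_image = .*$`:          `${1}sandbox_image = "registry.k8s.io/pause:3.10"`,
            `(?m)^( *)restrict_oom_score_adj = .*$`: `${1}restrict_oom_score_adj = false`,
            `(?m)^( *)SystemdCgroup = .*$`:          `${1}SystemdCgroup = false`,
            `(?m)^( *)conf_dir = .*$`:               `${1}conf_dir = "/etc/cni/net.d"`,
        } {
            s = regexp.MustCompile(pat).ReplaceAllString(s, rep)
        }
        if err := os.WriteFile(path, []byte(s), 0644); err != nil {
            panic(err)
        }
    }
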
	I0916 10:47:55.021407   92290 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:47:55.021465   92290 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:47:55.025175   92290 start.go:563] Will wait 60s for crictl version
	I0916 10:47:55.025226   92290 ssh_runner.go:195] Run: which crictl
	I0916 10:47:55.028329   92290 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:47:55.061596   92290 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:47:55.061664   92290 ssh_runner.go:195] Run: containerd --version
	I0916 10:47:55.084654   92290 ssh_runner.go:195] Run: containerd --version
	I0916 10:47:55.108544   92290 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:47:55.110691   92290 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:47:55.111897   92290 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:47:55.128059   92290 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:47:55.131545   92290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:47:55.141534   92290 mustload.go:65] Loading cluster: ha-770465
	I0916 10:47:55.141778   92290 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:47:55.141994   92290 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:47:55.158474   92290 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:47:55.158725   92290 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465 for IP: 192.168.49.3
	I0916 10:47:55.158737   92290 certs.go:194] generating shared ca certs ...
	I0916 10:47:55.158750   92290 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:47:55.158891   92290 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:47:55.158929   92290 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:47:55.158938   92290 certs.go:256] generating profile certs ...
	I0916 10:47:55.159021   92290 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key
	I0916 10:47:55.159082   92290 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.906775d8
	I0916 10:47:55.159129   92290 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key
	I0916 10:47:55.159144   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:47:55.159166   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:47:55.159182   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:47:55.159201   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:47:55.159218   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:47:55.159232   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:47:55.159243   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:47:55.159254   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:47:55.159306   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:47:55.159338   92290 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:47:55.159354   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:47:55.159386   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:47:55.159410   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:47:55.159433   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:47:55.159474   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:47:55.159504   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:47:55.159517   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:47:55.159533   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:47:55.159577   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:47:55.177206   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:47:55.264080   92290 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:47:55.267763   92290 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:47:55.279323   92290 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:47:55.282532   92290 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 10:47:55.293843   92290 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:47:55.297370   92290 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:47:55.309313   92290 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:47:55.313012   92290 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 10:47:55.324465   92290 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:47:55.327569   92290 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:47:55.338984   92290 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:47:55.342171   92290 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:47:55.353874   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:47:55.380503   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:47:55.405450   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:47:55.440192   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:47:55.468446   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 10:47:55.492960   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:47:55.516179   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:47:55.547168   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:47:55.572667   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:47:55.595706   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:47:55.618075   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:47:55.649856   92290 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:47:55.668467   92290 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 10:47:55.689218   92290 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:47:55.710249   92290 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 10:47:55.731283   92290 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:47:55.757010   92290 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:47:55.775404   92290 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:47:55.792187   92290 ssh_runner.go:195] Run: openssl version
	I0916 10:47:55.797113   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:47:55.806092   92290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:47:55.809278   92290 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:47:55.809334   92290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:47:55.815451   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:47:55.825431   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:47:55.839121   92290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:47:55.843946   92290 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:47:55.844012   92290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:47:55.853318   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 10:47:55.862684   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:47:55.871865   92290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:47:55.875429   92290 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:47:55.875510   92290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:47:55.881821   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:47:55.889926   92290 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:47:55.893140   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:47:55.899075   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:47:55.904998   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:47:55.911028   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:47:55.916931   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:47:55.924402   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
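
`openssl x509 -checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds, which is how the probes above decide whether any control-plane cert needs regeneration within 24 hours. The equivalent check expressed in Go:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // One of the certs probed in the log above.
        data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // -checkend 86400: fail if the cert expires within the next 24h.
        if time.Until(cert.NotAfter) < 86400*time.Second {
            fmt.Println("certificate expires within 86400s")
            os.Exit(1)
        }
        fmt.Println("certificate is valid beyond the check window")
    }
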
	I0916 10:47:55.933479   92290 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.1 containerd true true} ...
	I0916 10:47:55.933617   92290 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-770465-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:47:55.933661   92290 kube-vip.go:115] generating kube-vip config ...
	I0916 10:47:55.933714   92290 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:47:55.953628   92290 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:47:55.953706   92290 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
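
With the ip_vs modules unavailable, kube-vip falls back to ARP mode: the manager containers on each control-plane node compete for the Lease named in the manifest (plndr-cp-lock in kube-system), and the current holder answers ARP for the VIP 192.168.49.254 on eth0. A sketch for inspecting that Lease with client-go, assuming a kubeconfig in the default location:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        lease, err := cs.CoordinationV1().Leases("kube-system").
            Get(context.Background(), "plndr-cp-lock", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // The holder is the control-plane node currently answering ARP for
        // the VIP (192.168.49.254 in this run).
        if lease.Spec.HolderIdentity != nil {
            fmt.Println("VIP held by:", *lease.Spec.HolderIdentity)
        }
    }
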
	I0916 10:47:55.953767   92290 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:47:55.963093   92290 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:47:55.963162   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:47:55.971935   92290 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:47:55.988535   92290 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:47:56.004256   92290 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:47:56.021253   92290 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:47:56.026783   92290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:47:56.042668   92290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:47:56.136519   92290 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:47:56.150256   92290 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:47:56.150802   92290 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:47:56.152716   92290 out.go:177] * Verifying Kubernetes components...
	I0916 10:47:56.153893   92290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:47:56.259587   92290 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:47:56.271543   92290 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:47:56.271832   92290 kapi.go:59] client config for ha-770465: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:47:56.271896   92290 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 10:47:56.272131   92290 node_ready.go:35] waiting up to 6m0s for node "ha-770465-m02" to be "Ready" ...
	I0916 10:47:56.272223   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:47:56.272233   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:56.272244   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:56.272253   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:58.323111   92290 round_trippers.go:574] Response Status: 200 OK in 2050 milliseconds
	I0916 10:47:58.324219   92290 node_ready.go:49] node "ha-770465-m02" has status "Ready":"True"
	I0916 10:47:58.324248   92290 node_ready.go:38] duration metric: took 2.052097906s for node "ha-770465-m02" to be "Ready" ...
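
The node_ready.go wait above is a poll loop against GET /api/v1/nodes/<name> that succeeds once the NodeReady condition is True. A compact client-go sketch of the same wait (not minikube's code):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has condition Ready=True.
    func nodeReady(cs *kubernetes.Clientset, name string) bool {
        node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false // treat transient errors as "not ready yet"
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same budget as the log: up to 6m0s for the node to report Ready.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) { return nodeReady(cs, "ha-770465-m02"), nil })
        if err != nil {
            panic(err)
        }
        fmt.Println(`node "ha-770465-m02" is Ready`)
    }
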
	I0916 10:47:58.324266   92290 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:47:58.324341   92290 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:47:58.324360   92290 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:47:58.324430   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:47:58.324441   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:58.324452   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:58.324465   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:58.428170   92290 round_trippers.go:574] Response Status: 200 OK in 103 milliseconds
	I0916 10:47:58.442478   92290 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:58.442592   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:47:58.442604   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:58.442614   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:58.442620   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:58.444903   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:47:58.445618   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:47:58.445640   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:58.445651   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:58.445657   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:58.447340   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:47:58.447796   92290 pod_ready.go:93] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"True"
	I0916 10:47:58.447816   92290 pod_ready.go:82] duration metric: took 5.311089ms for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:58.447826   92290 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:58.447897   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:47:58.447906   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:58.447915   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:58.447919   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:58.449836   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:47:58.450285   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:47:58.450299   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:58.450307   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:58.450311   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:58.451905   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:47:58.452270   92290 pod_ready.go:93] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"True"
	I0916 10:47:58.452287   92290 pod_ready.go:82] duration metric: took 4.450951ms for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:58.452297   92290 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:58.452360   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465
	I0916 10:47:58.452368   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:58.452375   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:58.452379   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:58.453962   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:47:58.454401   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:47:58.454414   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:58.454421   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:58.454426   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:58.456023   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:47:58.456407   92290 pod_ready.go:93] pod "etcd-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:47:58.456430   92290 pod_ready.go:82] duration metric: took 4.123047ms for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:58.456438   92290 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:58.456480   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:47:58.456487   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:58.456494   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:58.456498   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:58.458302   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:47:58.458799   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:47:58.458814   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:58.458820   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:58.458824   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:58.460466   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:47:58.460950   92290 pod_ready.go:93] pod "etcd-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:47:58.460970   92290 pod_ready.go:82] duration metric: took 4.525034ms for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:58.460981   92290 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:58.461100   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:47:58.461113   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:58.461124   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:58.461132   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:58.462879   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:47:58.524525   92290 request.go:632] Waited for 61.139987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:47:58.524619   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:47:58.524630   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:58.524637   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:58.524641   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:58.527275   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:47:58.527695   92290 pod_ready.go:93] pod "etcd-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:47:58.527712   92290 pod_ready.go:82] duration metric: took 66.724139ms for pod "etcd-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
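
The recurring "Waited for ... due to client-side throttling" lines are client-go's token-bucket rate limiter at work: with QPS:0 and Burst:0 in the rest.Config dump above, client-go falls back to its defaults (5 requests/s, burst 10), so the back-to-back pod and node GETs queue up. Raising the limits on the rest.Config, as a sketch:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // sustained request rate before the limiter kicks in
        cfg.Burst = 100 // short bursts allowed above QPS
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
        // Requests through this clientset throttle far less often.
    }
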
	I0916 10:47:58.527765   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:58.725177   92290 request.go:632] Waited for 197.339093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465
	I0916 10:47:58.725254   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465
	I0916 10:47:58.725262   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:58.725269   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:58.725287   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:58.727897   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:47:58.924905   92290 request.go:632] Waited for 196.385313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:47:58.924954   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:47:58.924959   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:58.924966   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:58.924970   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:58.927348   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:47:58.927873   92290 pod_ready.go:93] pod "kube-apiserver-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:47:58.927890   92290 pod_ready.go:82] duration metric: took 400.114485ms for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:58.927900   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:59.125115   92290 request.go:632] Waited for 197.120533ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m02
	I0916 10:47:59.125194   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m02
	I0916 10:47:59.125203   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:59.125214   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:59.125222   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:59.128080   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:47:59.325183   92290 request.go:632] Waited for 196.361934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:47:59.325250   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:47:59.325257   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:59.325264   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:59.325268   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:59.327266   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:47:59.327799   92290 pod_ready.go:93] pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:47:59.327819   92290 pod_ready.go:82] duration metric: took 399.911375ms for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:59.327833   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:59.524843   92290 request.go:632] Waited for 196.913996ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:47:59.524899   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:47:59.524904   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:59.524911   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:59.524915   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:59.527639   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:47:59.724529   92290 request.go:632] Waited for 196.295558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:47:59.724597   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:47:59.724602   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:59.724623   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:59.724626   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:59.727090   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:47:59.727636   92290 pod_ready.go:93] pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:47:59.727658   92290 pod_ready.go:82] duration metric: took 399.816477ms for pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:59.727671   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:47:59.924730   92290 request.go:632] Waited for 196.985713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465
	I0916 10:47:59.924794   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465
	I0916 10:47:59.924800   92290 round_trippers.go:469] Request Headers:
	I0916 10:47:59.924807   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:47:59.924812   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:47:59.927452   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:00.125454   92290 request.go:632] Waited for 197.360242ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:00.125539   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:00.125548   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:00.125556   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:00.125562   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:00.134213   92290 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0916 10:48:00.134801   92290 pod_ready.go:93] pod "kube-controller-manager-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:00.134825   92290 pod_ready.go:82] duration metric: took 407.141562ms for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:00.134840   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:00.324818   92290 request.go:632] Waited for 189.887086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m02
	I0916 10:48:00.324892   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m02
	I0916 10:48:00.324899   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:00.324905   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:00.324911   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:00.327664   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:00.524834   92290 request.go:632] Waited for 196.412041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:00.524895   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:00.524901   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:00.524908   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:00.524912   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:00.527843   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:00.528286   92290 pod_ready.go:93] pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:00.528307   92290 pod_ready.go:82] duration metric: took 393.457895ms for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:00.528317   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:00.725457   92290 request.go:632] Waited for 197.042621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m03
	I0916 10:48:00.725514   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m03
	I0916 10:48:00.725520   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:00.725527   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:00.725531   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:00.728292   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:00.925413   92290 request.go:632] Waited for 196.4469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:00.925471   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:00.925476   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:00.925490   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:00.925493   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:00.928147   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:00.928612   92290 pod_ready.go:93] pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:00.928631   92290 pod_ready.go:82] duration metric: took 400.307439ms for pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:00.928642   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:01.124575   92290 request.go:632] Waited for 195.862641ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:48:01.124663   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:48:01.124671   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:01.124682   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:01.124690   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:01.130093   92290 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:48:01.325184   92290 request.go:632] Waited for 194.369495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:01.325245   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:01.325251   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:01.325258   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:01.325261   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:01.327960   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:01.524591   92290 request.go:632] Waited for 95.197303ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:48:01.524660   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:48:01.524667   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:01.524679   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:01.524725   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:01.527963   92290 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:48:01.724856   92290 request.go:632] Waited for 196.345191ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:01.724929   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:01.724941   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:01.724951   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:01.724958   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:01.728304   92290 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:48:01.929564   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:48:01.929590   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:01.929601   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:01.929607   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:01.933067   92290 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:48:02.125188   92290 request.go:632] Waited for 191.362599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:02.125238   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:02.125245   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:02.125254   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:02.125262   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:02.128136   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:02.429859   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:48:02.429883   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:02.429895   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:02.429900   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:02.432787   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:02.524509   92290 request.go:632] Waited for 91.118833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:02.524561   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:02.524566   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:02.524573   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:02.524576   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:02.527460   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:02.528025   92290 pod_ready.go:93] pod "kube-proxy-4qgcs" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:02.528044   92290 pod_ready.go:82] duration metric: took 1.599397179s for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:02.528054   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-78l2l" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:02.725508   92290 request.go:632] Waited for 197.388667ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-78l2l
	I0916 10:48:02.725585   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-78l2l
	I0916 10:48:02.725596   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:02.725605   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:02.725612   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:02.728462   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:02.925485   92290 request.go:632] Waited for 196.447469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:48:02.925541   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:48:02.925547   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:02.925554   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:02.925558   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:02.928461   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:02.928932   92290 pod_ready.go:93] pod "kube-proxy-78l2l" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:02.928956   92290 pod_ready.go:82] duration metric: took 400.894095ms for pod "kube-proxy-78l2l" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:02.928969   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:03.125163   92290 request.go:632] Waited for 196.099091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:48:03.125238   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:48:03.125251   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:03.125261   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:03.125266   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:03.127708   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:03.324856   92290 request.go:632] Waited for 196.38054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:03.324995   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:03.325020   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:03.325057   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:03.325075   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:03.328412   92290 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:48:03.525098   92290 request.go:632] Waited for 95.168868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:48:03.525220   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:48:03.525255   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:03.525283   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:03.525301   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:03.529605   92290 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:48:03.724811   92290 request.go:632] Waited for 194.325677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:03.724884   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:03.724893   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:03.724914   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:03.724929   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:03.731173   92290 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:48:03.929756   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:48:03.929787   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:03.929799   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:03.929806   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:03.934073   92290 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:48:04.124931   92290 request.go:632] Waited for 190.200604ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:04.125037   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:04.125051   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:04.125061   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:04.125073   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:04.130333   92290 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:48:04.429962   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:48:04.429984   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:04.429994   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:04.429999   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:04.432992   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:04.524920   92290 request.go:632] Waited for 91.327203ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:04.524991   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:04.524996   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:04.525005   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:04.525013   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:04.527783   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:04.528238   92290 pod_ready.go:93] pod "kube-proxy-gd2mt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:04.528259   92290 pod_ready.go:82] duration metric: took 1.599279355s for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:04.528269   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qlspc" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:04.724632   92290 request.go:632] Waited for 196.294139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlspc
	I0916 10:48:04.724713   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlspc
	I0916 10:48:04.724725   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:04.724738   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:04.724746   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:04.727443   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:04.925449   92290 request.go:632] Waited for 197.329028ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:04.925498   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:04.925503   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:04.925510   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:04.925519   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:04.928077   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:04.928550   92290 pod_ready.go:93] pod "kube-proxy-qlspc" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:04.928567   92290 pod_ready.go:82] duration metric: took 400.291916ms for pod "kube-proxy-qlspc" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:04.928584   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:05.124646   92290 request.go:632] Waited for 196.000272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:48:05.124736   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:48:05.124778   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:05.124796   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:05.124804   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:05.127301   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:05.325385   92290 request.go:632] Waited for 197.389665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:05.325447   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:05.325454   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:05.325467   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:05.325475   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:05.328259   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:05.524928   92290 request.go:632] Waited for 95.282643ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:48:05.524998   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:48:05.525006   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:05.525018   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:05.525027   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:05.527905   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:05.724980   92290 request.go:632] Waited for 196.402977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:05.725051   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:05.725059   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:05.725069   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:05.725079   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:05.727991   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:05.929514   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:48:05.929535   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:05.929542   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:05.929546   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:05.932447   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:06.125363   92290 request.go:632] Waited for 192.369195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:06.125435   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:06.125442   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:06.125451   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:06.125456   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:06.128148   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:06.429648   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:48:06.429669   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:06.429678   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:06.429683   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:06.432411   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:06.525414   92290 request.go:632] Waited for 92.318523ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:06.525493   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:06.525500   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:06.525508   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:06.525513   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:06.528257   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:06.528750   92290 pod_ready.go:93] pod "kube-scheduler-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:06.528769   92290 pod_ready.go:82] duration metric: took 1.600179419s for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:06.528778   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:06.725184   92290 request.go:632] Waited for 196.341997ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m02
	I0916 10:48:06.725286   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m02
	I0916 10:48:06.725295   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:06.725302   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:06.725307   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:06.728004   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:06.924945   92290 request.go:632] Waited for 196.369493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:06.925008   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:06.925015   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:06.925022   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:06.925027   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:06.927768   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:06.928321   92290 pod_ready.go:93] pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:06.928348   92290 pod_ready.go:82] duration metric: took 399.562237ms for pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:06.928361   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:07.125453   92290 request.go:632] Waited for 197.023407ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m03
	I0916 10:48:07.125532   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m03
	I0916 10:48:07.125544   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:07.125555   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:07.125561   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:07.128246   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:07.325254   92290 request.go:632] Waited for 196.335757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:07.325326   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:07.325333   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:07.325343   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:07.325356   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:07.328088   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:07.328516   92290 pod_ready.go:93] pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:07.328534   92290 pod_ready.go:82] duration metric: took 400.163952ms for pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:07.328545   92290 pod_ready.go:39] duration metric: took 9.004262152s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
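Editor's note: the pod_ready.go loop above polls each system-critical pod until its PodReady condition is True. A minimal sketch of that readiness gate using client-go; the 2s interval, helper name, and kubeconfig path are illustrative, and minikube's actual loop differs in detail:

// sketch: poll a pod until its Ready condition reports True
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	c, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), c, "kube-system", "kube-scheduler-ha-770465-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("ready")
}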
	I0916 10:48:07.328558   92290 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:48:07.328609   92290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:48:07.340002   92290 api_server.go:72] duration metric: took 11.189686818s to wait for apiserver process to appear ...
	I0916 10:48:07.340023   92290 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:48:07.340043   92290 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:48:07.343473   92290 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:48:07.343541   92290 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0916 10:48:07.343554   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:07.343565   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:07.343573   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:07.344352   92290 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:48:07.344449   92290 api_server.go:141] control plane version: v1.31.1
	I0916 10:48:07.344467   92290 api_server.go:131] duration metric: took 4.438183ms to wait for apiserver health ...
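Editor's note: after the pgrep process check, the health wait at api_server.go:253 is a plain HTTPS GET whose body is literally "ok" when the apiserver is healthy, as the log shows. A minimal sketch; TLS verification is skipped here for brevity, whereas minikube trusts the cluster CA:

// sketch: the /healthz probe behind "returned 200: ok"
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}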
	I0916 10:48:07.344480   92290 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:48:07.524941   92290 request.go:632] Waited for 180.392509ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:48:07.525027   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:48:07.525039   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:07.525049   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:07.525059   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:07.530196   92290 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:48:07.537508   92290 system_pods.go:59] 26 kube-system pods found
	I0916 10:48:07.537541   92290 system_pods.go:61] "coredns-7c65d6cfc9-9lw9q" [4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:48:07.537549   92290 system_pods.go:61] "coredns-7c65d6cfc9-sbs22" [89925692-76b4-481f-bac7-16f06bea792a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:48:07.537558   92290 system_pods.go:61] "etcd-ha-770465" [041e8a27-b6a3-4201-990f-c367da8c5eb5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 10:48:07.537564   92290 system_pods.go:61] "etcd-ha-770465-m02" [1f922c60-bf68-44df-aa8c-42dce50e4dac] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 10:48:07.537568   92290 system_pods.go:61] "etcd-ha-770465-m03" [876ca26f-ca18-47df-9bfb-302321d66136] Running
	I0916 10:48:07.537572   92290 system_pods.go:61] "kindnet-66kfj" [9cd2abf1-a30e-40f4-af5c-59d50e367b36] Running
	I0916 10:48:07.537575   92290 system_pods.go:61] "kindnet-bflwn" [59d75712-5683-4b1c-a6ef-2a669d75da7a] Running
	I0916 10:48:07.537579   92290 system_pods.go:61] "kindnet-grjh8" [98658171-1486-44a9-8a20-0d77ea019206] Running
	I0916 10:48:07.537582   92290 system_pods.go:61] "kindnet-kht59" [3124d723-6fc6-4dce-9ecf-bb40b4665208] Running
	I0916 10:48:07.537588   92290 system_pods.go:61] "kube-apiserver-ha-770465" [aa8ba784-661f-4074-b0d3-f98cddf8ce8c] Running
	I0916 10:48:07.537592   92290 system_pods.go:61] "kube-apiserver-ha-770465-m02" [7acdd845-890e-449a-b112-2efbc19a7ef5] Running
	I0916 10:48:07.537599   92290 system_pods.go:61] "kube-apiserver-ha-770465-m03" [8ab3c55a-d726-456c-803b-e13c1032b13a] Running
	I0916 10:48:07.537608   92290 system_pods.go:61] "kube-controller-manager-ha-770465" [f0a59ee1-bf5e-432d-a185-f9497c75ada4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:48:07.537620   92290 system_pods.go:61] "kube-controller-manager-ha-770465-m02" [55d7421b-5679-416d-a6ae-c36738c1ec32] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:48:07.537627   92290 system_pods.go:61] "kube-controller-manager-ha-770465-m03" [37613674-2ba4-4425-824a-d00d16ea7b62] Running
	I0916 10:48:07.537632   92290 system_pods.go:61] "kube-proxy-4qgcs" [0dbed632-c44c-4a17-8486-b328e2cc41d3] Running
	I0916 10:48:07.537635   92290 system_pods.go:61] "kube-proxy-78l2l" [2b7f1ea3-9b2d-46d4-aa98-951e1c246baa] Running
	I0916 10:48:07.537638   92290 system_pods.go:61] "kube-proxy-gd2mt" [fc3bf04d-635a-4264-883b-2fd72cac2e24] Running
	I0916 10:48:07.537641   92290 system_pods.go:61] "kube-proxy-qlspc" [703b3f60-1294-49fd-aab1-35474b771351] Running
	I0916 10:48:07.537646   92290 system_pods.go:61] "kube-scheduler-ha-770465" [85f95d4d-a2d4-45cf-8afe-be446333b9d0] Running
	I0916 10:48:07.537649   92290 system_pods.go:61] "kube-scheduler-ha-770465-m02" [ec894fe5-a992-49f2-a96f-caab3848f7dd] Running
	I0916 10:48:07.537652   92290 system_pods.go:61] "kube-scheduler-ha-770465-m03" [0c6e312a-4eca-4ca3-b0d9-d70f7bf5087c] Running
	I0916 10:48:07.537655   92290 system_pods.go:61] "kube-vip-ha-770465" [bf294b8a-9d09-473e-964e-b776614e2969] Running
	I0916 10:48:07.537659   92290 system_pods.go:61] "kube-vip-ha-770465-m02" [ffd7ee7b-efe1-4d2d-b2f8-5fd898de7dbf] Running
	I0916 10:48:07.537661   92290 system_pods.go:61] "kube-vip-ha-770465-m03" [408226b8-b640-4b83-bd7e-a61b37724197] Running
	I0916 10:48:07.537664   92290 system_pods.go:61] "storage-provisioner" [cf470925-4874-4744-8015-700e93ab924f] Running
	I0916 10:48:07.537670   92290 system_pods.go:74] duration metric: took 193.181756ms to wait for pod list to return data ...
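Editor's note: lines like "Running / Ready:ContainersNotReady (containers with unready status: [coredns])" show a pod whose Phase is still Running while its Ready/ContainersReady conditions are False. A sketch of how such a line can be derived from the Pod status; the helper and formatting are illustrative, not minikube's exact code:

// sketch: rendering phase plus non-True readiness conditions
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func describe(pod *corev1.Pod) string {
	s := string(pod.Status.Phase)
	for _, c := range pod.Status.Conditions {
		if (c.Type == corev1.PodReady || c.Type == corev1.ContainersReady) && c.Status != corev1.ConditionTrue {
			s += fmt.Sprintf(" / %s:%s (%s)", c.Type, c.Reason, c.Message)
		}
	}
	return s
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Phase: corev1.PodRunning,
		Conditions: []corev1.PodCondition{{
			Type: corev1.PodReady, Status: corev1.ConditionFalse,
			Reason: "ContainersNotReady", Message: "containers with unready status: [etcd]",
		}},
	}}
	fmt.Println(describe(pod)) // Running / Ready:ContainersNotReady (containers with unready status: [etcd])
}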
	I0916 10:48:07.537679   92290 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:48:07.725090   92290 request.go:632] Waited for 187.343194ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:48:07.725141   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:48:07.725146   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:07.725153   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:07.725156   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:07.728327   92290 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:48:07.728535   92290 default_sa.go:45] found service account: "default"
	I0916 10:48:07.728550   92290 default_sa.go:55] duration metric: took 190.864952ms for default service account to be created ...
	I0916 10:48:07.728559   92290 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:48:07.924911   92290 request.go:632] Waited for 196.294925ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:48:07.924961   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:48:07.924969   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:07.924979   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:07.924986   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:07.929566   92290 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:48:07.936626   92290 system_pods.go:86] 26 kube-system pods found
	I0916 10:48:07.936658   92290 system_pods.go:89] "coredns-7c65d6cfc9-9lw9q" [4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:48:07.936665   92290 system_pods.go:89] "coredns-7c65d6cfc9-sbs22" [89925692-76b4-481f-bac7-16f06bea792a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:48:07.936674   92290 system_pods.go:89] "etcd-ha-770465" [041e8a27-b6a3-4201-990f-c367da8c5eb5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 10:48:07.936678   92290 system_pods.go:89] "etcd-ha-770465-m02" [1f922c60-bf68-44df-aa8c-42dce50e4dac] Running
	I0916 10:48:07.936682   92290 system_pods.go:89] "etcd-ha-770465-m03" [876ca26f-ca18-47df-9bfb-302321d66136] Running
	I0916 10:48:07.936686   92290 system_pods.go:89] "kindnet-66kfj" [9cd2abf1-a30e-40f4-af5c-59d50e367b36] Running
	I0916 10:48:07.936690   92290 system_pods.go:89] "kindnet-bflwn" [59d75712-5683-4b1c-a6ef-2a669d75da7a] Running
	I0916 10:48:07.936693   92290 system_pods.go:89] "kindnet-grjh8" [98658171-1486-44a9-8a20-0d77ea019206] Running
	I0916 10:48:07.936696   92290 system_pods.go:89] "kindnet-kht59" [3124d723-6fc6-4dce-9ecf-bb40b4665208] Running
	I0916 10:48:07.936700   92290 system_pods.go:89] "kube-apiserver-ha-770465" [aa8ba784-661f-4074-b0d3-f98cddf8ce8c] Running
	I0916 10:48:07.936704   92290 system_pods.go:89] "kube-apiserver-ha-770465-m02" [7acdd845-890e-449a-b112-2efbc19a7ef5] Running
	I0916 10:48:07.936707   92290 system_pods.go:89] "kube-apiserver-ha-770465-m03" [8ab3c55a-d726-456c-803b-e13c1032b13a] Running
	I0916 10:48:07.936713   92290 system_pods.go:89] "kube-controller-manager-ha-770465" [f0a59ee1-bf5e-432d-a185-f9497c75ada4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:48:07.936721   92290 system_pods.go:89] "kube-controller-manager-ha-770465-m02" [55d7421b-5679-416d-a6ae-c36738c1ec32] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:48:07.936727   92290 system_pods.go:89] "kube-controller-manager-ha-770465-m03" [37613674-2ba4-4425-824a-d00d16ea7b62] Running
	I0916 10:48:07.936734   92290 system_pods.go:89] "kube-proxy-4qgcs" [0dbed632-c44c-4a17-8486-b328e2cc41d3] Running
	I0916 10:48:07.936737   92290 system_pods.go:89] "kube-proxy-78l2l" [2b7f1ea3-9b2d-46d4-aa98-951e1c246baa] Running
	I0916 10:48:07.936743   92290 system_pods.go:89] "kube-proxy-gd2mt" [fc3bf04d-635a-4264-883b-2fd72cac2e24] Running
	I0916 10:48:07.936746   92290 system_pods.go:89] "kube-proxy-qlspc" [703b3f60-1294-49fd-aab1-35474b771351] Running
	I0916 10:48:07.936749   92290 system_pods.go:89] "kube-scheduler-ha-770465" [85f95d4d-a2d4-45cf-8afe-be446333b9d0] Running
	I0916 10:48:07.936753   92290 system_pods.go:89] "kube-scheduler-ha-770465-m02" [ec894fe5-a992-49f2-a96f-caab3848f7dd] Running
	I0916 10:48:07.936758   92290 system_pods.go:89] "kube-scheduler-ha-770465-m03" [0c6e312a-4eca-4ca3-b0d9-d70f7bf5087c] Running
	I0916 10:48:07.936765   92290 system_pods.go:89] "kube-vip-ha-770465" [bf294b8a-9d09-473e-964e-b776614e2969] Running
	I0916 10:48:07.936768   92290 system_pods.go:89] "kube-vip-ha-770465-m02" [ffd7ee7b-efe1-4d2d-b2f8-5fd898de7dbf] Running
	I0916 10:48:07.936771   92290 system_pods.go:89] "kube-vip-ha-770465-m03" [408226b8-b640-4b83-bd7e-a61b37724197] Running
	I0916 10:48:07.936774   92290 system_pods.go:89] "storage-provisioner" [cf470925-4874-4744-8015-700e93ab924f] Running
	I0916 10:48:07.936783   92290 system_pods.go:126] duration metric: took 208.219129ms to wait for k8s-apps to be running ...
	I0916 10:48:07.936792   92290 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:48:07.936830   92290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:48:07.948241   92290 system_svc.go:56] duration metric: took 11.440011ms WaitForService to wait for kubelet
	I0916 10:48:07.948273   92290 kubeadm.go:582] duration metric: took 11.797971462s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
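Editor's note: the WaitForService step runs exactly the command shown above and treats exit code 0 as "active". A minimal sketch using os/exec in place of minikube's SSH runner:

// sketch: the kubelet liveness check via systemctl exit status
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// same command as the log; --quiet suppresses output, so only the exit code matters
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet not active:", err) // non-zero exit => inactive or failed
		return
	}
	fmt.Println("kubelet active")
}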
	I0916 10:48:07.948294   92290 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:48:08.124577   92290 request.go:632] Waited for 176.195626ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:48:08.124658   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:48:08.124665   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:08.124672   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:08.124679   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:08.127773   92290 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:48:08.128961   92290 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:48:08.128985   92290 node_conditions.go:123] node cpu capacity is 8
	I0916 10:48:08.128996   92290 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:48:08.129000   92290 node_conditions.go:123] node cpu capacity is 8
	I0916 10:48:08.129004   92290 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:48:08.129007   92290 node_conditions.go:123] node cpu capacity is 8
	I0916 10:48:08.129010   92290 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:48:08.129013   92290 node_conditions.go:123] node cpu capacity is 8
	I0916 10:48:08.129017   92290 node_conditions.go:105] duration metric: took 180.717938ms to run NodePressure ...
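Editor's note: the NodePressure pass lists the nodes and reads the same status fields reported above (ephemeral-storage and cpu capacity), flagging pressure conditions. A minimal sketch, assuming an illustrative kubeconfig path:

// sketch: reading node capacity and pressure conditions
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	c, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := c.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, eph.String(), cpu.String())
		for _, cond := range n.Status.Conditions {
			if (cond.Type == corev1.NodeMemoryPressure || cond.Type == corev1.NodeDiskPressure) &&
				cond.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure: %s\n", cond.Type)
			}
		}
	}
}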
	I0916 10:48:08.129027   92290 start.go:241] waiting for startup goroutines ...
	I0916 10:48:08.129045   92290 start.go:255] writing updated cluster config ...
	I0916 10:48:08.131853   92290 out.go:201] 
	I0916 10:48:08.133542   92290 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:48:08.133672   92290 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:48:08.135755   92290 out.go:177] * Starting "ha-770465-m03" control-plane node in "ha-770465" cluster
	I0916 10:48:08.137569   92290 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:48:08.139100   92290 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:48:08.140649   92290 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:48:08.140673   92290 cache.go:56] Caching tarball of preloaded images
	I0916 10:48:08.140755   92290 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:48:08.140769   92290 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:48:08.140777   92290 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:48:08.140878   92290 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	W0916 10:48:08.160139   92290 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:48:08.160157   92290 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:48:08.160244   92290 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:48:08.160264   92290 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:48:08.160270   92290 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:48:08.160282   92290 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:48:08.160290   92290 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:48:08.161435   92290 image.go:273] response: 
	I0916 10:48:08.216839   92290 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:48:08.216874   92290 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:48:08.216904   92290 start.go:360] acquireMachinesLock for ha-770465-m03: {Name:mk5962b775140909e26682052ad5dc2dfc9dc910 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:48:08.216964   92290 start.go:364] duration metric: took 41.681µs to acquireMachinesLock for "ha-770465-m03"
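Editor's note: the lock spec logged above (Name, Delay:500ms, Timeout:10m0s) describes a retry-until-timeout acquisition guarding concurrent machine operations. A minimal stand-in using an exclusive lock file; this is not minikube's actual mutex package, and the path is hypothetical:

// sketch: retrying lock acquisition with a delay and deadline
package main

import (
	"fmt"
	"os"
	"time"
)

// acquire retries creating an exclusive lock file every delay until timeout.
func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/ha-770465-m03.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; safe to operate on the machine")
}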
	I0916 10:48:08.216981   92290 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:48:08.216985   92290 fix.go:54] fixHost starting: m03
	I0916 10:48:08.217197   92290 cli_runner.go:164] Run: docker container inspect ha-770465-m03 --format={{.State.Status}}
	I0916 10:48:08.234227   92290 fix.go:112] recreateIfNeeded on ha-770465-m03: state=Stopped err=<nil>
	W0916 10:48:08.234257   92290 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:48:08.236798   92290 out.go:177] * Restarting existing docker container for "ha-770465-m03" ...
	I0916 10:48:08.238374   92290 cli_runner.go:164] Run: docker start ha-770465-m03
	I0916 10:48:08.517199   92290 cli_runner.go:164] Run: docker container inspect ha-770465-m03 --format={{.State.Status}}
	I0916 10:48:08.535152   92290 kic.go:430] container "ha-770465-m03" state is running.
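Editor's note: the cli_runner pair above is `docker start` followed by `docker container inspect --format={{.State.Status}}` until the container reports running. A minimal sketch of that loop; the poll count and interval are illustrative:

// sketch: restart the container and wait for .State.Status == "running"
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func state(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	if err := exec.Command("docker", "start", "ha-770465-m03").Run(); err != nil {
		panic(err)
	}
	for i := 0; i < 20; i++ { // interval/attempts are illustrative
		s, err := state("ha-770465-m03")
		if err == nil && s == "running" {
			fmt.Println("container running")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("container did not reach running state")
}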
	I0916 10:48:08.535622   92290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m03
	I0916 10:48:08.554301   92290 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:48:08.554590   92290 machine.go:93] provisionDockerMachine start ...
	I0916 10:48:08.554781   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:48:08.572998   92290 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:08.573166   92290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0916 10:48:08.573178   92290 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:48:08.574202   92290 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53322->127.0.0.1:32823: read: connection reset by peer
	I0916 10:48:11.820550   92290 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m03
	
	I0916 10:48:11.820581   92290 ubuntu.go:169] provisioning hostname "ha-770465-m03"
	I0916 10:48:11.820651   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:48:11.843911   92290 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:11.844146   92290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0916 10:48:11.844171   92290 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-770465-m03 && echo "ha-770465-m03" | sudo tee /etc/hostname
	I0916 10:48:12.141329   92290 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m03
	
	I0916 10:48:12.141529   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:48:12.164178   92290 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:12.164416   92290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0916 10:48:12.164441   92290 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-770465-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-770465-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-770465-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:48:12.524688   92290 main.go:141] libmachine: SSH cmd err, output: <nil>: 
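Editor's note: the 10:48:08 "connection reset by peer" earlier is simply sshd not yet listening after the container restart, so provisioning redials until the handshake succeeds and then runs the hostname commands shown above. A minimal sketch with golang.org/x/crypto/ssh, using the key path and forwarded port from the log; the retry interval is illustrative:

// sketch: dial SSH with retries, then run the hostname provisioning command
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
	deadline := time.Now().Add(timeout)
	for {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		if time.Now().After(deadline) {
			return nil, err
		}
		time.Sleep(time.Second) // retry interval is illustrative
	}
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
		Timeout:         10 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:32823", cfg, time.Minute)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(`sudo hostname ha-770465-m03 && echo "ha-770465-m03" | sudo tee /etc/hostname`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}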
	I0916 10:48:12.524719   92290 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:48:12.524739   92290 ubuntu.go:177] setting up certificates
	I0916 10:48:12.524749   92290 provision.go:84] configureAuth start
	I0916 10:48:12.524803   92290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m03
	I0916 10:48:12.545235   92290 provision.go:143] copyHostCerts
	I0916 10:48:12.545274   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:48:12.545308   92290 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:48:12.545316   92290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:48:12.545399   92290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:48:12.545496   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:48:12.545520   92290 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:48:12.545525   92290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:48:12.545559   92290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:48:12.545615   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:48:12.545634   92290 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:48:12.545642   92290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:48:12.545671   92290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:48:12.545728   92290 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.ha-770465-m03 san=[127.0.0.1 192.168.49.4 ha-770465-m03 localhost minikube]
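Editor's note: the san=[...] list above maps directly onto the certificate template's IPAddresses and DNSNames. A minimal sketch with crypto/x509; the CA is self-generated here for brevity, whereas minikube signs with its existing ca.pem/ca-key.pem, and error handling is elided:

// sketch: issuing a server cert with the SANs shown in the log
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.ha-770465-m03"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-770465-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// the san=[...] list from the log, split by type:
		DNSNames:    []string{"ha-770465-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}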
	I0916 10:48:12.716110   92290 provision.go:177] copyRemoteCerts
	I0916 10:48:12.716166   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:48:12.716199   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:48:12.740925   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:48:12.930377   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:48:12.930450   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:48:12.957022   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:48:12.957097   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:48:13.040473   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:48:13.040540   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:48:13.132000   92290 provision.go:87] duration metric: took 607.236354ms to configureAuth
	I0916 10:48:13.132038   92290 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:48:13.132372   92290 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:48:13.132393   92290 machine.go:96] duration metric: took 4.577690057s to provisionDockerMachine
	I0916 10:48:13.132403   92290 start.go:293] postStartSetup for "ha-770465-m03" (driver="docker")
	I0916 10:48:13.132415   92290 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:48:13.132473   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:48:13.132519   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:48:13.151685   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:48:13.325433   92290 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:48:13.328667   92290 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:48:13.328696   92290 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:48:13.328704   92290 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:48:13.328711   92290 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:48:13.328720   92290 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:48:13.328773   92290 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:48:13.328840   92290 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:48:13.328848   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:48:13.328926   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:48:13.338247   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:48:13.362050   92290 start.go:296] duration metric: took 229.63016ms for postStartSetup
	I0916 10:48:13.362137   92290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:48:13.362191   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:48:13.385487   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:48:13.527692   92290 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:48:13.532229   92290 fix.go:56] duration metric: took 5.315237603s for fixHost
	I0916 10:48:13.532255   92290 start.go:83] releasing machines lock for "ha-770465-m03", held for 5.315281287s
	I0916 10:48:13.532330   92290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m03
	I0916 10:48:13.552268   92290 out.go:177] * Found network options:
	I0916 10:48:13.553604   92290 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 10:48:13.554878   92290 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:48:13.554907   92290 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:48:13.554935   92290 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:48:13.554953   92290 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:48:13.555027   92290 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:48:13.555074   92290 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:48:13.555130   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:48:13.555077   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:48:13.573459   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:48:13.574133   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:48:13.834746   92290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:48:14.057223   92290 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:48:14.057308   92290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:48:14.066368   92290 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
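The find/sed pair above patches any loopback CNI config in place (inserting a "name" field and pinning cniVersion) and then disables bridge/podman configs. A quick way to confirm the loopback patch, assuming the conventional file location:

    sudo cat /etc/cni/net.d/*loopback.conf*
    # expected to now contain, among other fields:
    #   "cniVersion": "1.0.0",
    #   "name": "loopback",
    #   "type": "loopback"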
	I0916 10:48:14.066397   92290 start.go:495] detecting cgroup driver to use...
	I0916 10:48:14.066429   92290 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:48:14.066465   92290 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:48:14.078556   92290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:48:14.088734   92290 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:48:14.088790   92290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:48:14.101268   92290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:48:14.111932   92290 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:48:14.196999   92290 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:48:14.282615   92290 docker.go:233] disabling docker service ...
	I0916 10:48:14.282684   92290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:48:14.294098   92290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:48:14.304623   92290 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:48:14.393643   92290 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:48:14.479834   92290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:48:14.490348   92290 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:48:14.505198   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:48:14.514070   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:48:14.522769   92290 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:48:14.522839   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:48:14.531914   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:48:14.540975   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:48:14.550178   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:48:14.558663   92290 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:48:14.567579   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:48:14.577361   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:48:14.586720   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:48:14.595876   92290 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:48:14.603384   92290 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:48:14.610904   92290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:48:14.690952   92290 ssh_runner.go:195] Run: sudo systemctl restart containerd
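The sed edits above leave containerd configured for the cgroupfs driver, the runc v2 shim, and the pinned pause image. A spot check of the resulting /etc/containerd/config.toml (values taken from the commands in this log; the file's exact layout is assumed):

    grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true
    sudo systemctl is-active containerd   # "active" once the restart above completes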
	I0916 10:48:14.965117   92290 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:48:14.965188   92290 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:48:14.969614   92290 start.go:563] Will wait 60s for crictl version
	I0916 10:48:14.969685   92290 ssh_runner.go:195] Run: which crictl
	I0916 10:48:14.973535   92290 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:48:15.049014   92290 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:48:15.049088   92290 ssh_runner.go:195] Run: containerd --version
	I0916 10:48:15.075959   92290 ssh_runner.go:195] Run: containerd --version
	I0916 10:48:15.127834   92290 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:48:15.129039   92290 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:48:15.130859   92290 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 10:48:15.132163   92290 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:48:15.149025   92290 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:48:15.152848   92290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
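The bash pipeline above rewrites /etc/hosts atomically via a temp file; the entry it pins, for reference:

    grep host.minikube.internal /etc/hosts
    # 192.168.49.1	host.minikube.internal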
	I0916 10:48:15.163875   92290 mustload.go:65] Loading cluster: ha-770465
	I0916 10:48:15.164119   92290 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:48:15.164327   92290 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:48:15.182180   92290 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:48:15.182428   92290 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465 for IP: 192.168.49.4
	I0916 10:48:15.182440   92290 certs.go:194] generating shared ca certs ...
	I0916 10:48:15.182453   92290 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:48:15.182561   92290 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:48:15.182622   92290 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:48:15.182637   92290 certs.go:256] generating profile certs ...
	I0916 10:48:15.182710   92290 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key
	I0916 10:48:15.182771   92290 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.0a02bdd9
	I0916 10:48:15.182806   92290 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key
	I0916 10:48:15.182817   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:48:15.182829   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:48:15.182842   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:48:15.182855   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:48:15.182867   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:48:15.182879   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:48:15.182892   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:48:15.182904   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:48:15.182951   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:48:15.182977   92290 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:48:15.182986   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:48:15.183009   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:48:15.183031   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:48:15.183051   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:48:15.183085   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:48:15.183109   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:48:15.183123   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:48:15.183134   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:48:15.183180   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:48:15.199943   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:48:15.288088   92290 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:48:15.291712   92290 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:48:15.303570   92290 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:48:15.306979   92290 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 10:48:15.318509   92290 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:48:15.321577   92290 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:48:15.333345   92290 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:48:15.336625   92290 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 10:48:15.348613   92290 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:48:15.351718   92290 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:48:15.363193   92290 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:48:15.366476   92290 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:48:15.379065   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:48:15.402380   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:48:15.425509   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:48:15.449140   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:48:15.471625   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 10:48:15.494501   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 10:48:15.518519   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:48:15.541024   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:48:15.563786   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:48:15.586420   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:48:15.609195   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:48:15.632936   92290 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:48:15.649679   92290 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 10:48:15.668052   92290 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:48:15.685636   92290 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 10:48:15.703362   92290 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:48:15.720665   92290 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:48:15.737593   92290 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:48:15.754584   92290 ssh_runner.go:195] Run: openssl version
	I0916 10:48:15.759833   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:48:15.769434   92290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:48:15.773165   92290 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:48:15.773230   92290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:48:15.779715   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:48:15.788240   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:48:15.797426   92290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:48:15.801027   92290 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:48:15.801085   92290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:48:15.807655   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 10:48:15.816239   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:48:15.825289   92290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:48:15.829077   92290 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:48:15.829152   92290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:48:15.837916   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
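The <hash>.0 symlink names above come from OpenSSL's subject-hash lookup scheme: the hash printed by openssl x509 -hash becomes the link name, which is how the b5213941.0, 51391683.0, and 3ec20f2e.0 links in this log were chosen. For example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0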
	I0916 10:48:15.847075   92290 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:48:15.851329   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:48:15.858081   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:48:15.866428   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:48:15.874123   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:48:15.882176   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:48:15.889716   92290 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
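Each -checkend 86400 run above exits 0 only if the certificate remains valid for at least the next 86400 seconds (24 hours), which is how minikube decides whether a cert needs regenerating. A hand-run equivalent (path from this log):

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
        && echo "valid for 24h" || echo "expires within 24h"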
	I0916 10:48:15.897711   92290 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.31.1 containerd true true} ...
	I0916 10:48:15.897833   92290 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-770465-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:48:15.897869   92290 kube-vip.go:115] generating kube-vip config ...
	I0916 10:48:15.897916   92290 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:48:15.927102   92290 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:48:15.927153   92290 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
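Control-plane load-balancing was skipped above because lsmod found no ip_vs modules in this node. To check, or to load them on a host where that is possible (module names are the standard IPVS set; loading may not succeed inside a docker-driver node):

    lsmod | grep ip_vs || echo "ip_vs not loaded"
    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh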
	I0916 10:48:15.927195   92290 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:48:15.938129   92290 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:48:15.938198   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:48:15.949608   92290 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:48:15.967646   92290 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:48:15.985194   92290 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:48:16.001383   92290 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:48:16.004601   92290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:48:16.020761   92290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:48:16.109776   92290 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:48:16.120750   92290 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:48:16.121039   92290 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:48:16.123146   92290 out.go:177] * Verifying Kubernetes components...
	I0916 10:48:16.124359   92290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:48:16.208284   92290 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:48:16.219647   92290 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:48:16.219948   92290 kapi.go:59] client config for ha-770465: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:48:16.220005   92290 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 10:48:16.220187   92290 node_ready.go:35] waiting up to 6m0s for node "ha-770465-m03" to be "Ready" ...
	I0916 10:48:16.220278   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:16.220288   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:16.220298   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:16.220304   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:16.222844   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:16.223381   92290 node_ready.go:49] node "ha-770465-m03" has status "Ready":"True"
	I0916 10:48:16.223469   92290 node_ready.go:38] duration metric: took 3.194428ms for node "ha-770465-m03" to be "Ready" ...
	I0916 10:48:16.223505   92290 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:48:16.223588   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:48:16.223598   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:16.223605   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:16.223611   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:16.228350   92290 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:48:16.237026   92290 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:16.237139   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:16.237152   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:16.237162   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:16.237167   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:16.239730   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:16.240373   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:16.240389   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:16.240397   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:16.240402   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:16.242622   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
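The loop that follows repeats the two GETs above roughly every 500ms until the pod reports Ready. A hand-run equivalent of each probe (kubeconfig path from this log; the jsonpath form is an assumption for illustration):

    kubectl --kubeconfig /home/jenkins/minikube-integration/19651-3687/kubeconfig \
        -n kube-system get pod coredns-7c65d6cfc9-9lw9q \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    kubectl --kubeconfig /home/jenkins/minikube-integration/19651-3687/kubeconfig \
        get node ha-770465 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'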
	I0916 10:48:16.737401   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:16.737421   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:16.737429   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:16.737433   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:16.740120   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:16.740728   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:16.740745   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:16.740752   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:16.740756   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:16.742794   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:17.237604   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:17.237624   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:17.237632   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:17.237636   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:17.240476   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:17.241170   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:17.241186   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:17.241196   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:17.241201   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:17.243403   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:17.738257   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:17.738278   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:17.738288   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:17.738292   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:17.740925   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:17.741514   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:17.741529   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:17.741539   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:17.741544   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:17.743857   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:18.237878   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:18.237896   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:18.237902   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:18.237913   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:18.240633   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:18.241242   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:18.241259   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:18.241266   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:18.241270   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:18.243318   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:18.243789   92290 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:48:18.737742   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:18.737760   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:18.737768   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:18.737772   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:18.740720   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:18.741381   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:18.741396   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:18.741403   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:18.741407   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:18.743609   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:19.237443   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:19.237464   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:19.237472   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:19.237476   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:19.240280   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:19.240942   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:19.240957   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:19.240966   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:19.240972   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:19.243143   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:19.738110   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:19.738128   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:19.738136   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:19.738139   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:19.740970   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:19.741670   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:19.741686   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:19.741695   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:19.741701   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:19.743863   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:20.237708   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:20.237728   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:20.237735   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:20.237740   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:20.240289   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:20.240939   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:20.240957   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:20.240967   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:20.240976   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:20.243095   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:20.737955   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:20.737975   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:20.737983   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:20.737988   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:20.740676   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:20.741264   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:20.741283   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:20.741290   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:20.741294   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:20.743515   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:20.744052   92290 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:48:21.237383   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:21.237403   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:21.237411   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:21.237419   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:21.240431   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:21.241165   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:21.241184   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:21.241194   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:21.241200   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:21.243485   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:21.737232   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:21.737252   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:21.737260   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:21.737263   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:21.740086   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:21.740811   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:21.740832   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:21.740843   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:21.740850   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:21.743074   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:22.237896   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:22.237916   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:22.237923   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:22.237928   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:22.240581   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:22.241288   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:22.241308   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:22.241319   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:22.241328   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:22.243468   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:22.737248   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:22.737273   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:22.737283   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:22.737288   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:22.740013   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:22.740871   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:22.740890   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:22.740901   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:22.740905   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:22.742872   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:23.237942   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:23.237962   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:23.237970   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:23.237974   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:23.240451   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:23.241030   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:23.241046   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:23.241053   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:23.241056   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:23.243074   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:23.243493   92290 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:48:23.737881   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:23.737902   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:23.737910   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:23.737914   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:23.740545   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:23.741070   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:23.741084   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:23.741091   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:23.741094   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:23.743242   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:24.237685   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:24.237706   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:24.237713   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:24.237719   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:24.240236   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:24.240920   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:24.240934   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:24.240942   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:24.240947   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:24.243179   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:24.738000   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:24.738020   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:24.738028   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:24.738031   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:24.740762   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:24.741336   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:24.741349   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:24.741356   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:24.741361   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:24.743386   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:25.238220   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:25.238240   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:25.238247   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:25.238251   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:25.240825   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:25.241457   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:25.241477   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:25.241485   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:25.241489   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:25.243821   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:25.244317   92290 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:48:25.737588   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:25.737610   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:25.737618   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:25.737625   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:25.740499   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:25.741223   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:25.741244   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:25.741256   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:25.741261   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:25.743555   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:26.237307   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:26.237327   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:26.237335   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:26.237339   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:26.240004   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:26.240650   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:26.240664   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:26.240671   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:26.240676   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:26.242668   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:26.737460   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:26.737478   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:26.737486   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:26.737489   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:26.740188   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:26.740747   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:26.740763   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:26.740772   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:26.740777   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:26.742971   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:27.237350   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:27.237374   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:27.237382   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:27.237386   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:27.239946   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:27.240603   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:27.240619   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:27.240625   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:27.240628   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:27.242898   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:27.737702   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:27.737724   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:27.737732   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:27.737736   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:27.740442   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:27.741277   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:27.741296   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:27.741312   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:27.741318   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:27.743445   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:27.743917   92290 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:48:28.237340   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:28.237359   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:28.237366   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:28.237370   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:28.239908   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:28.240637   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:28.240655   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:28.240666   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:28.240672   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:28.242845   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:28.737652   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:28.737699   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:28.737709   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:28.737716   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:28.740386   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:28.740986   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:28.741004   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:28.741014   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:28.741018   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:28.743305   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:29.238102   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:29.238125   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:29.238136   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:29.238141   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:29.240765   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:29.241516   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:29.241534   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:29.241545   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:29.241552   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:29.244054   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:29.737982   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:29.738002   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:29.738009   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:29.738014   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:29.740871   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:29.741559   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:29.741575   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:29.741583   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:29.741587   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:29.743608   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:29.744044   92290 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:48:30.237350   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:30.237372   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:30.237383   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:30.237389   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:30.239986   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:30.240643   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:30.240659   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:30.240665   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:30.240671   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:30.242961   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:30.737852   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:30.737897   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:30.737908   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:30.737915   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:30.740751   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:30.741437   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:30.741457   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:30.741467   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:30.741474   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:30.743808   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:31.237611   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:31.237636   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:31.237647   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:31.237652   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:31.240647   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:31.241288   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:31.241303   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:31.241310   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:31.241321   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:31.243382   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:31.738204   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:31.738224   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:31.738231   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:31.738234   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:31.741073   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:31.741717   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:31.741732   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:31.741740   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:31.741744   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:31.743948   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:31.744976   92290 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:48:32.237257   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:32.237280   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:32.237288   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:32.237293   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:32.240165   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:32.240908   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:32.240924   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:32.240934   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:32.240943   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:32.243158   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:32.738179   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:32.738198   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:32.738206   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:32.738210   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:32.741046   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:32.741637   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:32.741653   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:32.741663   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:32.741669   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:32.743823   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:33.237956   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:33.237977   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:33.237985   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:33.237990   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:33.240663   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:33.241295   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:33.241312   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:33.241319   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:33.241323   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:33.243321   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:33.737926   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:33.737947   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:33.737954   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:33.737958   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:33.740962   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:33.741641   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:33.741658   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:33.741665   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:33.741669   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:33.744002   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:34.238022   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:34.238042   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:34.238053   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:34.238058   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:34.240829   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:34.241418   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:34.241434   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:34.241441   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:34.241446   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:34.243522   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:34.244015   92290 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:48:34.737217   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:34.737236   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:34.737244   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:34.737249   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:34.739959   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:34.740695   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:34.740712   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:34.740719   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:34.740723   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:34.742969   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:35.237908   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:35.237931   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:35.237939   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:35.237942   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:35.240931   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:35.241673   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:35.241689   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:35.241698   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:35.241702   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:35.243774   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:35.737584   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:35.737606   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:35.737614   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:35.737619   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:35.740458   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:35.741079   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:35.741095   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:35.741103   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:35.741107   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:35.743174   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:36.238010   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:36.238029   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:36.238036   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:36.238040   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:36.240452   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:36.241035   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:36.241050   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:36.241055   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:36.241059   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:36.243216   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:36.738117   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:36.738137   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:36.738146   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:36.738150   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:36.740938   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:36.741529   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:36.741544   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:36.741551   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:36.741555   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:36.743572   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:36.744019   92290 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:48:37.237300   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:37.237323   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:37.237331   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:37.237334   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:37.240090   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:37.240770   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:37.240788   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:37.240799   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:37.240804   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:37.242951   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:37.737780   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:37.737799   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:37.737806   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:37.737810   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:37.740369   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:37.741000   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:37.741016   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:37.741024   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:37.741032   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:37.743185   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:38.238199   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:38.238217   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:38.238224   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:38.238228   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:38.240896   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:38.241534   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:38.241550   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:38.241557   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:38.241562   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:38.243687   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:38.737462   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:38.737482   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:38.737490   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:38.737493   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:38.740169   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:38.740800   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:38.740820   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:38.740832   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:38.740838   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:38.742890   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:39.237677   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:39.237695   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:39.237707   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:39.237712   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:39.240185   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:39.240844   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:39.240858   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:39.240865   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:39.240868   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:39.242937   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:39.243321   92290 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:48:39.737971   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:39.737993   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:39.738005   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:39.738010   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:39.740626   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:39.741216   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:39.741231   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:39.741239   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:39.741244   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:39.743457   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:40.237310   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:40.237329   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:40.237337   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:40.237341   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:40.239842   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:40.240551   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:40.240566   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:40.240573   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:40.240577   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:40.242676   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:40.737497   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:40.737515   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:40.737523   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:40.737527   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:40.740377   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:40.740964   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:40.740980   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:40.740990   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:40.740993   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:40.743024   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:41.237754   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:41.237773   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:41.237781   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:41.237784   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:41.240488   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:41.241274   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:41.241291   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:41.241302   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:41.241308   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:41.243330   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:41.243797   92290 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:48:41.738162   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:41.738185   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:41.738197   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:41.738202   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:41.741110   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:41.741835   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:41.741853   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:41.741864   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:41.741870   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:41.743900   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:42.237691   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:42.237716   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:42.237726   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:42.237732   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:42.242220   92290 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:48:42.242898   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:42.242914   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:42.242921   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:42.242928   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:42.245167   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:42.737985   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:42.738004   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:42.738012   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:42.738017   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:42.740733   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:42.741332   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:42.741353   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:42.741361   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:42.741365   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:42.743408   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:43.237566   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:43.237588   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:43.237605   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:43.237612   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:43.240224   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:43.240863   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:43.240880   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:43.240890   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:43.240895   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:43.242857   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:43.738218   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:43.738239   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:43.738247   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:43.738251   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:43.740888   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:43.741452   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:43.741468   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:43.741475   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:43.741478   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:43.743601   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:43.744062   92290 pod_ready.go:93] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:43.744082   92290 pod_ready.go:82] duration metric: took 27.507024971s for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
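
That 27.5s span was one readiness poll: roughly every 500ms the client GETs the pod (and its hosting node) until the pod's Ready condition flips to True. A self-contained sketch of the same loop, assuming a reachable cluster via the default kubeconfig; the pod name and the 6m0s budget are taken from the trace, the rest is illustrative rather than minikube's actual pod_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True, the
    // condition the pod_ready.go messages above are tracking.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Mirror the "waiting up to 6m0s" budget from the trace.
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()

        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-9lw9q", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            if podReady(pod) {
                fmt.Printf("pod %q is Ready\n", pod.Name)
                return
            }
            select {
            case <-ctx.Done():
                panic("timed out waiting for pod to become Ready")
            case <-time.After(500 * time.Millisecond):
            }
        }
    }

The extra node GET on each iteration is what lets the wait bail out early when the hosting node itself is not Ready, as happens for kube-proxy-78l2l further down.
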
	I0916 10:48:43.744094   92290 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:43.744155   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:48:43.744164   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:43.744174   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:43.744181   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:43.746061   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:43.746562   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:43.746573   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:43.746581   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:43.746586   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:43.748428   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:43.748961   92290 pod_ready.go:93] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:43.748976   92290 pod_ready.go:82] duration metric: took 4.874482ms for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:43.748984   92290 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:43.749029   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465
	I0916 10:48:43.749037   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:43.749044   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:43.749049   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:43.750884   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:43.751349   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:43.751364   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:43.751373   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:43.751377   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:43.753110   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:43.753496   92290 pod_ready.go:93] pod "etcd-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:43.753511   92290 pod_ready.go:82] duration metric: took 4.521048ms for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:43.753520   92290 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:43.753563   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:48:43.753571   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:43.753578   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:43.753582   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:43.755455   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:43.756019   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:43.756037   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:43.756044   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:43.756047   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:43.757874   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:43.758237   92290 pod_ready.go:93] pod "etcd-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:43.758254   92290 pod_ready.go:82] duration metric: took 4.728563ms for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:43.758263   92290 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:43.758316   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:48:43.758324   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:43.758333   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:43.758336   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:43.760434   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:43.760878   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:43.760890   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:43.760897   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:43.760901   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:43.762813   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:43.763194   92290 pod_ready.go:93] pod "etcd-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:43.763210   92290 pod_ready.go:82] duration metric: took 4.940524ms for pod "etcd-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:43.763231   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:43.938632   92290 request.go:632] Waited for 175.319335ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465
	I0916 10:48:43.938688   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465
	I0916 10:48:43.938695   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:43.938705   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:43.938711   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:43.941618   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:44.138648   92290 request.go:632] Waited for 196.384054ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:44.138720   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:44.138729   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:44.138739   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:44.138745   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:44.141400   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:44.141875   92290 pod_ready.go:93] pod "kube-apiserver-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:44.141892   92290 pod_ready.go:82] duration metric: took 378.653802ms for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
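
The request.go:632 "Waited for ... due to client-side throttling" lines start here because the per-pod checks now fire back-to-back: client-go puts a token-bucket rate limiter in front of every request and logs any wait that exceeds roughly 50ms. A hedged sketch of where those knobs live; with QPS left at zero the client defaults to about 5 requests/s with a burst of 10, which matches the ~200ms waits between the paired pod/node GETs below. The values set here are illustrative, not what minikube uses:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // The client-side token bucket behind the "Waited for ..." messages.
        // At 5 QPS a token refills every 200ms, so once the initial burst is
        // spent each request queues for about that long, as seen in the log.
        config.QPS = 50    // illustrative: raise the steady-state rate
        config.Burst = 100 // illustrative: allow a larger initial burst

        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        if _, err := client.CoreV1().Nodes().Get(context.Background(), "ha-770465", metav1.GetOptions{}); err != nil {
            panic(err)
        }
    }
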
	I0916 10:48:44.141902   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:44.339065   92290 request.go:632] Waited for 197.075671ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m02
	I0916 10:48:44.339124   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m02
	I0916 10:48:44.339131   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:44.339141   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:44.339155   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:44.341967   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:44.539034   92290 request.go:632] Waited for 196.354134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:44.539124   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:44.539135   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:44.539145   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:44.539154   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:44.541789   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:44.542272   92290 pod_ready.go:93] pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:44.542289   92290 pod_ready.go:82] duration metric: took 400.380621ms for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:44.542299   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:44.738441   92290 request.go:632] Waited for 196.063589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:48:44.738520   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:48:44.738532   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:44.738543   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:44.738552   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:44.741397   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:44.938272   92290 request.go:632] Waited for 196.265166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:44.938321   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:44.938326   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:44.938336   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:44.938341   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:44.941272   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:44.941756   92290 pod_ready.go:93] pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:44.941778   92290 pod_ready.go:82] duration metric: took 399.473671ms for pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:44.941788   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:45.138844   92290 request.go:632] Waited for 196.99352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465
	I0916 10:48:45.138906   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465
	I0916 10:48:45.138912   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:45.138920   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:45.138928   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:45.141577   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:45.338587   92290 request.go:632] Waited for 196.371552ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:45.338692   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:45.338701   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:45.338709   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:45.338718   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:45.341387   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:45.341892   92290 pod_ready.go:93] pod "kube-controller-manager-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:45.341909   92290 pod_ready.go:82] duration metric: took 400.115654ms for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:45.341919   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:45.539078   92290 request.go:632] Waited for 197.079413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m02
	I0916 10:48:45.539129   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m02
	I0916 10:48:45.539150   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:45.539159   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:45.539165   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:45.542022   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:45.738930   92290 request.go:632] Waited for 196.349044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:45.738981   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:45.738985   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:45.738993   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:45.738999   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:45.741723   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:45.742197   92290 pod_ready.go:93] pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:45.742216   92290 pod_ready.go:82] duration metric: took 400.289795ms for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:45.742227   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:45.939242   92290 request.go:632] Waited for 196.948649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m03
	I0916 10:48:45.939343   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m03
	I0916 10:48:45.939368   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:45.939383   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:45.939392   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:45.942226   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:46.139174   92290 request.go:632] Waited for 196.34455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:46.139227   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:46.139232   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:46.139240   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:46.139245   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:46.141869   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:46.142388   92290 pod_ready.go:93] pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:46.142409   92290 pod_ready.go:82] duration metric: took 400.174402ms for pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:46.142424   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:46.338540   92290 request.go:632] Waited for 196.042369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:48:46.338625   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:48:46.338635   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:46.338644   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:46.338651   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:46.341456   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:46.538290   92290 request.go:632] Waited for 196.255862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:46.538361   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:46.538369   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:46.538377   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:46.538381   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:46.540752   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:46.541190   92290 pod_ready.go:93] pod "kube-proxy-4qgcs" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:46.541206   92290 pod_ready.go:82] duration metric: took 398.771875ms for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:46.541215   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-78l2l" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:46.738211   92290 request.go:632] Waited for 196.922207ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-78l2l
	I0916 10:48:46.738265   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-78l2l
	I0916 10:48:46.738270   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:46.738277   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:46.738280   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:46.741047   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:46.938974   92290 request.go:632] Waited for 197.349617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:48:46.939044   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:48:46.939050   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:46.939059   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:46.939067   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:46.941704   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:46.942174   92290 pod_ready.go:98] node "ha-770465-m04" hosting pod "kube-proxy-78l2l" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-770465-m04" has status "Ready":"Unknown"
	I0916 10:48:46.942195   92290 pod_ready.go:82] duration metric: took 400.96995ms for pod "kube-proxy-78l2l" in "kube-system" namespace to be "Ready" ...
	E0916 10:48:46.942204   92290 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-770465-m04" hosting pod "kube-proxy-78l2l" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-770465-m04" has status "Ready":"Unknown"
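
The skip above is the node-side guard in the wait loop: before trusting a pod's Ready condition, the check reads the kubelet-reported NodeReady condition off the hosting node, and ha-770465-m04 is reporting Unknown because its kubelet has stopped posting status. A minimal sketch of that condition check (the helper name is ours, not minikube's), operating on a Node already fetched as in the GETs above:

    package ready

    import corev1 "k8s.io/api/core/v1"

    // nodeReady reports whether the kubelet-reported Ready condition on the
    // node is True; both False and Unknown (the m04 case above) fail the check.
    func nodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }
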
	I0916 10:48:46.942213   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:47.138252   92290 request.go:632] Waited for 195.955086ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:48:47.138326   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:48:47.138332   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:47.138340   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:47.138345   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:47.141468   92290 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:48:47.338521   92290 request.go:632] Waited for 196.35977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:47.338576   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:47.338582   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:47.338593   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:47.338606   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:47.341108   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:47.341529   92290 pod_ready.go:93] pod "kube-proxy-gd2mt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:47.341546   92290 pod_ready.go:82] duration metric: took 399.32376ms for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:47.341556   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qlspc" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:47.538627   92290 request.go:632] Waited for 196.999722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlspc
	I0916 10:48:47.538712   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlspc
	I0916 10:48:47.538722   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:47.538730   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:47.538734   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:47.541393   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:47.738370   92290 request.go:632] Waited for 196.283636ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:47.738447   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:47.738457   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:47.738465   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:47.738469   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:47.741232   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:47.741679   92290 pod_ready.go:93] pod "kube-proxy-qlspc" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:47.741696   92290 pod_ready.go:82] duration metric: took 400.134443ms for pod "kube-proxy-qlspc" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:47.741705   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:47.938971   92290 request.go:632] Waited for 197.175032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:48:47.939041   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:48:47.939048   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:47.939061   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:47.939070   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:47.942132   92290 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:48:48.139141   92290 request.go:632] Waited for 196.372594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:48.139189   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:48.139194   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:48.139201   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:48.139205   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:48.141837   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:48.142340   92290 pod_ready.go:93] pod "kube-scheduler-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:48.142359   92290 pod_ready.go:82] duration metric: took 400.647288ms for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:48.142375   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:48.339297   92290 request.go:632] Waited for 196.829413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m02
	I0916 10:48:48.339360   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m02
	I0916 10:48:48.339365   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:48.339381   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:48.339387   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:48.342159   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:48.539059   92290 request.go:632] Waited for 196.347168ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:48.539112   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:48.539117   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:48.539124   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:48.539128   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:48.541696   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:48.542137   92290 pod_ready.go:93] pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:48.542153   92290 pod_ready.go:82] duration metric: took 399.769678ms for pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:48.542165   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:48.739257   92290 request.go:632] Waited for 197.010918ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m03
	I0916 10:48:48.739319   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m03
	I0916 10:48:48.739328   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:48.739336   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:48.739340   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:48.742113   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:48.938688   92290 request.go:632] Waited for 196.089785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:48.938765   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:48.938776   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:48.938785   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:48.938795   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:48.942085   92290 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:48:48.942660   92290 pod_ready.go:93] pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:48.942685   92290 pod_ready.go:82] duration metric: took 400.512162ms for pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:48.942698   92290 pod_ready.go:39] duration metric: took 32.719177062s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
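	(The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's token-bucket rate limiter: the rest.Config in this run leaves QPS and Burst at zero, so client-go falls back to its defaults of 5 QPS with a burst of 10, and each request blocks until a token is free. A minimal sketch of that mechanism using the same client-go helper; the QPS/burst values here are the library defaults, not values read from this log.)

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    func main() {
        // Same token bucket client-go installs when rest.Config leaves QPS/Burst unset.
        limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10) // 5 QPS, burst of 10

        for i := 0; i < 15; i++ {
            start := time.Now()
            limiter.Accept() // blocks until a token is available
            if wait := time.Since(start); wait > time.Millisecond {
                fmt.Printf("request %d waited %v due to client-side throttling\n", i, wait)
            }
        }
    }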
	I0916 10:48:48.942719   92290 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:48:48.942779   92290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:48:48.956063   92290 api_server.go:72] duration metric: took 32.835262528s to wait for apiserver process to appear ...
	I0916 10:48:48.956095   92290 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:48:48.956135   92290 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:48:48.960687   92290 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:48:48.960769   92290 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0916 10:48:48.960779   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:48.960793   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:48.960803   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:48.961851   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:48.961914   92290 api_server.go:141] control plane version: v1.31.1
	I0916 10:48:48.961928   92290 api_server.go:131] duration metric: took 5.826329ms to wait for apiserver health ...
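	(The healthz probe logged above is a plain HTTPS GET against the apiserver that treats a 200 response with body "ok" as healthy. A minimal standalone sketch of such a probe; skipping TLS verification is an illustrative shortcut, since the real client authenticates with the cluster CA and client certificates.)

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz performs the kind of probe shown above: GET /healthz,
    // expect HTTP 200 and the literal body "ok".
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://192.168.49.2:8443/healthz"); err != nil {
            fmt.Println("unhealthy:", err)
            return
        }
        fmt.Println("ok")
    }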
	I0916 10:48:48.961936   92290 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:48:49.139238   92290 request.go:632] Waited for 177.223908ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:48:49.139313   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:48:49.139321   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:49.139331   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:49.139340   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:49.145011   92290 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:48:49.152580   92290 system_pods.go:59] 26 kube-system pods found
	I0916 10:48:49.152612   92290 system_pods.go:61] "coredns-7c65d6cfc9-9lw9q" [4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8] Running
	I0916 10:48:49.152620   92290 system_pods.go:61] "coredns-7c65d6cfc9-sbs22" [89925692-76b4-481f-bac7-16f06bea792a] Running
	I0916 10:48:49.152625   92290 system_pods.go:61] "etcd-ha-770465" [041e8a27-b6a3-4201-990f-c367da8c5eb5] Running
	I0916 10:48:49.152629   92290 system_pods.go:61] "etcd-ha-770465-m02" [1f922c60-bf68-44df-aa8c-42dce50e4dac] Running
	I0916 10:48:49.152633   92290 system_pods.go:61] "etcd-ha-770465-m03" [876ca26f-ca18-47df-9bfb-302321d66136] Running
	I0916 10:48:49.152636   92290 system_pods.go:61] "kindnet-66kfj" [9cd2abf1-a30e-40f4-af5c-59d50e367b36] Running
	I0916 10:48:49.152639   92290 system_pods.go:61] "kindnet-bflwn" [59d75712-5683-4b1c-a6ef-2a669d75da7a] Running
	I0916 10:48:49.152643   92290 system_pods.go:61] "kindnet-grjh8" [98658171-1486-44a9-8a20-0d77ea019206] Running
	I0916 10:48:49.152646   92290 system_pods.go:61] "kindnet-kht59" [3124d723-6fc6-4dce-9ecf-bb40b4665208] Running
	I0916 10:48:49.152649   92290 system_pods.go:61] "kube-apiserver-ha-770465" [aa8ba784-661f-4074-b0d3-f98cddf8ce8c] Running
	I0916 10:48:49.152652   92290 system_pods.go:61] "kube-apiserver-ha-770465-m02" [7acdd845-890e-449a-b112-2efbc19a7ef5] Running
	I0916 10:48:49.152655   92290 system_pods.go:61] "kube-apiserver-ha-770465-m03" [8ab3c55a-d726-456c-803b-e13c1032b13a] Running
	I0916 10:48:49.152659   92290 system_pods.go:61] "kube-controller-manager-ha-770465" [f0a59ee1-bf5e-432d-a185-f9497c75ada4] Running
	I0916 10:48:49.152663   92290 system_pods.go:61] "kube-controller-manager-ha-770465-m02" [55d7421b-5679-416d-a6ae-c36738c1ec32] Running
	I0916 10:48:49.152666   92290 system_pods.go:61] "kube-controller-manager-ha-770465-m03" [37613674-2ba4-4425-824a-d00d16ea7b62] Running
	I0916 10:48:49.152669   92290 system_pods.go:61] "kube-proxy-4qgcs" [0dbed632-c44c-4a17-8486-b328e2cc41d3] Running
	I0916 10:48:49.152673   92290 system_pods.go:61] "kube-proxy-78l2l" [2b7f1ea3-9b2d-46d4-aa98-951e1c246baa] Running
	I0916 10:48:49.152676   92290 system_pods.go:61] "kube-proxy-gd2mt" [fc3bf04d-635a-4264-883b-2fd72cac2e24] Running
	I0916 10:48:49.152681   92290 system_pods.go:61] "kube-proxy-qlspc" [703b3f60-1294-49fd-aab1-35474b771351] Running
	I0916 10:48:49.152686   92290 system_pods.go:61] "kube-scheduler-ha-770465" [85f95d4d-a2d4-45cf-8afe-be446333b9d0] Running
	I0916 10:48:49.152689   92290 system_pods.go:61] "kube-scheduler-ha-770465-m02" [ec894fe5-a992-49f2-a96f-caab3848f7dd] Running
	I0916 10:48:49.152692   92290 system_pods.go:61] "kube-scheduler-ha-770465-m03" [0c6e312a-4eca-4ca3-b0d9-d70f7bf5087c] Running
	I0916 10:48:49.152694   92290 system_pods.go:61] "kube-vip-ha-770465" [bf294b8a-9d09-473e-964e-b776614e2969] Running
	I0916 10:48:49.152697   92290 system_pods.go:61] "kube-vip-ha-770465-m02" [ffd7ee7b-efe1-4d2d-b2f8-5fd898de7dbf] Running
	I0916 10:48:49.152700   92290 system_pods.go:61] "kube-vip-ha-770465-m03" [408226b8-b640-4b83-bd7e-a61b37724197] Running
	I0916 10:48:49.152706   92290 system_pods.go:61] "storage-provisioner" [cf470925-4874-4744-8015-700e93ab924f] Running
	I0916 10:48:49.152711   92290 system_pods.go:74] duration metric: took 190.767588ms to wait for pod list to return data ...
	I0916 10:48:49.152721   92290 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:48:49.339141   92290 request.go:632] Waited for 186.350342ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:48:49.339191   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:48:49.339196   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:49.339203   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:49.339208   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:49.342349   92290 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:48:49.342477   92290 default_sa.go:45] found service account: "default"
	I0916 10:48:49.342497   92290 default_sa.go:55] duration metric: took 189.770215ms for default service account to be created ...
	I0916 10:48:49.342507   92290 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:48:49.539064   92290 request.go:632] Waited for 196.491406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:48:49.539119   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:48:49.539125   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:49.539135   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:49.539139   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:49.544082   92290 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:48:49.551083   92290 system_pods.go:86] 26 kube-system pods found
	I0916 10:48:49.551117   92290 system_pods.go:89] "coredns-7c65d6cfc9-9lw9q" [4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8] Running
	I0916 10:48:49.551126   92290 system_pods.go:89] "coredns-7c65d6cfc9-sbs22" [89925692-76b4-481f-bac7-16f06bea792a] Running
	I0916 10:48:49.551131   92290 system_pods.go:89] "etcd-ha-770465" [041e8a27-b6a3-4201-990f-c367da8c5eb5] Running
	I0916 10:48:49.551137   92290 system_pods.go:89] "etcd-ha-770465-m02" [1f922c60-bf68-44df-aa8c-42dce50e4dac] Running
	I0916 10:48:49.551143   92290 system_pods.go:89] "etcd-ha-770465-m03" [876ca26f-ca18-47df-9bfb-302321d66136] Running
	I0916 10:48:49.551148   92290 system_pods.go:89] "kindnet-66kfj" [9cd2abf1-a30e-40f4-af5c-59d50e367b36] Running
	I0916 10:48:49.551153   92290 system_pods.go:89] "kindnet-bflwn" [59d75712-5683-4b1c-a6ef-2a669d75da7a] Running
	I0916 10:48:49.551159   92290 system_pods.go:89] "kindnet-grjh8" [98658171-1486-44a9-8a20-0d77ea019206] Running
	I0916 10:48:49.551164   92290 system_pods.go:89] "kindnet-kht59" [3124d723-6fc6-4dce-9ecf-bb40b4665208] Running
	I0916 10:48:49.551169   92290 system_pods.go:89] "kube-apiserver-ha-770465" [aa8ba784-661f-4074-b0d3-f98cddf8ce8c] Running
	I0916 10:48:49.551178   92290 system_pods.go:89] "kube-apiserver-ha-770465-m02" [7acdd845-890e-449a-b112-2efbc19a7ef5] Running
	I0916 10:48:49.551219   92290 system_pods.go:89] "kube-apiserver-ha-770465-m03" [8ab3c55a-d726-456c-803b-e13c1032b13a] Running
	I0916 10:48:49.551230   92290 system_pods.go:89] "kube-controller-manager-ha-770465" [f0a59ee1-bf5e-432d-a185-f9497c75ada4] Running
	I0916 10:48:49.551236   92290 system_pods.go:89] "kube-controller-manager-ha-770465-m02" [55d7421b-5679-416d-a6ae-c36738c1ec32] Running
	I0916 10:48:49.551243   92290 system_pods.go:89] "kube-controller-manager-ha-770465-m03" [37613674-2ba4-4425-824a-d00d16ea7b62] Running
	I0916 10:48:49.551247   92290 system_pods.go:89] "kube-proxy-4qgcs" [0dbed632-c44c-4a17-8486-b328e2cc41d3] Running
	I0916 10:48:49.551253   92290 system_pods.go:89] "kube-proxy-78l2l" [2b7f1ea3-9b2d-46d4-aa98-951e1c246baa] Running
	I0916 10:48:49.551258   92290 system_pods.go:89] "kube-proxy-gd2mt" [fc3bf04d-635a-4264-883b-2fd72cac2e24] Running
	I0916 10:48:49.551265   92290 system_pods.go:89] "kube-proxy-qlspc" [703b3f60-1294-49fd-aab1-35474b771351] Running
	I0916 10:48:49.551270   92290 system_pods.go:89] "kube-scheduler-ha-770465" [85f95d4d-a2d4-45cf-8afe-be446333b9d0] Running
	I0916 10:48:49.551276   92290 system_pods.go:89] "kube-scheduler-ha-770465-m02" [ec894fe5-a992-49f2-a96f-caab3848f7dd] Running
	I0916 10:48:49.551282   92290 system_pods.go:89] "kube-scheduler-ha-770465-m03" [0c6e312a-4eca-4ca3-b0d9-d70f7bf5087c] Running
	I0916 10:48:49.551298   92290 system_pods.go:89] "kube-vip-ha-770465" [bf294b8a-9d09-473e-964e-b776614e2969] Running
	I0916 10:48:49.551303   92290 system_pods.go:89] "kube-vip-ha-770465-m02" [ffd7ee7b-efe1-4d2d-b2f8-5fd898de7dbf] Running
	I0916 10:48:49.551308   92290 system_pods.go:89] "kube-vip-ha-770465-m03" [408226b8-b640-4b83-bd7e-a61b37724197] Running
	I0916 10:48:49.551313   92290 system_pods.go:89] "storage-provisioner" [cf470925-4874-4744-8015-700e93ab924f] Running
	I0916 10:48:49.551321   92290 system_pods.go:126] duration metric: took 208.808467ms to wait for k8s-apps to be running ...
	I0916 10:48:49.551333   92290 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:48:49.551390   92290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:48:49.564357   92290 system_svc.go:56] duration metric: took 13.006293ms WaitForService to wait for kubelet
	I0916 10:48:49.564385   92290 kubeadm.go:582] duration metric: took 33.443591033s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:48:49.564404   92290 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:48:49.738838   92290 request.go:632] Waited for 174.357975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:48:49.738912   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:48:49.738921   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:49.738928   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:49.738934   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:49.742200   92290 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:48:49.743251   92290 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:48:49.743272   92290 node_conditions.go:123] node cpu capacity is 8
	I0916 10:48:49.743284   92290 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:48:49.743288   92290 node_conditions.go:123] node cpu capacity is 8
	I0916 10:48:49.743291   92290 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:48:49.743294   92290 node_conditions.go:123] node cpu capacity is 8
	I0916 10:48:49.743298   92290 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:48:49.743304   92290 node_conditions.go:123] node cpu capacity is 8
	I0916 10:48:49.743307   92290 node_conditions.go:105] duration metric: took 178.899403ms to run NodePressure ...
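	(The NodePressure lines read each node's capacity straight off the node objects: four nodes, each reporting 8 CPUs and 304681132Ki of ephemeral storage. A short client-go sketch that lists the same fields; the kubeconfig path is the one this run logs further below.)

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("",
            "/home/jenkins/minikube-integration/19651-3687/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, node := range nodes.Items {
            // Capacity is a v1.ResourceList; Cpu() and StorageEphemeral() are its helpers.
            fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n",
                node.Name,
                node.Status.Capacity.Cpu().String(),
                node.Status.Capacity.StorageEphemeral().String())
        }
    }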
	I0916 10:48:49.743319   92290 start.go:241] waiting for startup goroutines ...
	I0916 10:48:49.743346   92290 start.go:255] writing updated cluster config ...
	I0916 10:48:49.745587   92290 out.go:201] 
	I0916 10:48:49.746997   92290 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:48:49.747097   92290 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:48:49.748664   92290 out.go:177] * Starting "ha-770465-m04" worker node in "ha-770465" cluster
	I0916 10:48:49.749740   92290 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:48:49.750988   92290 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:48:49.752133   92290 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:48:49.752160   92290 cache.go:56] Caching tarball of preloaded images
	I0916 10:48:49.752161   92290 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:48:49.752271   92290 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:48:49.752285   92290 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:48:49.752416   92290 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	W0916 10:48:49.772379   92290 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:48:49.772400   92290 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:48:49.772479   92290 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:48:49.772493   92290 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:48:49.772499   92290 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:48:49.772506   92290 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:48:49.772514   92290 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:48:49.773718   92290 image.go:273] response: 
	I0916 10:48:49.828837   92290 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:48:49.828875   92290 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:48:49.828913   92290 start.go:360] acquireMachinesLock for ha-770465-m04: {Name:mkc3281f68e01da8fba52f5dc70804d02e52876e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:48:49.828981   92290 start.go:364] duration metric: took 48.675µs to acquireMachinesLock for "ha-770465-m04"
	I0916 10:48:49.829005   92290 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:48:49.829012   92290 fix.go:54] fixHost starting: m04
	I0916 10:48:49.829252   92290 cli_runner.go:164] Run: docker container inspect ha-770465-m04 --format={{.State.Status}}
	I0916 10:48:49.847265   92290 fix.go:112] recreateIfNeeded on ha-770465-m04: state=Stopped err=<nil>
	W0916 10:48:49.847295   92290 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:48:49.849608   92290 out.go:177] * Restarting existing docker container for "ha-770465-m04" ...
	I0916 10:48:49.851233   92290 cli_runner.go:164] Run: docker start ha-770465-m04
	I0916 10:48:50.129208   92290 cli_runner.go:164] Run: docker container inspect ha-770465-m04 --format={{.State.Status}}
	I0916 10:48:50.148046   92290 kic.go:430] container "ha-770465-m04" state is running.
	I0916 10:48:50.148408   92290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m04
	I0916 10:48:50.167098   92290 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:48:50.167381   92290 machine.go:93] provisionDockerMachine start ...
	I0916 10:48:50.167450   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:48:50.186528   92290 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:50.186763   92290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0916 10:48:50.186779   92290 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:48:50.187595   92290 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41110->127.0.0.1:32828: read: connection reset by peer
	I0916 10:48:53.319440   92290 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m04
	
	I0916 10:48:53.319469   92290 ubuntu.go:169] provisioning hostname "ha-770465-m04"
	I0916 10:48:53.319539   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:48:53.345311   92290 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:53.345522   92290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0916 10:48:53.345545   92290 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-770465-m04 && echo "ha-770465-m04" | sudo tee /etc/hostname
	I0916 10:48:53.491173   92290 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m04
	
	I0916 10:48:53.491260   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:48:53.508842   92290 main.go:141] libmachine: Using SSH client type: native
	I0916 10:48:53.509048   92290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0916 10:48:53.509073   92290 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-770465-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-770465-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-770465-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:48:53.647951   92290 main.go:141] libmachine: SSH cmd err, output: <nil>: 
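	(The provisioning step above dials SSH to the forwarded port 127.0.0.1:32828, tolerates an initial "connection reset by peer" while the freshly restarted container comes up, and then runs the hostname commands. A sketch of that dial-with-retry pattern using golang.org/x/crypto/ssh; the address and key path are taken from the log, while the retry policy is an assumption.)

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH retries the dial (the first attempt after docker start may be
    // reset, as in the log above), then runs a single command.
    func runOverSSH(addr, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
            Timeout:         5 * time.Second,
        }
        var client *ssh.Client
        for attempt := 0; attempt < 5; attempt++ {
            client, err = ssh.Dial("tcp", addr, cfg)
            if err == nil {
                break
            }
            time.Sleep(time.Second) // e.g. "connection reset by peer" right after docker start
        }
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.Output(cmd)
        return string(out), err
    }

    func main() {
        out, err := runOverSSH("127.0.0.1:32828",
            "/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m04/id_rsa",
            "hostname")
        fmt.Println(out, err)
    }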
	I0916 10:48:53.647978   92290 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:48:53.647995   92290 ubuntu.go:177] setting up certificates
	I0916 10:48:53.648004   92290 provision.go:84] configureAuth start
	I0916 10:48:53.648051   92290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m04
	I0916 10:48:53.665986   92290 provision.go:143] copyHostCerts
	I0916 10:48:53.666018   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:48:53.666068   92290 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:48:53.666078   92290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:48:53.666143   92290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:48:53.666248   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:48:53.666267   92290 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:48:53.666273   92290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:48:53.666299   92290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:48:53.666341   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:48:53.666357   92290 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:48:53.666363   92290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:48:53.666390   92290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:48:53.666437   92290 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.ha-770465-m04 san=[127.0.0.1 192.168.49.5 ha-770465-m04 localhost minikube]
	I0916 10:48:53.774138   92290 provision.go:177] copyRemoteCerts
	I0916 10:48:53.774193   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:48:53.774227   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:48:53.791511   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m04/id_rsa Username:docker}
	I0916 10:48:53.888281   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:48:53.888349   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:48:53.910092   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:48:53.910149   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:48:53.932234   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:48:53.932355   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:48:53.954793   92290 provision.go:87] duration metric: took 306.776883ms to configureAuth
	I0916 10:48:53.954817   92290 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:48:53.955024   92290 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:48:53.955038   92290 machine.go:96] duration metric: took 3.787643s to provisionDockerMachine
	I0916 10:48:53.955045   92290 start.go:293] postStartSetup for "ha-770465-m04" (driver="docker")
	I0916 10:48:53.955054   92290 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:48:53.955093   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:48:53.955127   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:48:53.973384   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m04/id_rsa Username:docker}
	I0916 10:48:54.072615   92290 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:48:54.075898   92290 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:48:54.075924   92290 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:48:54.075935   92290 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:48:54.075943   92290 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:48:54.075955   92290 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:48:54.076019   92290 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:48:54.076111   92290 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:48:54.076121   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:48:54.076239   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:48:54.084641   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:48:54.107297   92290 start.go:296] duration metric: took 152.237248ms for postStartSetup
	I0916 10:48:54.107393   92290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:48:54.107488   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:48:54.124693   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m04/id_rsa Username:docker}
	I0916 10:48:54.216606   92290 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:48:54.220902   92290 fix.go:56] duration metric: took 4.391884422s for fixHost
	I0916 10:48:54.220930   92290 start.go:83] releasing machines lock for "ha-770465-m04", held for 4.391937146s
	I0916 10:48:54.220999   92290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m04
	I0916 10:48:54.240035   92290 out.go:177] * Found network options:
	I0916 10:48:54.241495   92290 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W0916 10:48:54.242754   92290 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:48:54.242773   92290 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:48:54.242781   92290 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:48:54.242799   92290 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:48:54.242810   92290 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:48:54.242826   92290 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:48:54.242889   92290 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:48:54.242935   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:48:54.242944   92290 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:48:54.242991   92290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:48:54.260812   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m04/id_rsa Username:docker}
	I0916 10:48:54.261325   92290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32828 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m04/id_rsa Username:docker}
	I0916 10:48:54.428872   92290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:48:54.446844   92290 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:48:54.446945   92290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:48:54.455409   92290 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:48:54.455437   92290 start.go:495] detecting cgroup driver to use...
	I0916 10:48:54.455472   92290 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:48:54.455530   92290 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:48:54.467295   92290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:48:54.478802   92290 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:48:54.478855   92290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:48:54.491544   92290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:48:54.502397   92290 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:48:54.582296   92290 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:48:54.662538   92290 docker.go:233] disabling docker service ...
	I0916 10:48:54.662608   92290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:48:54.674993   92290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:48:54.686211   92290 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:48:54.767161   92290 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:48:54.842265   92290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:48:54.853234   92290 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:48:54.869246   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:48:54.879681   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:48:54.889933   92290 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:48:54.889998   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:48:54.899987   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:48:54.909419   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:48:54.918566   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:48:54.928151   92290 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:48:54.938103   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:48:54.948584   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:48:54.958794   92290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:48:54.969264   92290 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:48:54.977653   92290 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:48:54.986198   92290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:48:55.068803   92290 ssh_runner.go:195] Run: sudo systemctl restart containerd
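	(The sed pipeline above edits /etc/containerd/config.toml in place before restarting containerd. For orientation, an illustrative fragment of the settings those expressions leave behind; the field names and values are the ones the commands target, while the surrounding table layout is assumed from containerd 1.7's CRI plugin config and may not match the image's file exactly.)

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      sandbox_image = "registry.k8s.io/pause:3.10"
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false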
	I0916 10:48:55.186371   92290 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:48:55.186423   92290 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:48:55.189993   92290 start.go:563] Will wait 60s for crictl version
	I0916 10:48:55.190081   92290 ssh_runner.go:195] Run: which crictl
	I0916 10:48:55.193321   92290 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:48:55.230264   92290 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:48:55.230341   92290 ssh_runner.go:195] Run: containerd --version
	I0916 10:48:55.254716   92290 ssh_runner.go:195] Run: containerd --version
	I0916 10:48:55.278690   92290 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:48:55.280665   92290 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:48:55.282210   92290 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 10:48:55.283731   92290 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I0916 10:48:55.285083   92290 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:48:55.302747   92290 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:48:55.306492   92290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:48:55.317361   92290 mustload.go:65] Loading cluster: ha-770465
	I0916 10:48:55.317581   92290 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:48:55.317783   92290 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:48:55.335369   92290 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:48:55.335703   92290 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465 for IP: 192.168.49.5
	I0916 10:48:55.335722   92290 certs.go:194] generating shared ca certs ...
	I0916 10:48:55.335810   92290 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:48:55.335949   92290 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:48:55.335992   92290 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:48:55.336010   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:48:55.336028   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:48:55.336045   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:48:55.336064   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:48:55.336134   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:48:55.336176   92290 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:48:55.336187   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:48:55.336220   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:48:55.336252   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:48:55.336281   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:48:55.336342   92290 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:48:55.336380   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:48:55.336399   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:48:55.336416   92290 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:48:55.336443   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:48:55.362024   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:48:55.387341   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:48:55.411947   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:48:55.435923   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:48:55.459820   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:48:55.483923   92290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:48:55.508583   92290 ssh_runner.go:195] Run: openssl version
	I0916 10:48:55.513897   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:48:55.522969   92290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:48:55.526668   92290 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:48:55.526718   92290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:48:55.534088   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:48:55.543218   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:48:55.552119   92290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:48:55.555371   92290 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:48:55.555429   92290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:48:55.561835   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:48:55.570205   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:48:55.579387   92290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:48:55.582758   92290 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:48:55.582807   92290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:48:55.589419   92290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 10:48:55.598076   92290 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:48:55.601311   92290 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:48:55.601350   92290 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.1  false true} ...
	I0916 10:48:55.601428   92290 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-770465-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:48:55.601472   92290 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:48:55.609523   92290 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:48:55.609583   92290 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 10:48:55.618040   92290 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:48:55.634214   92290 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:48:55.651571   92290 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:48:55.654874   92290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:48:55.665519   92290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:48:55.751128   92290 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:48:55.762744   92290 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0916 10:48:55.762971   92290 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:48:55.765069   92290 out.go:177] * Verifying Kubernetes components...
	I0916 10:48:55.766748   92290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:48:55.852550   92290 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:48:55.864215   92290 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:48:55.864539   92290 kapi.go:59] client config for ha-770465: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:48:55.864617   92290 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
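The warning above shows the client being repointed from the stale HA VIP to a control plane that actually answers. A minimal sketch of that override on a client-go rest.Config (addresses and kubeconfig path taken from the log; the check itself is illustrative):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3687/kubeconfig")
	if err != nil {
		panic(err)
	}
	stale, reachable := "https://192.168.49.254:8443", "https://192.168.49.2:8443"
	if cfg.Host == stale {
		cfg.Host = reachable // fall back to a node that is actually serving
	}
	fmt.Println("API host:", cfg.Host)
}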
	I0916 10:48:55.864884   92290 node_ready.go:35] waiting up to 6m0s for node "ha-770465-m04" to be "Ready" ...
	I0916 10:48:55.865015   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:48:55.865028   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:55.865038   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:55.865047   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:55.867609   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:56.365472   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:48:56.365495   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:56.365506   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:56.365511   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:56.368216   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:56.866073   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:48:56.866095   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:56.866105   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:56.866111   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:56.868714   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:57.365518   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:48:57.365540   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:57.365549   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:57.365554   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:57.368247   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:57.865776   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:48:57.865797   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:57.865806   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:57.865815   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:57.868206   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:57.868738   92290 node_ready.go:49] node "ha-770465-m04" has status "Ready":"True"
	I0916 10:48:57.868758   92290 node_ready.go:38] duration metric: took 2.003855226s for node "ha-770465-m04" to be "Ready" ...
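The GET loop above is a readiness poll: fetch the node repeatedly until its Ready condition reports True or the 6m0s budget runs out. A compact client-go equivalent, assuming a kubeconfig at the default location (a sketch, not the harness's actual code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-770465-m04", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("node ready wait finished, err =", err)
}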
	I0916 10:48:57.868768   92290 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods, including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler], to be "Ready" ...
	I0916 10:48:57.868847   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:48:57.868862   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:57.868873   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:57.868880   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:57.874301   92290 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:48:57.881278   92290 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:57.881361   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:48:57.881370   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:57.881381   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:57.881389   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:57.883780   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:57.884287   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:57.884301   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:57.884308   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:57.884314   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:57.886507   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:57.886989   92290 pod_ready.go:93] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:57.887012   92290 pod_ready.go:82] duration metric: took 5.70844ms for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:57.887027   92290 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:57.887099   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:48:57.887111   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:57.887121   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:57.887128   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:57.889308   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:57.889866   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:57.889878   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:57.889885   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:57.889889   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:57.891516   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:57.891927   92290 pod_ready.go:93] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:57.891944   92290 pod_ready.go:82] duration metric: took 4.910453ms for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:57.891952   92290 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:57.892015   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465
	I0916 10:48:57.892026   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:57.892036   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:57.892043   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:57.893943   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:57.894487   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:57.894503   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:57.894511   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:57.894515   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:57.896219   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:57.896691   92290 pod_ready.go:93] pod "etcd-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:57.896710   92290 pod_ready.go:82] duration metric: took 4.750226ms for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:57.896722   92290 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:57.896781   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:48:57.896792   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:57.896801   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:57.896811   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:57.898558   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:57.898999   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:57.899012   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:57.899019   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:57.899024   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:57.900802   92290 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:48:57.901217   92290 pod_ready.go:93] pod "etcd-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:57.901233   92290 pod_ready.go:82] duration metric: took 4.50169ms for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:57.901242   92290 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:58.066446   92290 request.go:632] Waited for 165.143696ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:48:58.066500   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:48:58.066508   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:58.066516   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:58.066519   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:58.069401   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:58.266405   92290 request.go:632] Waited for 196.366574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:58.266471   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:58.266486   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:58.266499   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:58.266508   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:58.269221   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:58.269747   92290 pod_ready.go:93] pod "etcd-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:58.269768   92290 pod_ready.go:82] duration metric: took 368.520451ms for pod "etcd-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
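The "Waited ... due to client-side throttling" lines that begin here come from client-go's own rate limiter, not from API priority and fairness: with QPS and Burst left unset, the client falls back to its defaults (about 5 QPS with a burst of 10), so back-to-back GETs queue. A minimal sketch of raising those limits (values illustrative):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // allow ~50 requests/second before queueing
	cfg.Burst = 100 // and short bursts above that
	fmt.Printf("QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
}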
	I0916 10:48:58.269788   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:58.466820   92290 request.go:632] Waited for 196.946731ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465
	I0916 10:48:58.466876   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465
	I0916 10:48:58.466881   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:58.466887   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:58.466891   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:58.469720   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:58.665891   92290 request.go:632] Waited for 195.268613ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:58.665953   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:58.665960   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:58.665970   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:58.665976   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:58.668689   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:58.669167   92290 pod_ready.go:93] pod "kube-apiserver-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:58.669187   92290 pod_ready.go:82] duration metric: took 399.391342ms for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:58.669201   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:58.866178   92290 request.go:632] Waited for 196.874021ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m02
	I0916 10:48:58.866244   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m02
	I0916 10:48:58.866252   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:58.866266   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:58.866277   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:58.868863   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:59.065857   92290 request.go:632] Waited for 196.281767ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:59.065916   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:48:59.065921   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:59.065928   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:59.065932   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:59.068674   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:59.069303   92290 pod_ready.go:93] pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:59.069325   92290 pod_ready.go:82] duration metric: took 400.116ms for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:59.069335   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:59.266433   92290 request.go:632] Waited for 197.005241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:48:59.266484   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:48:59.266489   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:59.266497   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:59.266500   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:59.269432   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:59.466110   92290 request.go:632] Waited for 196.051101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:59.466193   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:48:59.466205   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:59.466216   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:59.466225   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:59.468795   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:59.469364   92290 pod_ready.go:93] pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:59.469385   92290 pod_ready.go:82] duration metric: took 400.042149ms for pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:59.469399   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:59.666582   92290 request.go:632] Waited for 197.077775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465
	I0916 10:48:59.666680   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465
	I0916 10:48:59.666693   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:59.666703   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:59.666712   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:59.669930   92290 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:48:59.865964   92290 request.go:632] Waited for 195.279321ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:59.866050   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:48:59.866060   92290 round_trippers.go:469] Request Headers:
	I0916 10:48:59.866068   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:48:59.866074   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:48:59.869099   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:48:59.869703   92290 pod_ready.go:93] pod "kube-controller-manager-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:48:59.869728   92290 pod_ready.go:82] duration metric: took 400.320181ms for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:48:59.869744   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:00.066723   92290 request.go:632] Waited for 196.896337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m02
	I0916 10:49:00.066779   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m02
	I0916 10:49:00.066784   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:00.066792   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:00.066798   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:00.069555   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:00.266558   92290 request.go:632] Waited for 196.385293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:49:00.266614   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:49:00.266620   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:00.266626   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:00.266630   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:00.269186   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:00.269720   92290 pod_ready.go:93] pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:00.269741   92290 pod_ready.go:82] duration metric: took 399.989809ms for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:00.269752   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:00.466728   92290 request.go:632] Waited for 196.906547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m03
	I0916 10:49:00.466802   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m03
	I0916 10:49:00.466812   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:00.466818   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:00.466821   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:00.469559   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:00.666415   92290 request.go:632] Waited for 196.22988ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:49:00.666493   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:49:00.666499   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:00.666507   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:00.666513   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:00.669232   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:00.669685   92290 pod_ready.go:93] pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:00.669706   92290 pod_ready.go:82] duration metric: took 399.942842ms for pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:00.669719   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:00.866637   92290 request.go:632] Waited for 196.828268ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:49:00.866695   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:49:00.866700   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:00.866707   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:00.866711   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:00.869688   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:01.066564   92290 request.go:632] Waited for 196.344085ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:49:01.066643   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:49:01.066656   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:01.066666   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:01.066675   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:01.069372   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:01.069822   92290 pod_ready.go:93] pod "kube-proxy-4qgcs" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:01.069840   92290 pod_ready.go:82] duration metric: took 400.113171ms for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:01.069849   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-78l2l" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:01.265862   92290 request.go:632] Waited for 195.921163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-78l2l
	I0916 10:49:01.265923   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-78l2l
	I0916 10:49:01.265931   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:01.265941   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:01.265947   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:01.269004   92290 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:49:01.465996   92290 request.go:632] Waited for 196.294106ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:49:01.466075   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:49:01.466080   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:01.466088   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:01.466095   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:01.469012   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:01.469488   92290 pod_ready.go:93] pod "kube-proxy-78l2l" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:01.469505   92290 pod_ready.go:82] duration metric: took 399.648946ms for pod "kube-proxy-78l2l" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:01.469515   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:01.666678   92290 request.go:632] Waited for 197.088992ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:49:01.666757   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:49:01.666763   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:01.666770   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:01.666778   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:01.669528   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:01.866594   92290 request.go:632] Waited for 196.358524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:49:01.866681   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:49:01.866693   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:01.866704   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:01.866713   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:01.869574   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:01.870031   92290 pod_ready.go:93] pod "kube-proxy-gd2mt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:01.870050   92290 pod_ready.go:82] duration metric: took 400.526477ms for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:01.870064   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qlspc" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:02.066202   92290 request.go:632] Waited for 196.049396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlspc
	I0916 10:49:02.066255   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlspc
	I0916 10:49:02.066262   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:02.066272   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:02.066277   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:02.069075   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:02.265957   92290 request.go:632] Waited for 196.274584ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:49:02.266034   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:49:02.266042   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:02.266050   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:02.266056   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:02.268729   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:02.269216   92290 pod_ready.go:93] pod "kube-proxy-qlspc" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:02.269240   92290 pod_ready.go:82] duration metric: took 399.168259ms for pod "kube-proxy-qlspc" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:02.269277   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:02.466291   92290 request.go:632] Waited for 196.926271ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:49:02.466389   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:49:02.466396   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:02.466404   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:02.466410   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:02.469032   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:02.665807   92290 request.go:632] Waited for 196.268396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:49:02.665882   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:49:02.665890   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:02.665898   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:02.665904   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:02.668639   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:02.669119   92290 pod_ready.go:93] pod "kube-scheduler-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:02.669146   92290 pod_ready.go:82] duration metric: took 399.856591ms for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:02.669161   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:02.866343   92290 request.go:632] Waited for 197.077285ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m02
	I0916 10:49:02.866397   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m02
	I0916 10:49:02.866403   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:02.866410   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:02.866417   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:02.869098   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:03.066110   92290 request.go:632] Waited for 196.284324ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:49:03.066196   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:49:03.066208   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:03.066220   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:03.066231   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:03.069042   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:03.069509   92290 pod_ready.go:93] pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:03.069526   92290 pod_ready.go:82] duration metric: took 400.356809ms for pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:03.069539   92290 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:03.266511   92290 request.go:632] Waited for 196.885062ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m03
	I0916 10:49:03.266579   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m03
	I0916 10:49:03.266589   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:03.266598   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:03.266604   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:03.269205   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:03.466002   92290 request.go:632] Waited for 196.26883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:49:03.466074   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:49:03.466083   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:03.466092   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:03.466102   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:03.468925   92290 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:49:03.469342   92290 pod_ready.go:93] pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace has status "Ready":"True"
	I0916 10:49:03.469359   92290 pod_ready.go:82] duration metric: took 399.813561ms for pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:49:03.469370   92290 pod_ready.go:39] duration metric: took 5.600591946s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
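Every per-pod wait above reduces to the same predicate: does the pod carry a Ready condition with status True? A small self-contained sketch of that check (illustrative, mirroring what the pod_ready.go waits poll for):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod has condition Ready=True, the
// predicate behind each "has status Ready:True" line above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(isPodReady(pod)) // true
}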
	I0916 10:49:03.469388   92290 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:49:03.469436   92290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:49:03.481581   92290 system_svc.go:56] duration metric: took 12.188844ms WaitForService to wait for kubelet
	I0916 10:49:03.481610   92290 kubeadm.go:582] duration metric: took 7.718825504s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:49:03.481626   92290 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:49:03.665970   92290 request.go:632] Waited for 184.262092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:49:03.666025   92290 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:49:03.666035   92290 round_trippers.go:469] Request Headers:
	I0916 10:49:03.666045   92290 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:49:03.666054   92290 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:49:03.669585   92290 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:49:03.670679   92290 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:49:03.670701   92290 node_conditions.go:123] node cpu capacity is 8
	I0916 10:49:03.670710   92290 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:49:03.670714   92290 node_conditions.go:123] node cpu capacity is 8
	I0916 10:49:03.670718   92290 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:49:03.670721   92290 node_conditions.go:123] node cpu capacity is 8
	I0916 10:49:03.670724   92290 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:49:03.670727   92290 node_conditions.go:123] node cpu capacity is 8
	I0916 10:49:03.670731   92290 node_conditions.go:105] duration metric: took 189.100239ms to run NodePressure ...
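The NodePressure pass above reads each node's conditions and capacities, which is where the "cpu capacity is 8" and "ephemeral capacity" lines come from. A sketch of the same verification against a corev1.Node (illustrative; a real run would fetch the nodes from a clientset):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// checkNode flags any pressure condition that is True and prints the two
// capacities the log reports for each node.
func checkNode(node *corev1.Node) {
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			if c.Status == corev1.ConditionTrue {
				fmt.Printf("node under %s\n", c.Type)
			}
		}
	}
	fmt.Printf("cpu capacity: %s, ephemeral-storage capacity: %s\n",
		node.Status.Capacity.Cpu(), node.Status.Capacity.StorageEphemeral())
}

func main() {
	checkNode(&corev1.Node{}) // would normally come from cs.CoreV1().Nodes().List
}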
	I0916 10:49:03.670741   92290 start.go:241] waiting for startup goroutines ...
	I0916 10:49:03.670763   92290 start.go:255] writing updated cluster config ...
	I0916 10:49:03.671047   92290 ssh_runner.go:195] Run: rm -f paused
	I0916 10:49:03.677780   92290 out.go:177] * Done! kubectl is now configured to use "ha-770465" cluster and "default" namespace by default
	E0916 10:49:03.679351   92290 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
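"exec format error" on fork/exec almost always means the binary's architecture does not match the host. A quick diagnostic sketch using the path from the error line above (assumption: the file is an ELF binary):

package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
)

func main() {
	f, err := elf.Open("/usr/local/bin/kubectl")
	if err != nil {
		fmt.Fprintln(os.Stderr, "not a readable ELF binary:", err)
		os.Exit(1)
	}
	defer f.Close()
	// An amd64 host needs EM_X86_64; any other machine type reproduces the error.
	fmt.Printf("binary machine: %v, host arch: %s\n", f.Machine, runtime.GOARCH)
}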
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	946241353e03d       6e38f40d628db       25 seconds ago       Running             storage-provisioner       2                   c3c1ad84b80d1       storage-provisioner
	0ee20b8c8789a       12968670680f4       About a minute ago   Running             kindnet-cni               1                   29b4d23a9d620       kindnet-grjh8
	81f453ca3f8d1       c69fa2e9cbf5f       About a minute ago   Running             coredns                   1                   0ea2d513d8370       coredns-7c65d6cfc9-9lw9q
	917ef16037c50       c69fa2e9cbf5f       About a minute ago   Running             coredns                   1                   d9ffbdccdd56c       coredns-7c65d6cfc9-sbs22
	8227f9c32d21c       8c811b4aec35f       About a minute ago   Running             busybox                   1                   4fee99e37559b       busybox-7dff88458-845rc
	c762b9bc541ee       6e38f40d628db       About a minute ago   Exited              storage-provisioner       1                   c3c1ad84b80d1       storage-provisioner
	de5c6dcf960e9       60c005f310ff3       About a minute ago   Running             kube-proxy                1                   2167f95b9241b       kube-proxy-gd2mt
	bcd02f03466d8       9aa1fad941575       About a minute ago   Running             kube-scheduler            1                   528f83f1c8d77       kube-scheduler-ha-770465
	4a562d336c170       175ffd71cce3d       About a minute ago   Running             kube-controller-manager   1                   ec32d7f38f4f8       kube-controller-manager-ha-770465
	f7cfb57f60029       38af8ddebf499       About a minute ago   Running             kube-vip                  0                   3b11ee3ddac2f       kube-vip-ha-770465
	e87832bf428c0       2e96e5913fc06       About a minute ago   Running             etcd                      1                   4dc0fb6f28527       etcd-ha-770465
	b715d9632d76b       6bab7719df100       About a minute ago   Running             kube-apiserver            1                   ba20d64a5ab26       kube-apiserver-ha-770465
	e01ca3a0115c5       8c811b4aec35f       3 minutes ago        Exited              busybox                   0                   55f666e26fe6c       busybox-7dff88458-845rc
	505568793f357       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   0                   1fd35ed82463b       coredns-7c65d6cfc9-sbs22
	120ff8a81efa1       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                   0                   be59c99f1c75f       coredns-7c65d6cfc9-9lw9q
	b31c2d77265e3       12968670680f4       4 minutes ago        Exited              kindnet-cni               0                   3fc06a79ff69e       kindnet-grjh8
	15571e99ab074       60c005f310ff3       4 minutes ago        Exited              kube-proxy                0                   21353a9cca68d       kube-proxy-gd2mt
	8b022d1d91205       2e96e5913fc06       5 minutes ago        Exited              etcd                      0                   1e24ae4d4e2d8       etcd-ha-770465
	fc07020cd4841       9aa1fad941575       5 minutes ago        Exited              kube-scheduler            0                   d47515013434a       kube-scheduler-ha-770465
	780f65ad6abab       175ffd71cce3d       5 minutes ago        Exited              kube-controller-manager   0                   51746ddbcbea1       kube-controller-manager-ha-770465
	535bd4e938e3a       6bab7719df100       5 minutes ago        Exited              kube-apiserver            0                   53fe88679ccf5       kube-apiserver-ha-770465
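The table above resembles what "crictl ps -a" prints on a containerd node. A trivial wrapper sketch for reproducing it (assumes sudo and crictl are available on the host):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// List all containers, running and exited, via the CRI socket.
	cmd := exec.Command("sudo", "crictl", "ps", "-a")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}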
	
	
	==> containerd <==
	Sep 16 10:48:33 ha-770465 containerd[596]: time="2024-09-16T10:48:33.694240260Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:48:34 ha-770465 containerd[596]: time="2024-09-16T10:48:34.151471599Z" level=info msg="RemoveContainer for \"ec0de017ccfa5917b48a621ba0257c01fb46d96654a8d2e3f173a41e811e0f0e\""
	Sep 16 10:48:34 ha-770465 containerd[596]: time="2024-09-16T10:48:34.156466737Z" level=info msg="RemoveContainer for \"ec0de017ccfa5917b48a621ba0257c01fb46d96654a8d2e3f173a41e811e0f0e\" returns successfully"
	Sep 16 10:48:47 ha-770465 containerd[596]: time="2024-09-16T10:48:47.960228063Z" level=info msg="CreateContainer within sandbox \"c3c1ad84b80d11292f4cbee11f2b4cdf55236813b5755847e4a0852afe5f3456\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
	Sep 16 10:48:47 ha-770465 containerd[596]: time="2024-09-16T10:48:47.974018379Z" level=info msg="CreateContainer within sandbox \"c3c1ad84b80d11292f4cbee11f2b4cdf55236813b5755847e4a0852afe5f3456\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"946241353e03d16a05ed42c23006cfa465d022f50c7580d1bec22425ee59a4ac\""
	Sep 16 10:48:47 ha-770465 containerd[596]: time="2024-09-16T10:48:47.974603022Z" level=info msg="StartContainer for \"946241353e03d16a05ed42c23006cfa465d022f50c7580d1bec22425ee59a4ac\""
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.019050318Z" level=info msg="StartContainer for \"946241353e03d16a05ed42c23006cfa465d022f50c7580d1bec22425ee59a4ac\" returns successfully"
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.944691471Z" level=info msg="RemoveContainer for \"75391807e98390e5055c12f632996e1dc188ba32700573915b99ed477d23fb36\""
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.949932362Z" level=info msg="RemoveContainer for \"75391807e98390e5055c12f632996e1dc188ba32700573915b99ed477d23fb36\" returns successfully"
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.951688580Z" level=info msg="StopPodSandbox for \"bbeb0c20f306900f9522d4778e486cd6db5a5a7cb2045b50e6690213605e41f3\""
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.951825837Z" level=info msg="TearDown network for sandbox \"bbeb0c20f306900f9522d4778e486cd6db5a5a7cb2045b50e6690213605e41f3\" successfully"
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.951840204Z" level=info msg="StopPodSandbox for \"bbeb0c20f306900f9522d4778e486cd6db5a5a7cb2045b50e6690213605e41f3\" returns successfully"
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.952312884Z" level=info msg="RemovePodSandbox for \"bbeb0c20f306900f9522d4778e486cd6db5a5a7cb2045b50e6690213605e41f3\""
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.952357448Z" level=info msg="Forcibly stopping sandbox \"bbeb0c20f306900f9522d4778e486cd6db5a5a7cb2045b50e6690213605e41f3\""
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.952429151Z" level=info msg="TearDown network for sandbox \"bbeb0c20f306900f9522d4778e486cd6db5a5a7cb2045b50e6690213605e41f3\" successfully"
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.956705364Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bbeb0c20f306900f9522d4778e486cd6db5a5a7cb2045b50e6690213605e41f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.956836699Z" level=info msg="RemovePodSandbox \"bbeb0c20f306900f9522d4778e486cd6db5a5a7cb2045b50e6690213605e41f3\" returns successfully"
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.957398365Z" level=info msg="StopPodSandbox for \"f2ec4aec1e0b2f544419cd3b2e450831fb150e387410de1a2355c7eea6e5795e\""
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.957501411Z" level=info msg="TearDown network for sandbox \"f2ec4aec1e0b2f544419cd3b2e450831fb150e387410de1a2355c7eea6e5795e\" successfully"
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.957514943Z" level=info msg="StopPodSandbox for \"f2ec4aec1e0b2f544419cd3b2e450831fb150e387410de1a2355c7eea6e5795e\" returns successfully"
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.957867599Z" level=info msg="RemovePodSandbox for \"f2ec4aec1e0b2f544419cd3b2e450831fb150e387410de1a2355c7eea6e5795e\""
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.957902463Z" level=info msg="Forcibly stopping sandbox \"f2ec4aec1e0b2f544419cd3b2e450831fb150e387410de1a2355c7eea6e5795e\""
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.957989408Z" level=info msg="TearDown network for sandbox \"f2ec4aec1e0b2f544419cd3b2e450831fb150e387410de1a2355c7eea6e5795e\" successfully"
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.962987859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2ec4aec1e0b2f544419cd3b2e450831fb150e387410de1a2355c7eea6e5795e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 16 10:48:48 ha-770465 containerd[596]: time="2024-09-16T10:48:48.963082202Z" level=info msg="RemovePodSandbox \"f2ec4aec1e0b2f544419cd3b2e450831fb150e387410de1a2355c7eea6e5795e\" returns successfully"
	
	
	==> coredns [120ff8a81efa1183e1409d1cdb8fa5e1e7c675ebb3d0f165783c5512f48e07ce] <==
	[INFO] 127.0.0.1:47401 - 6102 "HINFO IN 7552043894687877427.7409354771220060933. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009655762s
	[INFO] 10.244.2.2:41874 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000284968s
	[INFO] 10.244.2.2:43872 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000938288s
	[INFO] 10.244.1.2:52261 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161563s
	[INFO] 10.244.1.2:56357 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.001567449s
	[INFO] 10.244.1.2:42838 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000111184s
	[INFO] 10.244.1.2:53654 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001745214s
	[INFO] 10.244.0.4:53747 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.011812399s
	[INFO] 10.244.2.2:58497 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001353637s
	[INFO] 10.244.2.2:44119 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000158419s
	[INFO] 10.244.1.2:54873 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164329s
	[INFO] 10.244.1.2:44900 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001619482s
	[INFO] 10.244.1.2:52029 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000070813s
	[INFO] 10.244.0.4:56319 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144654s
	[INFO] 10.244.0.4:58425 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0002097s
	[INFO] 10.244.2.2:50531 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000233084s
	[INFO] 10.244.1.2:57721 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200098s
	[INFO] 10.244.1.2:47494 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147603s
	[INFO] 10.244.1.2:55948 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104458s
	[INFO] 10.244.1.2:41737 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000105046s
	[INFO] 10.244.0.4:56889 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184697s
	[INFO] 10.244.0.4:58113 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000142403s
	[INFO] 10.244.2.2:46838 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183592s
	[INFO] 10.244.2.2:57080 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000106185s
	[INFO] 10.244.1.2:47643 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000156174s
	
	
	==> coredns [505568793f3574e96c0799007e1921a7d91c4ad3c8aeba5624a5d0c4a02e46d5] <==
	[INFO] 10.244.0.4:52021 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000136338s
	[INFO] 10.244.0.4:55747 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000112985s
	[INFO] 10.244.2.2:51737 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184765s
	[INFO] 10.244.2.2:53734 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001929846s
	[INFO] 10.244.2.2:48077 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125445s
	[INFO] 10.244.2.2:56941 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000093993s
	[INFO] 10.244.2.2:53593 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010639s
	[INFO] 10.244.2.2:53291 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.0000714s
	[INFO] 10.244.1.2:54655 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000185177s
	[INFO] 10.244.1.2:48932 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002062451s
	[INFO] 10.244.1.2:41866 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000103063s
	[INFO] 10.244.1.2:51846 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082591s
	[INFO] 10.244.1.2:55756 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000087775s
	[INFO] 10.244.0.4:55553 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000098067s
	[INFO] 10.244.0.4:54433 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008689s
	[INFO] 10.244.2.2:46677 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00019083s
	[INFO] 10.244.2.2:33741 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073821s
	[INFO] 10.244.2.2:54300 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115863s
	[INFO] 10.244.0.4:41373 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000182332s
	[INFO] 10.244.0.4:46249 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174562s
	[INFO] 10.244.2.2:53722 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107299s
	[INFO] 10.244.2.2:37649 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000141192s
	[INFO] 10.244.1.2:47658 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000179545s
	[INFO] 10.244.1.2:40089 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000124796s
	[INFO] 10.244.1.2:58475 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000130146s
	
	
	==> coredns [81f453ca3f8d171840aacc686ad19952955400177043021ed6f8e79531037bec] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54507 - 15852 "HINFO IN 7992863260379517052.3058821443598282648. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00991266s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2002612651]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:48:03.956) (total time: 30000ms):
	Trace[2002612651]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:48:33.957)
	Trace[2002612651]: [30.000909037s] [30.000909037s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[912886218]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:48:03.956) (total time: 30000ms):
	Trace[912886218]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:48:33.957)
	Trace[912886218]: [30.000626226s] [30.000626226s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1231085643]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:48:03.956) (total time: 30001ms):
	Trace[1231085643]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:48:33.957)
	Trace[1231085643]: [30.001617592s] [30.001617592s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [917ef16037c509fa5bcfbad0bd3aae289f62731b5435ad933b59b707dbe0320e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44724 - 9916 "HINFO IN 5396320650353980330.3094598020936758036. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011058339s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1325993115]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:48:03.956) (total time: 30000ms):
	Trace[1325993115]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:48:33.957)
	Trace[1325993115]: [30.000839358s] [30.000839358s] END
	[INFO] plugin/kubernetes: Trace[1505747015]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:48:03.956) (total time: 30001ms):
	Trace[1505747015]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:48:33.957)
	Trace[1505747015]: [30.001035282s] [30.001035282s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[38045809]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:48:03.956) (total time: 30001ms):
	Trace[38045809]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:48:33.957)
	Trace[38045809]: [30.001098584s] [30.001098584s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-770465
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-770465
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-770465
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_44_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:44:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-770465
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:49:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:48:02 +0000   Mon, 16 Sep 2024 10:44:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:48:02 +0000   Mon, 16 Sep 2024 10:44:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:48:02 +0000   Mon, 16 Sep 2024 10:44:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:48:02 +0000   Mon, 16 Sep 2024 10:44:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-770465
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 c9d6594d7abd4b08884d937988d7952e
	  System UUID:                f3656390-934b-423a-8190-9f78053eddee
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-845rc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 coredns-7c65d6cfc9-9lw9q             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m50s
	  kube-system                 coredns-7c65d6cfc9-sbs22             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m50s
	  kube-system                 etcd-ha-770465                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m57s
	  kube-system                 kindnet-grjh8                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m50s
	  kube-system                 kube-apiserver-ha-770465             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-controller-manager-ha-770465    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-gd2mt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-scheduler-ha-770465             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-vip-ha-770465                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 69s                kube-proxy       
	  Normal   Starting                 4m48s              kube-proxy       
	  Normal   NodeHasSufficientPID     4m55s              kubelet          Node ha-770465 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m55s              kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m55s              kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  4m55s              kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m55s              kubelet          Node ha-770465 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m55s              kubelet          Node ha-770465 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           4m51s              node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           4m29s              node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           3m54s              node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           2m7s               node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Warning  CgroupV1                 86s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 86s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  85s (x8 over 86s)  kubelet          Node ha-770465 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    85s (x7 over 86s)  kubelet          Node ha-770465 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     85s (x7 over 86s)  kubelet          Node ha-770465 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           73s                node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           73s                node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           57s                node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	
	
	Name:               ha-770465-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-770465-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-770465
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_44_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:44:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-770465-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:49:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:48:05 +0000   Mon, 16 Sep 2024 10:44:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:48:05 +0000   Mon, 16 Sep 2024 10:44:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:48:05 +0000   Mon, 16 Sep 2024 10:44:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:48:05 +0000   Mon, 16 Sep 2024 10:44:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-770465-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 838d2a4bb9b54268a0af6d8a87d15a0c
	  System UUID:                0ec75a9b-7a96-466a-872e-476404dc1e5d
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-klfw4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 etcd-ha-770465-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m35s
	  kube-system                 kindnet-kht59                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m37s
	  kube-system                 kube-apiserver-ha-770465-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-controller-manager-ha-770465-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-4qgcs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m37s
	  kube-system                 kube-scheduler-ha-770465-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m29s
	  kube-system                 kube-vip-ha-770465-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 72s                    kube-proxy       
	  Normal   Starting                 4m33s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    4m37s (x7 over 4m37s)  kubelet          Node ha-770465-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m37s (x7 over 4m37s)  kubelet          Node ha-770465-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  4m37s (x8 over 4m37s)  kubelet          Node ha-770465-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  4m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m36s                  node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   RegisteredNode           4m29s                  node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   RegisteredNode           3m54s                  node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   NodeAllocatableEnforced  2m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     2m14s (x7 over 2m14s)  kubelet          Node ha-770465-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m14s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m14s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node ha-770465-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m14s (x7 over 2m14s)  kubelet          Node ha-770465-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           2m7s                   node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   Starting                 83s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 83s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  83s (x8 over 83s)      kubelet          Node ha-770465-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s (x7 over 83s)      kubelet          Node ha-770465-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s (x7 over 83s)      kubelet          Node ha-770465-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  83s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           73s                    node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   RegisteredNode           73s                    node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   RegisteredNode           57s                    node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	
	
	Name:               ha-770465-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-770465-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-770465
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_46_20_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:46:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-770465-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:49:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:48:57 +0000   Mon, 16 Sep 2024 10:48:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:48:57 +0000   Mon, 16 Sep 2024 10:48:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:48:57 +0000   Mon, 16 Sep 2024 10:48:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:48:57 +0000   Mon, 16 Sep 2024 10:48:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-770465-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 f69dd43257b54b02808522afdaf86275
	  System UUID:                82d9765a-9474-4a2c-ae78-19bbbf1ab150
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hjjqt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kindnet-bflwn              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m53s
	  kube-system                 kube-proxy-78l2l           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14s                    kube-proxy       
	  Normal   Starting                 2m52s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  2m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m55s (x2 over 2m55s)  kubelet          Node ha-770465-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m55s (x2 over 2m55s)  kubelet          Node ha-770465-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m55s (x2 over 2m55s)  kubelet          Node ha-770465-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m54s                  node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal   RegisteredNode           2m54s                  node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal   NodeReady                2m54s                  kubelet          Node ha-770465-m04 status is now: NodeReady
	  Normal   RegisteredNode           2m51s                  node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal   RegisteredNode           2m7s                   node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal   RegisteredNode           73s                    node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal   RegisteredNode           73s                    node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal   RegisteredNode           57s                    node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal   NodeNotReady             33s                    node-controller  Node ha-770465-m04 status is now: NodeNotReady
	  Normal   Starting                 23s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 23s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  23s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  17s (x8 over 23s)      kubelet          Node ha-770465-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    17s (x7 over 23s)      kubelet          Node ha-770465-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     17s (x7 over 23s)      kubelet          Node ha-770465-m04 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +1.014861] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000007] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.003977] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c95c64bb41bd
	[  +0.000002] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +6.043454] net_ratelimit: 7 callbacks suppressed
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c95c64bb41bd
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000002] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000004] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c95c64bb41bd
	[  +0.000002] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c95c64bb41bd
	[  +0.000002] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.003936] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +8.187283] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-c95c64bb41bd
	[  +0.000004] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000026] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000008] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	
	
	==> etcd [8b022d1d912058b6aec308a7f6777b3f8fcb7b0b8c051be8ff2b7c53dc37450c] <==
	{"level":"warn","ts":"2024-09-16T10:46:59.500032Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"f23d31ee9f17f736","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:46:59.500089Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"f23d31ee9f17f736","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"info","ts":"2024-09-16T10:47:01.741007Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"f23d31ee9f17f736"}
	{"level":"info","ts":"2024-09-16T10:47:01.741475Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736"}
	{"level":"info","ts":"2024-09-16T10:47:01.743867Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736"}
	{"level":"info","ts":"2024-09-16T10:47:01.750769Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"f23d31ee9f17f736","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-16T10:47:01.750833Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736"}
	{"level":"info","ts":"2024-09-16T10:47:01.753898Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"f23d31ee9f17f736","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-16T10:47:01.753939Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736"}
	{"level":"warn","ts":"2024-09-16T10:47:18.513345Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd","error":"unexpected EOF"}
	{"level":"warn","ts":"2024-09-16T10:47:18.513435Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"1849ecf187a2b8dd","error":"failed to read 1849ecf187a2b8dd on stream MsgApp v2 (unexpected EOF)"}
	{"level":"warn","ts":"2024-09-16T10:47:18.513335Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd","error":"unexpected EOF"}
	{"level":"warn","ts":"2024-09-16T10:47:18.601784Z","caller":"rafthttp/stream.go:223","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"warn","ts":"2024-09-16T10:47:18.610252Z","caller":"rafthttp/stream.go:223","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"warn","ts":"2024-09-16T10:47:19.518727Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"1849ecf187a2b8dd","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:47:19.518774Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1849ecf187a2b8dd","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:47:23.519464Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"1849ecf187a2b8dd","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:47:23.519516Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1849ecf187a2b8dd","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:47:27.521011Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"1849ecf187a2b8dd","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:47:27.521066Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"1849ecf187a2b8dd","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-09-16T10:47:30.340941Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736","error":"unexpected EOF"}
	{"level":"warn","ts":"2024-09-16T10:47:30.341007Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"f23d31ee9f17f736","error":"failed to read f23d31ee9f17f736 on stream Message (unexpected EOF)"}
	{"level":"warn","ts":"2024-09-16T10:47:30.340939Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736","error":"unexpected EOF"}
	{"level":"warn","ts":"2024-09-16T10:47:30.545071Z","caller":"rafthttp/stream.go:223","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736"}
	{"level":"warn","ts":"2024-09-16T10:47:30.918742Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128031932284353946,"retry-timeout":"500ms"}
	
	
	==> etcd [e87832bf428c0d5daf61e53f57c9813ace0d2d4a7ba9c30b2fee46730d2c6de1] <==
	{"level":"info","ts":"2024-09-16T10:48:10.943889Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:48:10.947835Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"1849ecf187a2b8dd","stream-type":"stream Message"}
	{"level":"info","ts":"2024-09-16T10:48:10.947873Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:48:10.952658Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"aec36adc501070cc","to":"1849ecf187a2b8dd","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-09-16T10:48:10.952694Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"warn","ts":"2024-09-16T10:48:11.133209Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"1849ecf187a2b8dd","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-16T10:48:11.133233Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"1849ecf187a2b8dd","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-09-16T10:49:07.533756Z","caller":"embed/config_logging.go:170","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:42612","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-09-16T10:49:07.544297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 17455162631699035958)"}
	{"level":"info","ts":"2024-09-16T10:49:07.545450Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"1849ecf187a2b8dd","removed-remote-peer-urls":["https://192.168.49.4:2380"]}
	{"level":"info","ts":"2024-09-16T10:49:07.545508Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"warn","ts":"2024-09-16T10:49:07.546088Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:49:07.546122Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"warn","ts":"2024-09-16T10:49:07.546228Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:49:07.546257Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:49:07.546346Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"warn","ts":"2024-09-16T10:49:07.546452Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd","error":"context canceled"}
	{"level":"warn","ts":"2024-09-16T10:49:07.546489Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"1849ecf187a2b8dd","error":"failed to read 1849ecf187a2b8dd on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-09-16T10:49:07.546511Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"warn","ts":"2024-09-16T10:49:07.546588Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd","error":"context canceled"}
	{"level":"info","ts":"2024-09-16T10:49:07.546610Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:49:07.546622Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:49:07.546648Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"warn","ts":"2024-09-16T10:49:07.559001Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"1849ecf187a2b8dd"}
	{"level":"warn","ts":"2024-09-16T10:49:07.559713Z","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.49.4:33934","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:49:14 up 31 min,  0 users,  load average: 2.44, 1.61, 0.95
	Linux ha-770465 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [0ee20b8c8789adc13129c1dd9bbf0e03680faaa7a1039ad42d97dbdae47213fd] <==
	I0916 10:48:34.541285       1 main.go:299] handling current node
	I0916 10:48:44.540937       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:48:44.540980       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:48:44.541128       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:48:44.541144       1 main.go:322] Node ha-770465-m03 has CIDR [10.244.2.0/24] 
	I0916 10:48:44.541208       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:48:44.541230       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:48:44.541294       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:48:44.541307       1 main.go:299] handling current node
	I0916 10:48:54.544511       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:48:54.544562       1 main.go:322] Node ha-770465-m03 has CIDR [10.244.2.0/24] 
	I0916 10:48:54.544723       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:48:54.544746       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:48:54.544809       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:48:54.544817       1 main.go:299] handling current node
	I0916 10:48:54.544831       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:48:54.544837       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:49:04.540717       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:49:04.540761       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:49:04.540893       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:49:04.540902       1 main.go:322] Node ha-770465-m03 has CIDR [10.244.2.0/24] 
	I0916 10:49:04.540937       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:49:04.540944       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:49:04.540976       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:49:04.540983       1 main.go:299] handling current node
	
	
	==> kindnet [b31c2d77265e3a87517539fba911addc87dcfa7cd4932f3fa5cfa6b294afd8aa] <==
	I0916 10:46:55.754591       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:47:05.756725       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:47:05.756759       1 main.go:299] handling current node
	I0916 10:47:05.756775       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:47:05.756779       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:05.756919       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:47:05.756932       1 main.go:322] Node ha-770465-m03 has CIDR [10.244.2.0/24] 
	I0916 10:47:05.756992       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:47:05.757000       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:47:15.752944       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:47:15.752997       1 main.go:299] handling current node
	I0916 10:47:15.753015       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:47:15.753023       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:15.753185       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:47:15.753204       1 main.go:322] Node ha-770465-m03 has CIDR [10.244.2.0/24] 
	I0916 10:47:15.753255       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:47:15.753269       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:47:25.753040       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:47:25.753073       1 main.go:299] handling current node
	I0916 10:47:25.753091       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:47:25.753097       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:47:25.753299       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:47:25.753315       1 main.go:322] Node ha-770465-m03 has CIDR [10.244.2.0/24] 
	I0916 10:47:25.753390       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:47:25.753400       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [535bd4e938e3aeb6ecfbd02d81bf8fc060b9bb649a67b3f28d6b43d2c199e4ba] <==
	I0916 10:44:17.977097       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:44:17.981779       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:44:18.429026       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:44:19.732485       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:44:19.743980       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:44:19.753201       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:44:24.080680       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 10:44:24.180774       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0916 10:46:04.259087       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54782: use of closed network connection
	E0916 10:46:04.412401       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54796: use of closed network connection
	E0916 10:46:04.568563       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54808: use of closed network connection
	E0916 10:46:04.740761       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54822: use of closed network connection
	E0916 10:46:04.905896       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54836: use of closed network connection
	E0916 10:46:05.060982       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54858: use of closed network connection
	E0916 10:46:05.228361       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54878: use of closed network connection
	E0916 10:46:05.380406       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54894: use of closed network connection
	E0916 10:46:05.547512       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54904: use of closed network connection
	E0916 10:46:05.822889       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54930: use of closed network connection
	E0916 10:46:05.978196       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54942: use of closed network connection
	E0916 10:46:06.125590       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54966: use of closed network connection
	E0916 10:46:06.271367       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:54990: use of closed network connection
	E0916 10:46:06.417557       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:55004: use of closed network connection
	E0916 10:46:06.561545       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:55012: use of closed network connection
	W0916 10:46:57.980256       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	
	
	==> kube-apiserver [b715d9632d76bec5b9249626c0e047c8c7d8720a8f0f370d24d64c3acc85d01d] <==
	I0916 10:47:58.325209       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0916 10:47:58.325296       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0916 10:47:58.342845       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:47:58.343038       1 policy_source.go:224] refreshing policies
	I0916 10:47:58.344599       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:47:58.419927       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:47:58.420046       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:47:58.420145       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:47:58.421437       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:47:58.421464       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:47:58.421185       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:47:58.421201       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:47:58.421993       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:47:58.420099       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:47:58.422206       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:47:58.422215       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:47:58.422223       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:47:58.433404       1 shared_informer.go:320] Caches are synced for node_authorizer
	W0916 10:47:58.433677       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.4]
	I0916 10:47:58.436071       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:47:58.438166       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:47:58.441983       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0916 10:47:58.444052       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0916 10:47:59.281027       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:47:59.555302       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3 192.168.49.4]
	
	
	==> kube-controller-manager [4a562d336c1706e425c8ce858242155970a39095a512cf3b2064ce89d4f54369] <==
	I0916 10:48:43.292330       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"5a99e4e7-454d-48ca-8c88-14bcdda1194b", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tw8rq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tw8rq": the object has been modified; please apply your changes to the latest version and try again
	I0916 10:48:43.320173       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="47.571407ms"
	I0916 10:48:43.322908       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tw8rq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tw8rq\": the object has been modified; please apply your changes to the latest version and try again"
	I0916 10:48:43.324084       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"5a99e4e7-454d-48ca-8c88-14bcdda1194b", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tw8rq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tw8rq": the object has been modified; please apply your changes to the latest version and try again
	I0916 10:48:43.354698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="34.404985ms"
	I0916 10:48:43.354892       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="135.793µs"
	I0916 10:48:46.754272       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:48:57.698245       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-770465-m04"
	I0916 10:48:57.698462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:48:57.707954       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:49:01.688748       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:49:04.291789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m03"
	I0916 10:49:04.302952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m03"
	I0916 10:49:04.345976       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.238623ms"
	I0916 10:49:04.398190       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.160869ms"
	I0916 10:49:04.409819       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.573808ms"
	I0916 10:49:04.410279       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="97.503µs"
	I0916 10:49:06.474706       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.139µs"
	I0916 10:49:07.079368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="79.471µs"
	I0916 10:49:07.083495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.63µs"
	I0916 10:49:07.422446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.388254ms"
	I0916 10:49:07.422581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.3µs"
	I0916 10:49:08.519926       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-770465-m04"
	I0916 10:49:08.520048       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m03"
	E0916 10:49:08.543492       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-770465-m03\", UID:\"11a2730d-1724-4ba7-9d9a-9d2e6b786df0\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-770465-m03\", UID:\"277c51be-ae79-4a39-b8dc-63020200f29c\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-770465-m03\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [780f65ad6abab29bdde89c430c29bcd890f45aa17487c1bfd744c963df712f3d] <==
	I0916 10:45:38.814650       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.1µs"
	I0916 10:45:41.523632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.768µs"
	I0916 10:45:42.638341       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m03"
	I0916 10:45:52.041669       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465"
	I0916 10:46:03.827357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.858169ms"
	I0916 10:46:03.827464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.444µs"
	I0916 10:46:08.674888       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m02"
	I0916 10:46:13.096265       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m03"
	E0916 10:46:19.274244       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-8wfr5 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-8wfr5\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0916 10:46:19.399508       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-770465-m04\" does not exist"
	I0916 10:46:19.440331       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-770465-m04" podCIDRs=["10.244.3.0/24"]
	I0916 10:46:19.440377       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:19.440419       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:19.874388       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:20.121826       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:20.188037       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:20.657563       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-770465-m04"
	I0916 10:46:20.657871       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:20.671968       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:46:23.179728       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-770465-m04"
	I0916 10:46:23.180106       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:47:13.362215       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m02"
	I0916 10:47:16.706698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.294723ms"
	I0916 10:47:16.706831       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.985µs"
	I0916 10:47:17.699106       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="73.336µs"
	
	
	==> kube-proxy [15571e99ab074e3b158931e74a462086cc1bc9b84b6b39d511e64dbebca8dac3] <==
	I0916 10:44:25.058145       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:44:25.228881       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:44:25.228958       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:44:25.251975       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:44:25.252031       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:44:25.255017       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:44:25.255521       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:44:25.255550       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:44:25.256997       1 config.go:199] "Starting service config controller"
	I0916 10:44:25.257209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:44:25.257043       1 config.go:328] "Starting node config controller"
	I0916 10:44:25.257490       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:44:25.257086       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:44:25.257634       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:44:25.357729       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:44:25.357756       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:44:25.360110       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [de5c6dcf960e9561503f4b0b4b3900a6a55e051755584f47521977a698ad11bb] <==
	I0916 10:48:03.746816       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:48:04.024990       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:48:04.025076       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:48:04.045298       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:48:04.045353       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:48:04.047216       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:48:04.047660       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:48:04.047689       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:48:04.048772       1 config.go:199] "Starting service config controller"
	I0916 10:48:04.048791       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:48:04.048824       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:48:04.048826       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:48:04.048889       1 config.go:328] "Starting node config controller"
	I0916 10:48:04.048897       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:48:04.149460       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:48:04.149502       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:48:04.149599       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [bcd02f03466d85592977b36046584eb0eb24d4040a9a28d2400852992bb02a91] <==
	I0916 10:47:57.021055       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:47:58.320429       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:47:58.320473       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:47:58.320485       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:47:58.320494       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:47:58.344793       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:47:58.344842       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:47:58.347539       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:47:58.347685       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:47:58.347706       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:47:58.347718       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:47:58.448378       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [fc07020cd48414dd7978cd32b7fffa3b3bd5d7f72b79b3aa49e4082dffedf8e3] <==
	W0916 10:44:17.534480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:44:17.534534       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:44:17.605947       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:44:17.605995       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:44:17.659989       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 10:44:17.660035       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 10:44:17.672435       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:44:17.672475       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:44:20.730788       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0916 10:45:11.758548       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sb96x\": pod kube-proxy-sb96x is already assigned to node \"ha-770465-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sb96x" node="ha-770465-m03"
	E0916 10:45:11.758691       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sb96x\": pod kube-proxy-sb96x is already assigned to node \"ha-770465-m03\"" pod="kube-system/kube-proxy-sb96x"
	E0916 10:45:35.573275       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-klfw4\": pod busybox-7dff88458-klfw4 is already assigned to node \"ha-770465-m02\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-klfw4" node="ha-770465-m02"
	E0916 10:45:35.573342       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 1f91390f-bdef-4a3b-a8bc-e717d87dee4b(default/busybox-7dff88458-klfw4) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-klfw4"
	E0916 10:45:35.573361       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-klfw4\": pod busybox-7dff88458-klfw4 is already assigned to node \"ha-770465-m02\"" pod="default/busybox-7dff88458-klfw4"
	I0916 10:45:35.573394       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-klfw4" node="ha-770465-m02"
	E0916 10:46:21.389563       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-tw9dw\": pod kindnet-tw9dw is already assigned to node \"ha-770465-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-tw9dw" node="ha-770465-m04"
	E0916 10:46:21.389661       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 211d67ad-c4dc-498b-9ce1-aa4f469a1a54(kube-system/kindnet-tw9dw) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-tw9dw"
	E0916 10:46:21.389685       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-tw9dw\": pod kindnet-tw9dw is already assigned to node \"ha-770465-m04\"" pod="kube-system/kindnet-tw9dw"
	I0916 10:46:21.389710       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-tw9dw" node="ha-770465-m04"
	E0916 10:46:21.390586       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-bflwn\": pod kindnet-bflwn is already assigned to node \"ha-770465-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-bflwn" node="ha-770465-m04"
	E0916 10:46:21.390625       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 59d75712-5683-4b1c-a6ef-2a669d75da7a(kube-system/kindnet-bflwn) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-bflwn"
	E0916 10:46:21.390641       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-bflwn\": pod kindnet-bflwn is already assigned to node \"ha-770465-m04\"" pod="kube-system/kindnet-bflwn"
	I0916 10:46:21.390663       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-bflwn" node="ha-770465-m04"
	E0916 10:46:21.422131       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-vkdfk\": pod kindnet-vkdfk is already assigned to node \"ha-770465-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-vkdfk" node="ha-770465-m04"
	E0916 10:46:21.422653       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-vkdfk\": pod kindnet-vkdfk is already assigned to node \"ha-770465-m04\"" pod="kube-system/kindnet-vkdfk"
	
	
	==> kubelet <==
	Sep 16 10:48:02 ha-770465 kubelet[736]: I0916 10:48:02.960240     736 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3d973b3fd718c954a82bac99c258942" path="/var/lib/kubelet/pods/d3d973b3fd718c954a82bac99c258942/volumes"
	Sep 16 10:48:02 ha-770465 kubelet[736]: I0916 10:48:02.984254     736 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-vip-ha-770465"
	Sep 16 10:48:03 ha-770465 kubelet[736]: I0916 10:48:03.043071     736 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:48:03 ha-770465 kubelet[736]: I0916 10:48:03.050260     736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc3bf04d-635a-4264-883b-2fd72cac2e24-lib-modules\") pod \"kube-proxy-gd2mt\" (UID: \"fc3bf04d-635a-4264-883b-2fd72cac2e24\") " pod="kube-system/kube-proxy-gd2mt"
	Sep 16 10:48:03 ha-770465 kubelet[736]: I0916 10:48:03.050309     736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/98658171-1486-44a9-8a20-0d77ea019206-cni-cfg\") pod \"kindnet-grjh8\" (UID: \"98658171-1486-44a9-8a20-0d77ea019206\") " pod="kube-system/kindnet-grjh8"
	Sep 16 10:48:03 ha-770465 kubelet[736]: I0916 10:48:03.050350     736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98658171-1486-44a9-8a20-0d77ea019206-xtables-lock\") pod \"kindnet-grjh8\" (UID: \"98658171-1486-44a9-8a20-0d77ea019206\") " pod="kube-system/kindnet-grjh8"
	Sep 16 10:48:03 ha-770465 kubelet[736]: I0916 10:48:03.050385     736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc3bf04d-635a-4264-883b-2fd72cac2e24-xtables-lock\") pod \"kube-proxy-gd2mt\" (UID: \"fc3bf04d-635a-4264-883b-2fd72cac2e24\") " pod="kube-system/kube-proxy-gd2mt"
	Sep 16 10:48:03 ha-770465 kubelet[736]: I0916 10:48:03.050641     736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cf470925-4874-4744-8015-700e93ab924f-tmp\") pod \"storage-provisioner\" (UID: \"cf470925-4874-4744-8015-700e93ab924f\") " pod="kube-system/storage-provisioner"
	Sep 16 10:48:03 ha-770465 kubelet[736]: I0916 10:48:03.050865     736 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98658171-1486-44a9-8a20-0d77ea019206-lib-modules\") pod \"kindnet-grjh8\" (UID: \"98658171-1486-44a9-8a20-0d77ea019206\") " pod="kube-system/kindnet-grjh8"
	Sep 16 10:48:03 ha-770465 kubelet[736]: I0916 10:48:03.054539     736 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-770465" podStartSLOduration=1.054518499 podStartE2EDuration="1.054518499s" podCreationTimestamp="2024-09-16 10:48:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:48:03.054228947 +0000 UTC m=+14.174399621" watchObservedRunningTime="2024-09-16 10:48:03.054518499 +0000 UTC m=+14.174689172"
	Sep 16 10:48:03 ha-770465 kubelet[736]: I0916 10:48:03.058903     736 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-vip-ha-770465" podUID="8fe17ef3-e8ea-42a9-bfd4-5f556d0a3f77"
	Sep 16 10:48:03 ha-770465 kubelet[736]: I0916 10:48:03.060997     736 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:48:09 ha-770465 kubelet[736]: E0916 10:48:09.006501     736 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 10:48:09 ha-770465 kubelet[736]: E0916 10:48:09.006544     736 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 10:48:19 ha-770465 kubelet[736]: E0916 10:48:19.029190     736 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 10:48:19 ha-770465 kubelet[736]: E0916 10:48:19.029241     736 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 10:48:29 ha-770465 kubelet[736]: E0916 10:48:29.049029     736 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 10:48:29 ha-770465 kubelet[736]: E0916 10:48:29.049070     736 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 10:48:34 ha-770465 kubelet[736]: I0916 10:48:34.150217     736 scope.go:117] "RemoveContainer" containerID="ec0de017ccfa5917b48a621ba0257c01fb46d96654a8d2e3f173a41e811e0f0e"
	Sep 16 10:48:34 ha-770465 kubelet[736]: I0916 10:48:34.150524     736 scope.go:117] "RemoveContainer" containerID="c762b9bc541ee22b438c83ea602de8196f520a5b546fbcd0af20be9daa33a98c"
	Sep 16 10:48:34 ha-770465 kubelet[736]: E0916 10:48:34.150730     736 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cf470925-4874-4744-8015-700e93ab924f)\"" pod="kube-system/storage-provisioner" podUID="cf470925-4874-4744-8015-700e93ab924f"
	Sep 16 10:48:39 ha-770465 kubelet[736]: E0916 10:48:39.069728     736 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 10:48:39 ha-770465 kubelet[736]: E0916 10:48:39.069768     736 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 10:48:47 ha-770465 kubelet[736]: I0916 10:48:47.957875     736 scope.go:117] "RemoveContainer" containerID="c762b9bc541ee22b438c83ea602de8196f520a5b546fbcd0af20be9daa33a98c"
	Sep 16 10:48:48 ha-770465 kubelet[736]: I0916 10:48:48.942882     736 scope.go:117] "RemoveContainer" containerID="75391807e98390e5055c12f632996e1dc188ba32700573915b99ed477d23fb36"
	

-- /stdout --
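
A note on the repeated `Operation cannot be fulfilled on ...: the object has been modified; please apply your changes to the latest version and try again` entries in the kube-controller-manager log above: that message is the API server's optimistic-concurrency check rejecting a write made against a stale `resourceVersion`, and the controllers recover on their own by re-reading the object and retrying. For anyone reproducing this against the cluster, here is a minimal client-go sketch of the same retry pattern, assuming a working clientset; the service name comes from the log, but the label change is purely illustrative:

```go
// Minimal sketch of the conflict-retry pattern behind the
// "object has been modified" messages above. Assumes a working
// kubernetes.Interface; the label key is hypothetical.
package conflictretry

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func relabelKubeDNS(ctx context.Context, cs kubernetes.Interface) error {
	// RetryOnConflict re-runs the closure whenever the API server
	// answers 409 Conflict (stale resourceVersion), the same error
	// the endpointslice controller hits in the log above.
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		svc, err := cs.CoreV1().Services("kube-system").Get(ctx, "kube-dns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if svc.Labels == nil {
			svc.Labels = map[string]string{}
		}
		svc.Labels["example.io/touched"] = "true" // hypothetical change
		_, err = cs.CoreV1().Services("kube-system").Update(ctx, svc, metav1.UpdateOptions{})
		return err
	})
}
```
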
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-770465 -n ha-770465
helpers_test.go:261: (dbg) Run:  kubectl --context ha-770465 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context ha-770465 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (488.164µs)
helpers_test.go:263: kubectl --context ha-770465 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (11.42s)
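
The root cause of this failure is not the cluster: every kubectl invocation dies with `fork/exec /usr/local/bin/kubectl: exec format error`, which is the kernel's ENOEXEC, meaning the kubectl file on the test host is not a runnable binary for this machine (typically a wrong-architecture build or a truncated/corrupt download). A quick way to confirm, sketched in Go under the assumption of a Linux/ELF host (the path is taken from the error above):

```go
// Sketch: check whether a binary's ELF machine type matches the host.
// An aarch64 kubectl on an amd64 host fails with exactly the
// "exec format error" seen in this test.
package main

import (
	"debug/elf"
	"fmt"
	"log"
	"runtime"
)

func main() {
	const path = "/usr/local/bin/kubectl" // path from the failure above

	f, err := elf.Open(path)
	if err != nil {
		// Not ELF at all: truncated download, HTML error page, etc.
		log.Fatalf("%s is not a valid ELF binary: %v", path, err)
	}
	defer f.Close()

	fmt.Printf("binary machine: %s, host: %s/%s\n", f.Machine, runtime.GOOS, runtime.GOARCH)
	if runtime.GOARCH == "amd64" && f.Machine != elf.EM_X86_64 {
		fmt.Println("architecture mismatch: reinstall kubectl for this host")
	}
}
```
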

TestMultiControlPlane/serial/RestartCluster (68.91s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-770465 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0916 10:49:52.116947   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-770465 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m5.909782826s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:584: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (502.958µs)
ha_test.go:586: failed to run kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
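
Same failure mode as above: the cluster restart itself succeeded (the start completed in about 1m6s), but the verification step cannot run because the host's kubectl binary is unexecutable. If a harness wanted to report this case distinctly rather than as a generic command failure, `errors.Is` against `syscall.ENOEXEC` should work, since the `*os.PathError` that `os/exec` returns for a failed fork/exec unwraps to the errno. A sketch under that assumption; command and arguments are illustrative:

```go
// Sketch: distinguishing "exec format error" (ENOEXEC) from other
// failure modes when shelling out to kubectl.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	err := exec.Command("kubectl", "get", "nodes").Run()
	switch {
	case err == nil:
		fmt.Println("kubectl ran fine")
	case errors.Is(err, exec.ErrNotFound):
		fmt.Println("kubectl is not on PATH")
	case errors.Is(err, syscall.ENOEXEC):
		// Matches "fork/exec /usr/local/bin/kubectl: exec format error":
		// the file exists but is not a runnable binary for this host.
		fmt.Println("kubectl binary is wrong-architecture or corrupt")
	default:
		fmt.Printf("kubectl failed: %v\n", err)
	}
}
```
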
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-770465
helpers_test.go:235: (dbg) docker inspect ha-770465:

-- stdout --
	[
	    {
	        "Id": "c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf",
	        "Created": "2024-09-16T10:44:02.535590959Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 105202,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:49:51.686583193Z",
	            "FinishedAt": "2024-09-16T10:49:50.93486955Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/hostname",
	        "HostsPath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/hosts",
	        "LogPath": "/var/lib/docker/containers/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf-json.log",
	        "Name": "/ha-770465",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-770465:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-770465",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7cf5268ac52be23fef4a82bd45f08106e1a207d0ea8d2788ec989044339c0bf7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-770465",
	                "Source": "/var/lib/docker/volumes/ha-770465/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-770465",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-770465",
	                "name.minikube.sigs.k8s.io": "ha-770465",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "11da9a22b4e8e61fb7ffee1c4e3c4ec2847f3f0e583b18484408c870efb8f7c0",
	            "SandboxKey": "/var/run/docker/netns/11da9a22b4e8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32833"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32834"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32837"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32835"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32836"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-770465": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c95c64bb41bdebd7017cdb4d495e3e500618752ab547ea09aa27d1cdaf23b64d",
	                    "EndpointID": "611709698ef6634b04ce9f3e9d3a3c7be6700b68ec24809263a9457d2883c74b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-770465",
	                        "c7d04b23d2ab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-770465 -n ha-770465
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-770465 logs -n 25: (1.532007946s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-770465 cp ha-770465-m03:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04:/home/docker/cp-test_ha-770465-m03_ha-770465-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n ha-770465-m04 sudo cat                                          | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /home/docker/cp-test_ha-770465-m03_ha-770465-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-770465 cp testdata/cp-test.txt                                                | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1340522930/001/cp-test_ha-770465-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465:/home/docker/cp-test_ha-770465-m04_ha-770465.txt                       |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n ha-770465 sudo cat                                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /home/docker/cp-test_ha-770465-m04_ha-770465.txt                                 |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m02:/home/docker/cp-test_ha-770465-m04_ha-770465-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n ha-770465-m02 sudo cat                                          | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /home/docker/cp-test_ha-770465-m04_ha-770465-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m03:/home/docker/cp-test_ha-770465-m04_ha-770465-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n                                                                 | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | ha-770465-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-770465 ssh -n ha-770465-m03 sudo cat                                          | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | /home/docker/cp-test_ha-770465-m04_ha-770465-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-770465 node stop m02 -v=7                                                     | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:46 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-770465 node start m02 -v=7                                                    | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:46 UTC | 16 Sep 24 10:47 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-770465 -v=7                                                           | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-770465 -v=7                                                                | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:47 UTC | 16 Sep 24 10:47 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-770465 --wait=true -v=7                                                    | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:47 UTC | 16 Sep 24 10:49 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-770465                                                                | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC |                     |
	| node    | ha-770465 node delete m03 -v=7                                                   | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-770465 stop -v=7                                                              | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:49 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-770465 --wait=true                                                         | ha-770465 | jenkins | v1.34.0 | 16 Sep 24 10:49 UTC | 16 Sep 24 10:50 UTC |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=containerd                                                   |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:49:51
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:49:51.342449  104899 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:49:51.342575  104899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:49:51.342583  104899 out.go:358] Setting ErrFile to fd 2...
	I0916 10:49:51.342589  104899 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:49:51.342769  104899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:49:51.343312  104899 out.go:352] Setting JSON to false
	I0916 10:49:51.344235  104899 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1935,"bootTime":1726481856,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:49:51.344341  104899 start.go:139] virtualization: kvm guest
	I0916 10:49:51.346805  104899 out.go:177] * [ha-770465] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:49:51.348238  104899 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:49:51.348305  104899 notify.go:220] Checking for updates...
	I0916 10:49:51.350770  104899 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:49:51.352058  104899 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:49:51.353263  104899 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:49:51.354601  104899 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:49:51.355860  104899 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:49:51.357510  104899 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:49:51.358022  104899 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:49:51.380739  104899 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:49:51.380831  104899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:49:51.428138  104899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:49:51.419077342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:49:51.428240  104899 docker.go:318] overlay module found
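The driver probe above is just `docker system info --format "{{json .}}"` with the JSON decoded into a struct. A minimal standalone sketch of the same probe, assuming only that the daemon is reachable (the field names are the ones visible in the info dump; everything else is trimmed):

// dockerinfo.go - probe the local Docker daemon the way the log does:
// run `docker system info --format "{{json .}}"` and decode a few fields.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Only the fields this sketch cares about; the real document has many more.
type dockerInfo struct {
	ServerVersion string `json:"ServerVersion"`
	Driver        string `json:"Driver"`
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	CgroupDriver  string `json:"CgroupDriver"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		log.Fatalf("docker system info: %v", err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("server=%s storage=%s cpus=%d mem=%d cgroup=%s\n",
		info.ServerVersion, info.Driver, info.NCPU, info.MemTotal, info.CgroupDriver)
}

The "overlay module found" line that follows gates the overlay2 storage driver this info dump reports.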
	I0916 10:49:51.430117  104899 out.go:177] * Using the docker driver based on existing profile
	I0916 10:49:51.431513  104899 start.go:297] selected driver: docker
	I0916 10:49:51.431527  104899 start.go:901] validating driver "docker" against &{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:51.431645  104899 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:49:51.431754  104899 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:49:51.477359  104899 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:49:51.468642387 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:49:51.478085  104899 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:49:51.478109  104899 cni.go:84] Creating CNI manager for ""
	I0916 10:49:51.478201  104899 cni.go:136] multinode detected (3 nodes found), recommending kindnet
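With no CNI named in the config (the empty string in `Creating CNI manager for ""`), the choice falls back on node count: three nodes were found, so kindnet is recommended. A toy restatement of that heuristic (the function is illustrative, not minikube's own):

package main

import "fmt"

// chooseCNI mirrors the heuristic visible in the log: an explicit choice wins;
// otherwise more than one node means a multinode cluster, so kindnet is recommended.
func chooseCNI(explicit string, nodeCount int) string {
	if explicit != "" {
		return explicit
	}
	if nodeCount > 1 {
		return "kindnet"
	}
	return "" // single node: the runtime's default networking suffices
}

func main() {
	fmt.Println(chooseCNI("", 3)) // prints "kindnet", as in the log
}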
	I0916 10:49:51.478277  104899 start.go:340] cluster config:
	{Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:51.480427  104899 out.go:177] * Starting "ha-770465" primary control-plane node in "ha-770465" cluster
	I0916 10:49:51.481858  104899 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:49:51.483325  104899 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:49:51.484739  104899 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:49:51.484775  104899 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:49:51.484791  104899 cache.go:56] Caching tarball of preloaded images
	I0916 10:49:51.484775  104899 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:49:51.484874  104899 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:49:51.484886  104899 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:49:51.485008  104899 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	W0916 10:49:51.504930  104899 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:49:51.504962  104899 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:49:51.505053  104899 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:49:51.505070  104899 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:49:51.505076  104899 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:49:51.505087  104899 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:49:51.505097  104899 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:49:51.506231  104899 image.go:273] response: 
	I0916 10:49:51.564270  104899 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:49:51.564307  104899 cache.go:194] Successfully downloaded all kic artifacts
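All of the cache steps above reduce to one question per artifact: is the expected file already on disk? A hedged sketch of the preload check, with the path layout copied from the log lines above (the helper name is invented):

// preloadPath builds the cache path seen in the log and main reports whether
// the preloaded-images tarball already exists, in which case download is skipped.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath("/home/jenkins/minikube-integration/19651-3687/.minikube", "v1.31.1", "containerd")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}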
	I0916 10:49:51.564344  104899 start.go:360] acquireMachinesLock for ha-770465: {Name:mk79463d2cf034afd16e2c9f41174a568f4314aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:49:51.564415  104899 start.go:364] duration metric: took 51.456µs to acquireMachinesLock for "ha-770465"
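acquireMachinesLock serializes machine operations on this host; the 51.456µs acquisition just means the lock was free. A rough illustration of a deadline-bounded lock with the same Delay/Timeout shape as the lock spec in the log (minikube uses a proper mutex library; this O_EXCL loop is only a sketch):

// acquireLock polls for an exclusive lock file until the deadline passes.
// O_CREATE|O_EXCL makes creation atomic: exactly one process wins each round.
package main

import (
	"fmt"
	"os"
	"time"
)

func acquireLock(path string, timeout, delay time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			return f.Close() // lock held; the holder removes the file to release it
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(delay)
	}
}

func main() {
	// The 500ms delay and 10m timeout mirror the Delay/Timeout fields in the log.
	if err := acquireLock("/tmp/minikube-machines.lock", 10*time.Minute, 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}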
	I0916 10:49:51.564434  104899 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:49:51.564438  104899 fix.go:54] fixHost starting: 
	I0916 10:49:51.564660  104899 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:49:51.581904  104899 fix.go:112] recreateIfNeeded on ha-770465: state=Stopped err=<nil>
	W0916 10:49:51.581936  104899 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:49:51.584248  104899 out.go:177] * Restarting existing docker container for "ha-770465" ...
	I0916 10:49:51.585690  104899 cli_runner.go:164] Run: docker start ha-770465
	I0916 10:49:51.849689  104899 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:49:51.868258  104899 kic.go:430] container "ha-770465" state is running.
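The restart itself is `docker start` followed by polling `docker container inspect --format={{.State.Status}}` until it reports "running", exactly the two commands above. A compact sketch of that loop (the 30s timeout is an assumption):

// waitRunning starts a container and polls its state the way the log does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitRunning(name string, timeout time.Duration) error {
	if err := exec.Command("docker", "start", name).Run(); err != nil {
		return fmt.Errorf("docker start %s: %w", name, err)
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err == nil && strings.TrimSpace(string(out)) == "running" {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("container %s not running after %s", name, timeout)
}

func main() {
	fmt.Println(waitRunning("ha-770465", 30*time.Second))
}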
	I0916 10:49:51.868722  104899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465
	I0916 10:49:51.888571  104899 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:49:51.888861  104899 machine.go:93] provisionDockerMachine start ...
	I0916 10:49:51.888934  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:49:51.907529  104899 main.go:141] libmachine: Using SSH client type: native
	I0916 10:49:51.907890  104899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0916 10:49:51.907914  104899 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:49:51.908598  104899 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39850->127.0.0.1:32833: read: connection reset by peer
	I0916 10:49:55.043190  104899 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465
	
	I0916 10:49:55.043220  104899 ubuntu.go:169] provisioning hostname "ha-770465"
	I0916 10:49:55.043289  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:49:55.061025  104899 main.go:141] libmachine: Using SSH client type: native
	I0916 10:49:55.061240  104899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0916 10:49:55.061259  104899 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-770465 && echo "ha-770465" | sudo tee /etc/hostname
	I0916 10:49:55.207033  104899 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465
	
	I0916 10:49:55.207113  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:49:55.225728  104899 main.go:141] libmachine: Using SSH client type: native
	I0916 10:49:55.225942  104899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0916 10:49:55.225965  104899 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-770465' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-770465/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-770465' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:49:55.360106  104899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:49:55.360131  104899 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:49:55.360155  104899 ubuntu.go:177] setting up certificates
	I0916 10:49:55.360173  104899 provision.go:84] configureAuth start
	I0916 10:49:55.360228  104899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465
	I0916 10:49:55.377980  104899 provision.go:143] copyHostCerts
	I0916 10:49:55.378016  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:49:55.378045  104899 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:49:55.378053  104899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:49:55.378116  104899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:49:55.378204  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:49:55.378222  104899 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:49:55.378228  104899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:49:55.378258  104899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:49:55.378314  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:49:55.378330  104899 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:49:55.378336  104899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:49:55.378372  104899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:49:55.378436  104899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.ha-770465 san=[127.0.0.1 192.168.49.2 ha-770465 localhost minikube]
	I0916 10:49:55.532902  104899 provision.go:177] copyRemoteCerts
	I0916 10:49:55.532972  104899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:49:55.533003  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:49:55.551784  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:49:55.648657  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:49:55.648844  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:49:55.674262  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:49:55.674330  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0916 10:49:55.697949  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:49:55.698024  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:49:55.722406  104899 provision.go:87] duration metric: took 362.217847ms to configureAuth
	I0916 10:49:55.722437  104899 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:49:55.722655  104899 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:49:55.722667  104899 machine.go:96] duration metric: took 3.8337907s to provisionDockerMachine
	I0916 10:49:55.722674  104899 start.go:293] postStartSetup for "ha-770465" (driver="docker")
	I0916 10:49:55.722682  104899 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:49:55.722725  104899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:49:55.722757  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:49:55.740607  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:49:55.836824  104899 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:49:55.840117  104899 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:49:55.840163  104899 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:49:55.840175  104899 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:49:55.840183  104899 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:49:55.840198  104899 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:49:55.840267  104899 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:49:55.840429  104899 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:49:55.840446  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:49:55.840566  104899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:49:55.848890  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:49:55.873145  104899 start.go:296] duration metric: took 150.437116ms for postStartSetup
	I0916 10:49:55.873246  104899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:49:55.873412  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:49:55.891341  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:49:55.984667  104899 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:49:55.989229  104899 fix.go:56] duration metric: took 4.424781943s for fixHost
	I0916 10:49:55.989256  104899 start.go:83] releasing machines lock for "ha-770465", held for 4.424829752s
	I0916 10:49:55.989324  104899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465
	I0916 10:49:56.006553  104899 ssh_runner.go:195] Run: cat /version.json
	I0916 10:49:56.006598  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:49:56.006645  104899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:49:56.006720  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:49:56.025165  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:49:56.025162  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:49:56.115473  104899 ssh_runner.go:195] Run: systemctl --version
	I0916 10:49:56.198282  104899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:49:56.202699  104899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:49:56.220396  104899 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:49:56.220468  104899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:49:56.228993  104899 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:49:56.229029  104899 start.go:495] detecting cgroup driver to use...
	I0916 10:49:56.229066  104899 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:49:56.229118  104899 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:49:56.242042  104899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:49:56.253155  104899 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:49:56.253210  104899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:49:56.265477  104899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:49:56.276516  104899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:49:56.356823  104899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:49:56.429473  104899 docker.go:233] disabling docker service ...
	I0916 10:49:56.429542  104899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:49:56.441049  104899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:49:56.451295  104899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:49:56.525265  104899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:49:56.600897  104899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:49:56.611446  104899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:49:56.626651  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:49:56.635856  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:49:56.645139  104899 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:49:56.645210  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:49:56.654711  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:49:56.664155  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:49:56.673760  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:49:56.683358  104899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:49:56.692172  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:49:56.701487  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:49:56.710586  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:49:56.719634  104899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:49:56.727331  104899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:49:56.735123  104899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:49:56.805617  104899 ssh_runner.go:195] Run: sudo systemctl restart containerd
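The run of sed commands above rewrites /etc/containerd/config.toml in place: pin sandbox_image to registry.k8s.io/pause:3.10, set SystemdCgroup = false to match the detected cgroupfs driver, normalize the runc runtime to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports, after which containerd is restarted. A sketch of two of those edits as in-memory regex rewrites, using patterns equivalent to the sed expressions (operating on a string rather than the live file):

// patchContainerdConfig applies two of the sed edits from the log to a
// config.toml read into memory: the pause image pin and SystemdCgroup = false.
package main

import (
	"fmt"
	"regexp"
)

func patchContainerdConfig(cfg string) string {
	sandbox := regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`)
	cfg = sandbox.ReplaceAllString(cfg, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`)
	systemd := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	cfg = systemd.ReplaceAllString(cfg, `${1}SystemdCgroup = false`)
	return cfg
}

func main() {
	in := "  sandbox_image = \"registry.k8s.io/pause:3.9\"\n  SystemdCgroup = true\n"
	fmt.Print(patchContainerdConfig(in))
}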
	I0916 10:49:56.916520  104899 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:49:56.916600  104899 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:49:56.920298  104899 start.go:563] Will wait 60s for crictl version
	I0916 10:49:56.920361  104899 ssh_runner.go:195] Run: which crictl
	I0916 10:49:56.923487  104899 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:49:56.956777  104899 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
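Both 60-second waits above are the same pattern: retry a cheap probe (a stat on the containerd socket, then `crictl version`) until it succeeds or the deadline passes. A generic sketch of that retry loop (the helper and its intervals are illustrative):

// pollUntil retries fn every interval until it succeeds or timeout expires,
// the shape of both 60s waits in the log (socket stat, then crictl version).
package main

import (
	"fmt"
	"os"
	"time"
)

func pollUntil(timeout, interval time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := pollUntil(60*time.Second, time.Second, func() error {
		_, statErr := os.Stat("/run/containerd/containerd.sock")
		return statErr
	})
	fmt.Println(err)
}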
	I0916 10:49:56.956830  104899 ssh_runner.go:195] Run: containerd --version
	I0916 10:49:56.979617  104899 ssh_runner.go:195] Run: containerd --version
	I0916 10:49:57.004550  104899 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:49:57.006522  104899 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:49:57.023550  104899 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:49:57.027150  104899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:49:57.038212  104899 kubeadm.go:883] updating cluster {Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:49:57.038378  104899 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:49:57.038424  104899 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:49:57.070842  104899 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:49:57.070870  104899 containerd.go:534] Images already preloaded, skipping extraction
	I0916 10:49:57.070927  104899 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:49:57.103678  104899 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:49:57.103701  104899 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:49:57.103708  104899 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0916 10:49:57.103835  104899 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-770465 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:49:57.103885  104899 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:49:57.137097  104899 cni.go:84] Creating CNI manager for ""
	I0916 10:49:57.137118  104899 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 10:49:57.137126  104899 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:49:57.137145  104899 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-770465 NodeName:ha-770465 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:49:57.137272  104899 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "ha-770465"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:49:57.137289  104899 kube-vip.go:115] generating kube-vip config ...
	I0916 10:49:57.137323  104899 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:49:57.148842  104899 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:49:57.148949  104899 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0916 10:49:57.148998  104899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:49:57.157438  104899 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:49:57.157512  104899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0916 10:49:57.165876  104899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0916 10:49:57.183043  104899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:49:57.199990  104899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0916 10:49:57.216658  104899 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:49:57.233389  104899 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:49:57.236798  104899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:49:57.247650  104899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:49:57.326895  104899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:49:57.339876  104899 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465 for IP: 192.168.49.2
	I0916 10:49:57.339896  104899 certs.go:194] generating shared ca certs ...
	I0916 10:49:57.339910  104899 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:57.340034  104899 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:49:57.340068  104899 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:49:57.340078  104899 certs.go:256] generating profile certs ...
	I0916 10:49:57.340152  104899 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key
	I0916 10:49:57.340175  104899 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.8d1b9566
	I0916 10:49:57.340194  104899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.8d1b9566 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0916 10:49:57.550656  104899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.8d1b9566 ...
	I0916 10:49:57.550688  104899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.8d1b9566: {Name:mk5cf4c273aab73833788c003eb0043520f5a8f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:57.550879  104899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.8d1b9566 ...
	I0916 10:49:57.550895  104899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.8d1b9566: {Name:mkc7eab916060067e98e921148fa213c1479eec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:57.550987  104899 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt.8d1b9566 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt
	I0916 10:49:57.551146  104899 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.8d1b9566 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key
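The regenerated apiserver certificate must carry a SAN for every address a client might dial: the in-cluster service VIP 10.96.0.1, loopback, both control-plane node IPs, and the HA VIP 192.168.49.254, which is exactly the IP list logged above. A self-contained sketch that builds a certificate with that SAN set (self-signed here for brevity; minikube signs with its minikubeCA key, and the subject fields below are assumptions):

// sancert.go - generate a certificate whose IP SANs match the list in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"}, // assumed subject
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN IPs copied from the crypto.go line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"), net.ParseIP("192.168.49.254"),
		},
	}
	// template == parent makes this self-signed; a real CA would pass its own cert and key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}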
	I0916 10:49:57.551283  104899 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key
	I0916 10:49:57.551299  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:49:57.551311  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:49:57.551325  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:49:57.551338  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:49:57.551350  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:49:57.551362  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:49:57.551380  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:49:57.551392  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:49:57.551440  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:49:57.551466  104899 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:49:57.551476  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:49:57.551499  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:49:57.551539  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:49:57.551565  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:49:57.551601  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:49:57.551627  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:49:57.551639  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:49:57.551652  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:57.552313  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:49:57.577591  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:49:57.622485  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:49:57.645743  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:49:57.669057  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 10:49:57.692524  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:49:57.715684  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:49:57.738646  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:49:57.762545  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:49:57.786332  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:49:57.808517  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:49:57.831076  104899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:49:57.848231  104899 ssh_runner.go:195] Run: openssl version
	I0916 10:49:57.853552  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:49:57.862499  104899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:49:57.866038  104899 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:49:57.866116  104899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:49:57.872709  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:49:57.881404  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:49:57.890595  104899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:57.893886  104899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:57.893938  104899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:49:57.900320  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:49:57.908865  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:49:57.917862  104899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:49:57.921241  104899 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:49:57.921403  104899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:49:57.927832  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
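The hash-and-link sequence above is how OpenSSL's CApath lookup works: each trusted certificate in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0, e.g. b5213941.0 for minikubeCA.pem. A sketch that computes the hash by shelling out to openssl, as the log does, and installs the `ln -fs`-style link:

// linkCertByHash replicates the pattern in the log: ask openssl for the
// subject hash of a cert, then symlink /etc/ssl/certs/<hash>.0 to it.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// ln -fs: drop any stale link first, then create the new one.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}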
	I0916 10:49:57.936297  104899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:49:57.939830  104899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:49:57.946174  104899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:49:57.952629  104899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:49:57.959088  104899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:49:57.965796  104899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:49:57.972762  104899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
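Each `-checkend 86400` run above asks whether the certificate expires within the next 24 hours: openssl exits 0 if the cert stays valid past that window and non-zero otherwise, so the exit status alone is the answer. A tiny sketch of reading it:

// certValidFor reports whether the certificate at path remains valid for the
// given window, using openssl's -checkend exit status (0 = still valid).
package main

import (
	"fmt"
	"os/exec"
	"strconv"
)

func certValidFor(path string, seconds int) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
		"-checkend", strconv.Itoa(seconds))
	return cmd.Run() == nil
}

func main() {
	fmt.Println(certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400))
}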
	I0916 10:49:57.979280  104899 kubeadm.go:392] StartCluster: {Name:ha-770465 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:49:57.979407  104899 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:49:57.979468  104899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:49:58.013470  104899 cri.go:89] found id: "e8544648700d8bec5fa7afbe56e4e3c11679a6a6e90fa30c48c96d191ac1cdf2"
	I0916 10:49:58.013488  104899 cri.go:89] found id: "946241353e03d16a05ed42c23006cfa465d022f50c7580d1bec22425ee59a4ac"
	I0916 10:49:58.013492  104899 cri.go:89] found id: "0ee20b8c8789adc13129c1dd9bbf0e03680faaa7a1039ad42d97dbdae47213fd"
	I0916 10:49:58.013495  104899 cri.go:89] found id: "81f453ca3f8d171840aacc686ad19952955400177043021ed6f8e79531037bec"
	I0916 10:49:58.013498  104899 cri.go:89] found id: "917ef16037c509fa5bcfbad0bd3aae289f62731b5435ad933b59b707dbe0320e"
	I0916 10:49:58.013501  104899 cri.go:89] found id: "de5c6dcf960e9561503f4b0b4b3900a6a55e051755584f47521977a698ad11bb"
	I0916 10:49:58.013503  104899 cri.go:89] found id: "bcd02f03466d85592977b36046584eb0eb24d4040a9a28d2400852992bb02a91"
	I0916 10:49:58.013506  104899 cri.go:89] found id: "4a562d336c1706e425c8ce858242155970a39095a512cf3b2064ce89d4f54369"
	I0916 10:49:58.013508  104899 cri.go:89] found id: "e87832bf428c0d5daf61e53f57c9813ace0d2d4a7ba9c30b2fee46730d2c6de1"
	I0916 10:49:58.013513  104899 cri.go:89] found id: "b715d9632d76bec5b9249626c0e047c8c7d8720a8f0f370d24d64c3acc85d01d"
	I0916 10:49:58.013515  104899 cri.go:89] found id: ""
	I0916 10:49:58.013561  104899 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0916 10:49:58.025304  104899 cri.go:116] JSON = null
	W0916 10:49:58.025362  104899 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 10
	I0916 10:49:58.025428  104899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:49:58.033640  104899 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:49:58.033661  104899 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:49:58.033716  104899 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:49:58.041810  104899 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:49:58.042199  104899 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-770465" does not appear in /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:49:58.042303  104899 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3687/kubeconfig needs updating (will repair): [kubeconfig missing "ha-770465" cluster setting kubeconfig missing "ha-770465" context setting]
	I0916 10:49:58.042598  104899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:58.042966  104899 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:49:58.043169  104899 kapi.go:59] client config for ha-770465: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:49:58.043555  104899 cert_rotation.go:140] Starting client certificate rotation controller
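
The rest.Config dump above is what minikube's kapi helper builds after repairing the kubeconfig (the struct prints the sanitized form of rest.TLSClientConfig). A minimal client-go sketch that constructs an equivalent client from the certificate paths shown in the dump; this is an illustration, not the kapi.go implementation:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://192.168.49.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                // Paths copied from the kapi.go dump above.
                CertFile: "/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key",
                CAFile:   "/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt",
            },
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }
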
	I0916 10:49:58.043753  104899 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:49:58.052448  104899 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.49.2
	I0916 10:49:58.052471  104899 kubeadm.go:597] duration metric: took 18.804922ms to restartPrimaryControlPlane
	I0916 10:49:58.052482  104899 kubeadm.go:394] duration metric: took 73.213084ms to StartCluster
	I0916 10:49:58.052502  104899 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:58.052581  104899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:49:58.053274  104899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:49:58.053493  104899 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:49:58.053517  104899 start.go:241] waiting for startup goroutines ...
	I0916 10:49:58.053524  104899 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:49:58.053791  104899 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:49:58.058024  104899 out.go:177] * Enabled addons: 
	I0916 10:49:58.059581  104899 addons.go:510] duration metric: took 6.050916ms for enable addons: enabled=[]
	I0916 10:49:58.059621  104899 start.go:246] waiting for cluster config update ...
	I0916 10:49:58.059629  104899 start.go:255] writing updated cluster config ...
	I0916 10:49:58.061618  104899 out.go:201] 
	I0916 10:49:58.063465  104899 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:49:58.063554  104899 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:49:58.065588  104899 out.go:177] * Starting "ha-770465-m02" control-plane node in "ha-770465" cluster
	I0916 10:49:58.066977  104899 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:49:58.068848  104899 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:49:58.070321  104899 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:49:58.070355  104899 cache.go:56] Caching tarball of preloaded images
	I0916 10:49:58.070422  104899 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:49:58.070446  104899 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:49:58.070456  104899 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:49:58.070562  104899 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	W0916 10:49:58.090164  104899 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:49:58.090184  104899 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:49:58.090251  104899 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:49:58.090264  104899 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:49:58.090269  104899 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:49:58.090276  104899 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:49:58.090282  104899 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:49:58.091315  104899 image.go:273] response: 
	I0916 10:49:58.143612  104899 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:49:58.143655  104899 cache.go:194] Successfully downloaded all kic artifacts
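
The image.go lines above show the fallback path: the copy of the kicbase image in the local docker daemon has the wrong architecture, so the cached tarball is loaded instead. A simplified sketch of that decision using `docker image inspect` (the image tag is abbreviated from the log, and the exact check minikube performs may differ):

    package main

    import (
        "fmt"
        "os/exec"
        "runtime"
        "strings"
    )

    func main() {
        img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644"
        out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Architecture}}", img).Output()
        if err != nil {
            fmt.Println("image not in local daemon, would fall back to the cache directory")
            return
        }
        arch := strings.TrimSpace(string(out))
        if arch != runtime.GOARCH {
            // Mirrors the W-level "wrong architecture" branch above.
            fmt.Printf("daemon copy is %s, host is %s: load from cached tarball instead\n", arch, runtime.GOARCH)
        } else {
            fmt.Println("daemon copy matches host architecture, usable as-is")
        }
    }
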
	I0916 10:49:58.143690  104899 start.go:360] acquireMachinesLock for ha-770465-m02: {Name:mk1ae0810eb0d80ca7ae9fe74f31de5324d2e214 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:49:58.143788  104899 start.go:364] duration metric: took 75.384µs to acquireMachinesLock for "ha-770465-m02"
	I0916 10:49:58.143810  104899 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:49:58.143816  104899 fix.go:54] fixHost starting: m02
	I0916 10:49:58.144068  104899 cli_runner.go:164] Run: docker container inspect ha-770465-m02 --format={{.State.Status}}
	I0916 10:49:58.161306  104899 fix.go:112] recreateIfNeeded on ha-770465-m02: state=Stopped err=<nil>
	W0916 10:49:58.161332  104899 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:49:58.164806  104899 out.go:177] * Restarting existing docker container for "ha-770465-m02" ...
	I0916 10:49:58.166419  104899 cli_runner.go:164] Run: docker start ha-770465-m02
	I0916 10:49:58.442066  104899 cli_runner.go:164] Run: docker container inspect ha-770465-m02 --format={{.State.Status}}
	I0916 10:49:58.463247  104899 kic.go:430] container "ha-770465-m02" state is running.
	I0916 10:49:58.463594  104899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m02
	I0916 10:49:58.484671  104899 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:49:58.484925  104899 machine.go:93] provisionDockerMachine start ...
	I0916 10:49:58.484998  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:49:58.504348  104899 main.go:141] libmachine: Using SSH client type: native
	I0916 10:49:58.504558  104899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0916 10:49:58.504580  104899 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:49:58.505283  104899 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59412->127.0.0.1:32838: read: connection reset by peer
	I0916 10:50:01.699244  104899 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m02
	
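
The first SSH dial above is reset because sshd inside the just-restarted m02 container is not up yet; libmachine retries until the `hostname` command succeeds about three seconds later. A rough retry-dial sketch with golang.org/x/crypto/ssh, reusing the port and key path from the log (the dialWithRetry helper name is hypothetical):

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps dialing until sshd accepts the handshake or the
    // deadline passes (the first attempt in the log was connection-reset).
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, deadline time.Duration) (*ssh.Client, error) {
        var lastErr error
        for end := time.Now().Add(deadline); time.Now().Before(end); {
            c, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                return c, nil
            }
            lastErr = err
            time.Sleep(time.Second)
        }
        return nil, lastErr
    }

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node only
            Timeout:         5 * time.Second,
        }
        client, err := dialWithRetry("127.0.0.1:32838", cfg, time.Minute)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.Output("hostname") // same first command the provisioner runs
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }
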
	I0916 10:50:01.699268  104899 ubuntu.go:169] provisioning hostname "ha-770465-m02"
	I0916 10:50:01.699331  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:50:01.716929  104899 main.go:141] libmachine: Using SSH client type: native
	I0916 10:50:01.717101  104899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0916 10:50:01.717113  104899 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-770465-m02 && echo "ha-770465-m02" | sudo tee /etc/hostname
	I0916 10:50:01.923214  104899 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m02
	
	I0916 10:50:01.923291  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:50:01.940655  104899 main.go:141] libmachine: Using SSH client type: native
	I0916 10:50:01.940817  104899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0916 10:50:01.940832  104899 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-770465-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-770465-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-770465-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:50:02.071789  104899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
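
The SSH script above idempotently maps 127.0.1.1 to the new hostname: rewrite the existing entry if one exists, append otherwise. A simplified Go rendering of that rewrite-or-append step (it skips the script's initial "name already present anywhere" grep and prints the result instead of writing /etc/hosts):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostname rewrites an existing 127.0.1.1 line or appends one,
    // mirroring the sed/tee branches of the provisioning script above.
    func ensureHostname(hosts, name string) string {
        lines := strings.Split(hosts, "\n")
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = "127.0.1.1 " + name
                return strings.Join(lines, "\n")
            }
        }
        return hosts + "\n127.0.1.1 " + name
    }

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        fmt.Println(ensureHostname(strings.TrimRight(string(data), "\n"), "ha-770465-m02"))
    }
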
	I0916 10:50:02.071821  104899 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:50:02.071841  104899 ubuntu.go:177] setting up certificates
	I0916 10:50:02.071852  104899 provision.go:84] configureAuth start
	I0916 10:50:02.071904  104899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m02
	I0916 10:50:02.089071  104899 provision.go:143] copyHostCerts
	I0916 10:50:02.089118  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:50:02.089176  104899 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:50:02.089188  104899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:50:02.089266  104899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:50:02.089383  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:50:02.089416  104899 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:50:02.089424  104899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:50:02.089463  104899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:50:02.089540  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:50:02.089564  104899 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:50:02.089573  104899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:50:02.089614  104899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:50:02.089698  104899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.ha-770465-m02 san=[127.0.0.1 192.168.49.3 ha-770465-m02 localhost minikube]
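
provision.go:117 above generates a server certificate whose SAN list ([127.0.0.1 192.168.49.3 ha-770465-m02 localhost minikube]) is printed in the log. A compressed crypto/x509 sketch of the same idea, with a throwaway self-signed CA standing in for minikube's ca.pem/ca-key.pem (error handling elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA stand-in; minikube reuses its existing CA key pair.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert carrying the same SANs the provisioner requested.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-770465-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            DNSNames:     []string{"ha-770465-m02", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
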
	I0916 10:50:02.210398  104899 provision.go:177] copyRemoteCerts
	I0916 10:50:02.210455  104899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:50:02.210496  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:50:02.227231  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:50:02.320156  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:50:02.320234  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:50:02.341770  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:50:02.341851  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:50:02.366000  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:50:02.366065  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:50:02.387869  104899 provision.go:87] duration metric: took 316.004593ms to configureAuth
	I0916 10:50:02.387894  104899 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:50:02.388080  104899 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:50:02.388090  104899 machine.go:96] duration metric: took 3.90315102s to provisionDockerMachine
	I0916 10:50:02.388098  104899 start.go:293] postStartSetup for "ha-770465-m02" (driver="docker")
	I0916 10:50:02.388106  104899 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:50:02.388152  104899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:50:02.388185  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:50:02.405702  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:50:02.501006  104899 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:50:02.504234  104899 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:50:02.504277  104899 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:50:02.504297  104899 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:50:02.504305  104899 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:50:02.504318  104899 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:50:02.504380  104899 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:50:02.504470  104899 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:50:02.504481  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:50:02.504601  104899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:50:02.512841  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:50:02.535105  104899 start.go:296] duration metric: took 146.993681ms for postStartSetup
	I0916 10:50:02.535183  104899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:50:02.535226  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:50:02.551966  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:50:02.644950  104899 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:50:02.650523  104899 fix.go:56] duration metric: took 4.506702646s for fixHost
	I0916 10:50:02.650549  104899 start.go:83] releasing machines lock for "ha-770465-m02", held for 4.506747601s
	I0916 10:50:02.650613  104899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m02
	I0916 10:50:02.670371  104899 out.go:177] * Found network options:
	I0916 10:50:02.671897  104899 out.go:177]   - NO_PROXY=192.168.49.2
	W0916 10:50:02.673142  104899 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:50:02.673187  104899 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:50:02.673268  104899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:50:02.673317  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:50:02.673365  104899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:50:02.673429  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m02
	I0916 10:50:02.691503  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:50:02.692302  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m02/id_rsa Username:docker}
	I0916 10:50:02.857510  104899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:50:02.875527  104899 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:50:02.875605  104899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:50:02.883911  104899 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:50:02.883930  104899 start.go:495] detecting cgroup driver to use...
	I0916 10:50:02.883959  104899 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:50:02.883994  104899 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:50:02.894817  104899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:50:02.905230  104899 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:50:02.905277  104899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:50:02.917564  104899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:50:02.928231  104899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:50:03.013534  104899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:50:03.104428  104899 docker.go:233] disabling docker service ...
	I0916 10:50:03.104487  104899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:50:03.116012  104899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:50:03.126401  104899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:50:03.216265  104899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:50:03.308516  104899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:50:03.319497  104899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:50:03.334387  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:50:03.343826  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:50:03.353167  104899 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:50:03.353221  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:50:03.362509  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:50:03.372048  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:50:03.381392  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:50:03.390724  104899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:50:03.399640  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:50:03.409341  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:50:03.419214  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:50:03.428878  104899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:50:03.437087  104899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:50:03.445161  104899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:50:03.535529  104899 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:50:03.780615  104899 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:50:03.780690  104899 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:50:03.784300  104899 start.go:563] Will wait 60s for crictl version
	I0916 10:50:03.784365  104899 ssh_runner.go:195] Run: which crictl
	I0916 10:50:03.787726  104899 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:50:03.818059  104899 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
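
After the sed one-liners rewrite /etc/containerd/config.toml (cgroupfs SystemdCgroup, pause:3.10 sandbox image, runc.v2, conf_dir) and the unit restarts, the log waits up to 60s each for the containerd socket and for crictl to answer. A small sketch of the socket wait; waitForSocket is a hypothetical helper:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSocket polls a unix socket until it accepts connections or
    // the deadline passes, mirroring the 60s waits in the log.
    func waitForSocket(path string, deadline time.Duration) error {
        end := time.Now().Add(deadline)
        for time.Now().Before(end) {
            c, err := net.DialTimeout("unix", path, time.Second)
            if err == nil {
                c.Close()
                return nil
            }
            time.Sleep(250 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("containerd socket is up")
    }
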
	I0916 10:50:03.818120  104899 ssh_runner.go:195] Run: containerd --version
	I0916 10:50:03.843824  104899 ssh_runner.go:195] Run: containerd --version
	I0916 10:50:03.879245  104899 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:50:03.880758  104899 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:50:03.882052  104899 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:50:03.900639  104899 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:50:03.904352  104899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:50:03.914630  104899 mustload.go:65] Loading cluster: ha-770465
	I0916 10:50:03.914829  104899 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:50:03.915029  104899 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:50:03.941706  104899 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:50:03.942025  104899 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465 for IP: 192.168.49.3
	I0916 10:50:03.942038  104899 certs.go:194] generating shared ca certs ...
	I0916 10:50:03.942057  104899 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:50:03.942189  104899 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:50:03.942237  104899 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:50:03.942246  104899 certs.go:256] generating profile certs ...
	I0916 10:50:03.942356  104899 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key
	I0916 10:50:03.942443  104899 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key.64a8388d
	I0916 10:50:03.942493  104899 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key
	I0916 10:50:03.942504  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:50:03.942523  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:50:03.942541  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:50:03.942558  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:50:03.942574  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:50:03.942600  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:50:03.942619  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:50:03.942638  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:50:03.942718  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:50:03.942760  104899 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:50:03.942769  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:50:03.942801  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:50:03.942836  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:50:03.942879  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:50:03.943033  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:50:03.943097  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:50:03.943119  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:50:03.943139  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:50:03.943219  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:50:03.963054  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32833 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:50:04.176090  104899 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0916 10:50:04.180712  104899 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0916 10:50:04.224034  104899 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0916 10:50:04.228938  104899 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0916 10:50:04.245352  104899 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0916 10:50:04.250131  104899 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0916 10:50:04.338941  104899 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0916 10:50:04.345835  104899 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0916 10:50:04.423828  104899 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0916 10:50:04.429110  104899 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0916 10:50:04.445699  104899 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0916 10:50:04.451239  104899 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0916 10:50:04.535462  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:50:04.560239  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:50:04.584109  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:50:04.605842  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:50:04.630489  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 10:50:04.663317  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:50:04.686845  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:50:04.709610  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 10:50:04.745177  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:50:04.772020  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:50:04.795162  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:50:04.817947  104899 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0916 10:50:04.834684  104899 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0916 10:50:04.851931  104899 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0916 10:50:04.873366  104899 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0916 10:50:04.890927  104899 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0916 10:50:04.908055  104899 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0916 10:50:04.924929  104899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0916 10:50:04.942223  104899 ssh_runner.go:195] Run: openssl version
	I0916 10:50:04.947593  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:50:04.957226  104899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:50:04.960871  104899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:50:04.960939  104899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:50:04.967557  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:50:04.976565  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:50:04.985805  104899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:50:04.989177  104899 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:50:04.989234  104899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:50:04.995826  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 10:50:05.004992  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:50:05.014943  104899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:50:05.018506  104899 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:50:05.018567  104899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:50:05.025176  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:50:05.034066  104899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:50:05.037812  104899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:50:05.044601  104899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:50:05.051574  104899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:50:05.058448  104899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:50:05.066378  104899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:50:05.073545  104899 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 10:50:05.080791  104899 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.31.1 containerd true true} ...
	I0916 10:50:05.080910  104899 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-770465-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:50:05.080944  104899 kube-vip.go:115] generating kube-vip config ...
	I0916 10:50:05.080990  104899 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0916 10:50:05.092957  104899 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:50:05.093026  104899 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
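
The ip_vs probe above (kube-vip.go:115-163) shells out to lsmod, which just reads /proc/modules; because the module is absent, the manifest it emits relies on ARP-based VIP failover (vip_arp) without IPVS load-balancing. An equivalent check in Go, with hasModule as a hypothetical helper (note that lsmod | grep would also match ip_vs_* submodules, which this prefix test does not):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // hasModule reports whether a kernel module is loaded by scanning
    // /proc/modules, the same data `lsmod | grep ip_vs` consults.
    func hasModule(name string) (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        s := bufio.NewScanner(f)
        for s.Scan() {
            if strings.HasPrefix(s.Text(), name+" ") {
                return true, nil
            }
        }
        return false, s.Err()
    }

    func main() {
        ok, err := hasModule("ip_vs")
        if err != nil {
            panic(err)
        }
        fmt.Println("ip_vs loaded:", ok) // false here, so only ARP failover is configured
    }
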
	I0916 10:50:05.093082  104899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:50:05.101391  104899 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:50:05.101462  104899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0916 10:50:05.109699  104899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:50:05.126895  104899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:50:05.145403  104899 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0916 10:50:05.176431  104899 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:50:05.182634  104899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:50:05.194127  104899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:50:05.344691  104899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:50:05.358389  104899 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:50:05.358876  104899 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:50:05.360866  104899 out.go:177] * Verifying Kubernetes components...
	I0916 10:50:05.362376  104899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:50:05.540010  104899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:50:05.557776  104899 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:50:05.558101  104899 kapi.go:59] client config for ha-770465: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:50:05.558187  104899 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 10:50:05.558442  104899 node_ready.go:35] waiting up to 6m0s for node "ha-770465-m02" to be "Ready" ...
	I0916 10:50:05.558548  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:05.558561  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:05.558573  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:05.558584  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:06.765321  104899 round_trippers.go:574] Response Status: 200 OK in 1206 milliseconds
	I0916 10:50:06.766298  104899 node_ready.go:49] node "ha-770465-m02" has status "Ready":"True"
	I0916 10:50:06.766331  104899 node_ready.go:38] duration metric: took 1.207865737s for node "ha-770465-m02" to be "Ready" ...
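
node_ready.go above polls GET /api/v1/nodes/ha-770465-m02 until the node's Ready condition is True, capped at 6m. A client-go sketch of the same loop, assuming the kubeconfig path from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3687/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, "ha-770465-m02", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat API errors as transient and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println(`node "ha-770465-m02" is Ready`)
    }
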
	I0916 10:50:06.766345  104899 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:50:06.766501  104899 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:50:06.766564  104899 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:50:06.766654  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:50:06.766668  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:06.766679  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:06.766686  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:06.831549  104899 round_trippers.go:574] Response Status: 200 OK in 64 milliseconds
	I0916 10:50:06.842687  104899 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:06.842809  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:06.842825  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:06.842836  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:06.842848  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:06.845551  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:06.846150  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:06.846166  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:06.846175  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:06.846181  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:06.848448  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:06.848836  104899 pod_ready.go:93] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:06.848853  104899 pod_ready.go:82] duration metric: took 6.134332ms for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:06.848862  104899 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:06.848914  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:50:06.848921  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:06.848928  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:06.848932  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:06.851385  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:06.852211  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:06.852233  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:06.852245  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:06.852249  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:06.855335  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:06.855982  104899 pod_ready.go:93] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:06.856003  104899 pod_ready.go:82] duration metric: took 7.13522ms for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:06.856017  104899 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:06.856093  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465
	I0916 10:50:06.856105  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:06.856117  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:06.856125  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:06.858634  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:06.859199  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:06.859217  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:06.859230  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:06.859236  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:06.861529  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:06.861941  104899 pod_ready.go:93] pod "etcd-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:06.861957  104899 pod_ready.go:82] duration metric: took 5.933335ms for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:06.861966  104899 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:06.862023  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:50:06.862032  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:06.862042  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:06.862049  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:06.864330  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:06.864874  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:06.864889  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:06.864897  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:06.864901  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:06.866977  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:06.867368  104899 pod_ready.go:93] pod "etcd-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:06.867387  104899 pod_ready.go:82] duration metric: took 5.415406ms for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:06.867397  104899 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:06.867454  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:50:06.867461  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:06.867468  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:06.867475  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:06.921728  104899 round_trippers.go:574] Response Status: 200 OK in 54 milliseconds
	I0916 10:50:06.967716  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:50:06.967774  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:06.967786  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:06.967794  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:06.970744  104899 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 10:50:06.970970  104899 pod_ready.go:98] node "ha-770465-m03" hosting pod "etcd-ha-770465-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-770465-m03": nodes "ha-770465-m03" not found
	I0916 10:50:06.970987  104899 pod_ready.go:82] duration metric: took 103.582415ms for pod "etcd-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:50:06.970996  104899 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-770465-m03" hosting pod "etcd-ha-770465-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-770465-m03": nodes "ha-770465-m03" not found
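The cycle above repeats for every control-plane pod: GET the pod, GET the node named in its spec, and treat a 404 on the node (as for the deleted ha-770465-m03) as "skip" rather than failure. A minimal client-go sketch of that gate, assuming a reachable cluster and a hypothetical helper name, not minikube's actual pod_ready.go:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReadyOnReadyNode mirrors the loop above: a pod only counts as Ready if
// its hosting node exists and is Ready; a missing node (404) means "skip".
func podReadyOnReadyNode(ctx context.Context, cs kubernetes.Interface, ns, pod string) (bool, error) {
	p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	n, err := cs.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// matches the `nodes "ha-770465-m03" not found ... (skipping!)` lines
		return false, fmt.Errorf("node %q not found, skipping", p.Spec.NodeName)
	}
	if err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			return false, fmt.Errorf("node %q not Ready", n.Name)
		}
	}
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := podReadyOnReadyNode(context.Background(), cs, "kube-system", "etcd-ha-770465")
	fmt.Println(ok, err)
}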
	I0916 10:50:06.971015  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:07.167500  104899 request.go:632] Waited for 196.387748ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465
	I0916 10:50:07.167553  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465
	I0916 10:50:07.167558  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:07.167565  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:07.167570  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:07.170786  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:07.366683  104899 request.go:632] Waited for 195.270095ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:07.366738  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:07.366745  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:07.366755  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:07.366761  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:07.369715  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:07.370152  104899 pod_ready.go:93] pod "kube-apiserver-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:07.370173  104899 pod_ready.go:82] duration metric: took 399.145867ms for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
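The ~196ms pauses reported by request.go:632 are client-go's client-side token-bucket throttle, explicitly not API priority and fairness; at the default rest.Config settings (QPS=5, Burst=10) requests are spaced roughly 200ms apart once the burst is spent, which is consistent with the waits above. A hedged, self-contained illustration:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// client-go defaults: QPS=5, Burst=10. Each Accept() past the burst
	// blocks ~200ms, matching the "Waited for ~196ms" lines in the log.
	limiter := flowcontrol.NewTokenBucketRateLimiter(5, 10)
	start := time.Now()
	for i := 0; i < 15; i++ {
		limiter.Accept()
	}
	fmt.Println("15 requests took", time.Since(start)) // ~1s: 10 free, 5 throttled
}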
	I0916 10:50:07.370186  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:07.567238  104899 request.go:632] Waited for 196.972137ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m02
	I0916 10:50:07.567338  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m02
	I0916 10:50:07.567346  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:07.567366  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:07.567371  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:07.571990  104899 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:50:07.766932  104899 request.go:632] Waited for 194.187017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:07.767007  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:07.767022  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:07.767032  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:07.767038  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:07.773536  104899 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:50:07.773938  104899 pod_ready.go:93] pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:07.773955  104899 pod_ready.go:82] duration metric: took 403.758926ms for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:07.773965  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:07.967074  104899 request.go:632] Waited for 193.012074ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:50:07.967128  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:50:07.967134  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:07.967144  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:07.967150  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:07.969709  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:08.167301  104899 request.go:632] Waited for 196.809079ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:50:08.167380  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:50:08.167386  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:08.167394  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:08.167397  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:08.170307  104899 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 10:50:08.170435  104899 pod_ready.go:98] node "ha-770465-m03" hosting pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-770465-m03": nodes "ha-770465-m03" not found
	I0916 10:50:08.170458  104899 pod_ready.go:82] duration metric: took 396.48636ms for pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:50:08.170470  104899 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-770465-m03" hosting pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-770465-m03": nodes "ha-770465-m03" not found
	I0916 10:50:08.170479  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:08.367720  104899 request.go:632] Waited for 197.147139ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465
	I0916 10:50:08.367840  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465
	I0916 10:50:08.367852  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:08.367864  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:08.367876  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:08.370942  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:08.566760  104899 request.go:632] Waited for 195.27208ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:08.566842  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:08.566854  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:08.566861  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:08.566865  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:08.569646  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:08.570069  104899 pod_ready.go:98] node "ha-770465" hosting pod "kube-controller-manager-ha-770465" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-770465" has status "Ready":"False"
	I0916 10:50:08.570090  104899 pod_ready.go:82] duration metric: took 399.601135ms for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	E0916 10:50:08.570102  104899 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-770465" hosting pod "kube-controller-manager-ha-770465" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-770465" has status "Ready":"False"
	I0916 10:50:08.570111  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:08.767078  104899 request.go:632] Waited for 196.904114ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m02
	I0916 10:50:08.767163  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m02
	I0916 10:50:08.767172  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:08.767180  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:08.767186  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:08.770632  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:08.967668  104899 request.go:632] Waited for 196.358337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:08.967730  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:08.967745  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:08.967754  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:08.967763  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:08.970811  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:08.971301  104899 pod_ready.go:93] pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:08.971321  104899 pod_ready.go:82] duration metric: took 401.203189ms for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:08.971332  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.167311  104899 request.go:632] Waited for 195.881699ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m03
	I0916 10:50:09.167378  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m03
	I0916 10:50:09.167387  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:09.167400  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:09.167409  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:09.170872  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:09.366780  104899 request.go:632] Waited for 195.272333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:50:09.366862  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:50:09.366872  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:09.366879  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:09.366885  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:09.369598  104899 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 10:50:09.369734  104899 pod_ready.go:98] node "ha-770465-m03" hosting pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-770465-m03": nodes "ha-770465-m03" not found
	I0916 10:50:09.369754  104899 pod_ready.go:82] duration metric: took 398.414768ms for pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:50:09.369769  104899 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-770465-m03" hosting pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-770465-m03": nodes "ha-770465-m03" not found
	I0916 10:50:09.369781  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.566995  104899 request.go:632] Waited for 197.139068ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:50:09.567094  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:50:09.567106  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:09.567116  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:09.567125  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:09.570245  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:09.767247  104899 request.go:632] Waited for 196.334817ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:09.767306  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:09.767311  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:09.767317  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:09.767321  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:09.770305  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:09.770810  104899 pod_ready.go:93] pod "kube-proxy-4qgcs" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:09.770830  104899 pod_ready.go:82] duration metric: took 401.035307ms for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.770843  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-78l2l" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:09.966987  104899 request.go:632] Waited for 196.050434ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-78l2l
	I0916 10:50:09.967040  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-78l2l
	I0916 10:50:09.967046  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:09.967053  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:09.967056  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:09.969832  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:10.166823  104899 request.go:632] Waited for 196.282821ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:50:10.166893  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:50:10.166900  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:10.166911  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:10.166916  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:10.169633  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:10.170118  104899 pod_ready.go:93] pod "kube-proxy-78l2l" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:10.170143  104899 pod_ready.go:82] duration metric: took 399.292348ms for pod "kube-proxy-78l2l" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:10.170156  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:10.367134  104899 request.go:632] Waited for 196.901839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:50:10.367208  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:50:10.367213  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:10.367226  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:10.367231  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:10.370261  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:10.567275  104899 request.go:632] Waited for 196.345185ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:10.567332  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:10.567338  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:10.567345  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:10.567354  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:10.570218  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:10.570791  104899 pod_ready.go:98] node "ha-770465" hosting pod "kube-proxy-gd2mt" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-770465" has status "Ready":"False"
	I0916 10:50:10.570819  104899 pod_ready.go:82] duration metric: took 400.654519ms for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	E0916 10:50:10.570833  104899 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-770465" hosting pod "kube-proxy-gd2mt" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-770465" has status "Ready":"False"
	I0916 10:50:10.570842  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qlspc" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:10.767747  104899 request.go:632] Waited for 196.80934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlspc
	I0916 10:50:10.767820  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlspc
	I0916 10:50:10.767827  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:10.767836  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:10.767843  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:10.770790  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:10.966710  104899 request.go:632] Waited for 195.282879ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:50:10.966783  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:50:10.966792  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:10.966799  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:10.966805  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:10.970304  104899 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0916 10:50:10.970457  104899 pod_ready.go:98] node "ha-770465-m03" hosting pod "kube-proxy-qlspc" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-770465-m03": nodes "ha-770465-m03" not found
	I0916 10:50:10.970475  104899 pod_ready.go:82] duration metric: took 399.623072ms for pod "kube-proxy-qlspc" in "kube-system" namespace to be "Ready" ...
	E0916 10:50:10.970491  104899 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-770465-m03" hosting pod "kube-proxy-qlspc" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-770465-m03": nodes "ha-770465-m03" not found
	I0916 10:50:10.970504  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:11.166676  104899 request.go:632] Waited for 196.091416ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:50:11.166734  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:50:11.166739  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:11.166746  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:11.166752  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:11.169979  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:11.366940  104899 request.go:632] Waited for 196.440545ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:11.367031  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:11.367038  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:11.367048  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:11.367059  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:11.370127  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:11.370532  104899 pod_ready.go:98] node "ha-770465" hosting pod "kube-scheduler-ha-770465" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-770465" has status "Ready":"False"
	I0916 10:50:11.370555  104899 pod_ready.go:82] duration metric: took 400.036895ms for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	E0916 10:50:11.370567  104899 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-770465" hosting pod "kube-scheduler-ha-770465" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-770465" has status "Ready":"False"
	I0916 10:50:11.370576  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:11.567689  104899 request.go:632] Waited for 197.031186ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m02
	I0916 10:50:11.567808  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m02
	I0916 10:50:11.567820  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:11.567831  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:11.567840  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:11.570746  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:11.767634  104899 request.go:632] Waited for 196.353582ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:11.767703  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:11.767715  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:11.767727  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:11.767762  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:11.770626  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:11.771089  104899 pod_ready.go:93] pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:11.771108  104899 pod_ready.go:82] duration metric: took 400.521924ms for pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:11.771118  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:11.967184  104899 request.go:632] Waited for 196.002792ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m03
	I0916 10:50:11.967271  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m03
	I0916 10:50:11.967280  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:11.967288  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:11.967292  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:11.970475  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:12.167476  104899 request.go:632] Waited for 196.408046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:50:12.167560  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m03
	I0916 10:50:12.167567  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:12.167575  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:12.167580  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:12.170825  104899 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0916 10:50:12.170981  104899 pod_ready.go:98] node "ha-770465-m03" hosting pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-770465-m03": nodes "ha-770465-m03" not found
	I0916 10:50:12.171003  104899 pod_ready.go:82] duration metric: took 399.877977ms for pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:50:12.171016  104899 pod_ready.go:67] WaitExtra: waitPodCondition: node "ha-770465-m03" hosting pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-770465-m03": nodes "ha-770465-m03" not found
	I0916 10:50:12.171031  104899 pod_ready.go:39] duration metric: took 5.404599059s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:50:12.171056  104899 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:50:12.171105  104899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:50:12.185290  104899 api_server.go:72] duration metric: took 6.826846342s to wait for apiserver process to appear ...
	I0916 10:50:12.185319  104899 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:50:12.185344  104899 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0916 10:50:12.194320  104899 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0916 10:50:12.194409  104899 round_trippers.go:463] GET https://192.168.49.2:8443/version
	I0916 10:50:12.194421  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:12.194433  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:12.194440  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:12.195233  104899 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:50:12.195346  104899 api_server.go:141] control plane version: v1.31.1
	I0916 10:50:12.195367  104899 api_server.go:131] duration metric: took 10.041372ms to wait for apiserver health ...
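The healthz gate at api_server.go:253 is an HTTPS GET that must return 200 with body "ok" before the version check proceeds. A self-contained sketch of that probe; InsecureSkipVerify here only stands in for the cluster client certificates the real check uses:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func apiserverHealthy(base string) (bool, error) {
	client := &http.Client{Transport: &http.Transport{
		// illustration only; minikube authenticates with the cluster's certs
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// healthy means HTTP 200 and the literal body "ok", as logged above
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.49.2:8443")
	fmt.Println(ok, err)
}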
	I0916 10:50:12.195380  104899 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:50:12.366748  104899 request.go:632] Waited for 171.286786ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:50:12.366846  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:50:12.366857  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:12.366868  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:12.366877  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:12.372155  104899 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:50:12.379242  104899 system_pods.go:59] 26 kube-system pods found
	I0916 10:50:12.379290  104899 system_pods.go:61] "coredns-7c65d6cfc9-9lw9q" [4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8] Running
	I0916 10:50:12.379295  104899 system_pods.go:61] "coredns-7c65d6cfc9-sbs22" [89925692-76b4-481f-bac7-16f06bea792a] Running
	I0916 10:50:12.379299  104899 system_pods.go:61] "etcd-ha-770465" [041e8a27-b6a3-4201-990f-c367da8c5eb5] Running
	I0916 10:50:12.379303  104899 system_pods.go:61] "etcd-ha-770465-m02" [1f922c60-bf68-44df-aa8c-42dce50e4dac] Running
	I0916 10:50:12.379307  104899 system_pods.go:61] "etcd-ha-770465-m03" [876ca26f-ca18-47df-9bfb-302321d66136] Running
	I0916 10:50:12.379310  104899 system_pods.go:61] "kindnet-66kfj" [9cd2abf1-a30e-40f4-af5c-59d50e367b36] Running
	I0916 10:50:12.379313  104899 system_pods.go:61] "kindnet-bflwn" [59d75712-5683-4b1c-a6ef-2a669d75da7a] Running
	I0916 10:50:12.379317  104899 system_pods.go:61] "kindnet-grjh8" [98658171-1486-44a9-8a20-0d77ea019206] Running
	I0916 10:50:12.379321  104899 system_pods.go:61] "kindnet-kht59" [3124d723-6fc6-4dce-9ecf-bb40b4665208] Running
	I0916 10:50:12.379324  104899 system_pods.go:61] "kube-apiserver-ha-770465" [aa8ba784-661f-4074-b0d3-f98cddf8ce8c] Running
	I0916 10:50:12.379327  104899 system_pods.go:61] "kube-apiserver-ha-770465-m02" [7acdd845-890e-449a-b112-2efbc19a7ef5] Running
	I0916 10:50:12.379330  104899 system_pods.go:61] "kube-apiserver-ha-770465-m03" [8ab3c55a-d726-456c-803b-e13c1032b13a] Running
	I0916 10:50:12.379333  104899 system_pods.go:61] "kube-controller-manager-ha-770465" [f0a59ee1-bf5e-432d-a185-f9497c75ada4] Running
	I0916 10:50:12.379336  104899 system_pods.go:61] "kube-controller-manager-ha-770465-m02" [55d7421b-5679-416d-a6ae-c36738c1ec32] Running
	I0916 10:50:12.379339  104899 system_pods.go:61] "kube-controller-manager-ha-770465-m03" [37613674-2ba4-4425-824a-d00d16ea7b62] Running
	I0916 10:50:12.379342  104899 system_pods.go:61] "kube-proxy-4qgcs" [0dbed632-c44c-4a17-8486-b328e2cc41d3] Running
	I0916 10:50:12.379346  104899 system_pods.go:61] "kube-proxy-78l2l" [2b7f1ea3-9b2d-46d4-aa98-951e1c246baa] Running
	I0916 10:50:12.379349  104899 system_pods.go:61] "kube-proxy-gd2mt" [fc3bf04d-635a-4264-883b-2fd72cac2e24] Running
	I0916 10:50:12.379353  104899 system_pods.go:61] "kube-proxy-qlspc" [703b3f60-1294-49fd-aab1-35474b771351] Running
	I0916 10:50:12.379356  104899 system_pods.go:61] "kube-scheduler-ha-770465" [85f95d4d-a2d4-45cf-8afe-be446333b9d0] Running
	I0916 10:50:12.379366  104899 system_pods.go:61] "kube-scheduler-ha-770465-m02" [ec894fe5-a992-49f2-a96f-caab3848f7dd] Running
	I0916 10:50:12.379371  104899 system_pods.go:61] "kube-scheduler-ha-770465-m03" [0c6e312a-4eca-4ca3-b0d9-d70f7bf5087c] Running
	I0916 10:50:12.379375  104899 system_pods.go:61] "kube-vip-ha-770465" [bf294b8a-9d09-473e-964e-b776614e2969] Running
	I0916 10:50:12.379380  104899 system_pods.go:61] "kube-vip-ha-770465-m02" [ffd7ee7b-efe1-4d2d-b2f8-5fd898de7dbf] Running
	I0916 10:50:12.379383  104899 system_pods.go:61] "kube-vip-ha-770465-m03" [408226b8-b640-4b83-bd7e-a61b37724197] Running
	I0916 10:50:12.379389  104899 system_pods.go:61] "storage-provisioner" [cf470925-4874-4744-8015-700e93ab924f] Running
	I0916 10:50:12.379395  104899 system_pods.go:74] duration metric: took 184.007096ms to wait for pod list to return data ...
	I0916 10:50:12.379405  104899 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:50:12.566758  104899 request.go:632] Waited for 187.264475ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:50:12.566838  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:50:12.566858  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:12.566867  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:12.566880  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:12.621036  104899 round_trippers.go:574] Response Status: 200 OK in 54 milliseconds
	I0916 10:50:12.621405  104899 default_sa.go:45] found service account: "default"
	I0916 10:50:12.621427  104899 default_sa.go:55] duration metric: took 242.012815ms for default service account to be created ...
	I0916 10:50:12.621439  104899 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:50:12.766747  104899 request.go:632] Waited for 145.223423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:50:12.766828  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:50:12.766843  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:12.766854  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:12.766860  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:12.773934  104899 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0916 10:50:12.784450  104899 system_pods.go:86] 26 kube-system pods found
	I0916 10:50:12.784494  104899 system_pods.go:89] "coredns-7c65d6cfc9-9lw9q" [4d7cd19e-6f4d-44ce-a248-d0fdecdb9fe8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:50:12.784506  104899 system_pods.go:89] "coredns-7c65d6cfc9-sbs22" [89925692-76b4-481f-bac7-16f06bea792a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:50:12.784520  104899 system_pods.go:89] "etcd-ha-770465" [041e8a27-b6a3-4201-990f-c367da8c5eb5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 10:50:12.784528  104899 system_pods.go:89] "etcd-ha-770465-m02" [1f922c60-bf68-44df-aa8c-42dce50e4dac] Running
	I0916 10:50:12.784539  104899 system_pods.go:89] "etcd-ha-770465-m03" [876ca26f-ca18-47df-9bfb-302321d66136] Running
	I0916 10:50:12.784544  104899 system_pods.go:89] "kindnet-66kfj" [9cd2abf1-a30e-40f4-af5c-59d50e367b36] Running
	I0916 10:50:12.784553  104899 system_pods.go:89] "kindnet-bflwn" [59d75712-5683-4b1c-a6ef-2a669d75da7a] Running
	I0916 10:50:12.784571  104899 system_pods.go:89] "kindnet-grjh8" [98658171-1486-44a9-8a20-0d77ea019206] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0916 10:50:12.784580  104899 system_pods.go:89] "kindnet-kht59" [3124d723-6fc6-4dce-9ecf-bb40b4665208] Running
	I0916 10:50:12.784587  104899 system_pods.go:89] "kube-apiserver-ha-770465" [aa8ba784-661f-4074-b0d3-f98cddf8ce8c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 10:50:12.784596  104899 system_pods.go:89] "kube-apiserver-ha-770465-m02" [7acdd845-890e-449a-b112-2efbc19a7ef5] Running
	I0916 10:50:12.784602  104899 system_pods.go:89] "kube-apiserver-ha-770465-m03" [8ab3c55a-d726-456c-803b-e13c1032b13a] Running
	I0916 10:50:12.784652  104899 system_pods.go:89] "kube-controller-manager-ha-770465" [f0a59ee1-bf5e-432d-a185-f9497c75ada4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 10:50:12.784664  104899 system_pods.go:89] "kube-controller-manager-ha-770465-m02" [55d7421b-5679-416d-a6ae-c36738c1ec32] Running
	I0916 10:50:12.784669  104899 system_pods.go:89] "kube-controller-manager-ha-770465-m03" [37613674-2ba4-4425-824a-d00d16ea7b62] Running
	I0916 10:50:12.784673  104899 system_pods.go:89] "kube-proxy-4qgcs" [0dbed632-c44c-4a17-8486-b328e2cc41d3] Running
	I0916 10:50:12.784681  104899 system_pods.go:89] "kube-proxy-78l2l" [2b7f1ea3-9b2d-46d4-aa98-951e1c246baa] Running
	I0916 10:50:12.784692  104899 system_pods.go:89] "kube-proxy-gd2mt" [fc3bf04d-635a-4264-883b-2fd72cac2e24] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0916 10:50:12.784701  104899 system_pods.go:89] "kube-proxy-qlspc" [703b3f60-1294-49fd-aab1-35474b771351] Running
	I0916 10:50:12.784714  104899 system_pods.go:89] "kube-scheduler-ha-770465" [85f95d4d-a2d4-45cf-8afe-be446333b9d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0916 10:50:12.784723  104899 system_pods.go:89] "kube-scheduler-ha-770465-m02" [ec894fe5-a992-49f2-a96f-caab3848f7dd] Running
	I0916 10:50:12.784732  104899 system_pods.go:89] "kube-scheduler-ha-770465-m03" [0c6e312a-4eca-4ca3-b0d9-d70f7bf5087c] Running
	I0916 10:50:12.784740  104899 system_pods.go:89] "kube-vip-ha-770465" [bf294b8a-9d09-473e-964e-b776614e2969] Running
	I0916 10:50:12.784748  104899 system_pods.go:89] "kube-vip-ha-770465-m02" [ffd7ee7b-efe1-4d2d-b2f8-5fd898de7dbf] Running
	I0916 10:50:12.784753  104899 system_pods.go:89] "kube-vip-ha-770465-m03" [408226b8-b640-4b83-bd7e-a61b37724197] Running
	I0916 10:50:12.784761  104899 system_pods.go:89] "storage-provisioner" [cf470925-4874-4744-8015-700e93ab924f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0916 10:50:12.784768  104899 system_pods.go:126] duration metric: took 163.323302ms to wait for k8s-apps to be running ...
	I0916 10:50:12.784778  104899 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:50:12.784818  104899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:50:12.796129  104899 system_svc.go:56] duration metric: took 11.34162ms WaitForService to wait for kubelet
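The kubelet gate above needs no output parsing: systemctl is-active exits 0 exactly when the unit is active, so the command's error value is the whole answer. A local equivalent of the probe (the exact argument form in the log is minikube's own invocation over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// kubeletRunning returns true iff the kubelet unit is active; --quiet
// suppresses the textual state so only the exit code matters.
func kubeletRunning() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletRunning())
}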
	I0916 10:50:12.796173  104899 kubeadm.go:582] duration metric: took 7.437733457s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:50:12.796198  104899 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:50:12.967617  104899 request.go:632] Waited for 171.328946ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:50:12.967688  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:50:12.967695  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:12.967752  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:12.967762  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:13.026433  104899 round_trippers.go:574] Response Status: 200 OK in 58 milliseconds
	I0916 10:50:13.031232  104899 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:50:13.031291  104899 node_conditions.go:123] node cpu capacity is 8
	I0916 10:50:13.031307  104899 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:50:13.031312  104899 node_conditions.go:123] node cpu capacity is 8
	I0916 10:50:13.031318  104899 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:50:13.031323  104899 node_conditions.go:123] node cpu capacity is 8
	I0916 10:50:13.031328  104899 node_conditions.go:105] duration metric: took 235.121419ms to run NodePressure ...
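The NodePressure pass reads each node's capacity (here 8 CPUs and 304681132Ki of ephemeral storage on all three remaining nodes) straight from the Node objects returned by the single GET /api/v1/nodes above. A client-go sketch of the same read, with illustrative naming:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// capacity, as in the node_conditions.go:122/123 lines above
		fmt.Printf("%s cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			// the pressure check proper: these conditions must not be True
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status == corev1.ConditionTrue {
				fmt.Printf("  %s under %s\n", n.Name, c.Type)
			}
		}
	}
}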
	I0916 10:50:13.031343  104899 start.go:241] waiting for startup goroutines ...
	I0916 10:50:13.031386  104899 start.go:255] writing updated cluster config ...
	I0916 10:50:13.034095  104899 out.go:201] 
	I0916 10:50:13.036173  104899 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:50:13.036332  104899 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:50:13.038706  104899 out.go:177] * Starting "ha-770465-m04" worker node in "ha-770465" cluster
	I0916 10:50:13.040488  104899 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:50:13.041923  104899 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:50:13.043458  104899 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:50:13.043491  104899 cache.go:56] Caching tarball of preloaded images
	I0916 10:50:13.043614  104899 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:50:13.043638  104899 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:50:13.043808  104899 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:50:13.044049  104899 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	W0916 10:50:13.069290  104899 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:50:13.069315  104899 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:50:13.069405  104899 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:50:13.069430  104899 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:50:13.069438  104899 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:50:13.069452  104899 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:50:13.069460  104899 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:50:13.070582  104899 image.go:273] response: 
	I0916 10:50:13.126709  104899 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:50:13.126740  104899 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:50:13.126778  104899 start.go:360] acquireMachinesLock for ha-770465-m04: {Name:mkc3281f68e01da8fba52f5dc70804d02e52876e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:50:13.126840  104899 start.go:364] duration metric: took 38.822µs to acquireMachinesLock for "ha-770465-m04"
	I0916 10:50:13.126861  104899 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:50:13.126869  104899 fix.go:54] fixHost starting: m04
	I0916 10:50:13.127157  104899 cli_runner.go:164] Run: docker container inspect ha-770465-m04 --format={{.State.Status}}
	I0916 10:50:13.149767  104899 fix.go:112] recreateIfNeeded on ha-770465-m04: state=Stopped err=<nil>
	W0916 10:50:13.149800  104899 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:50:13.152211  104899 out.go:177] * Restarting existing docker container for "ha-770465-m04" ...
	I0916 10:50:13.153585  104899 cli_runner.go:164] Run: docker start ha-770465-m04
	I0916 10:50:13.473306  104899 cli_runner.go:164] Run: docker container inspect ha-770465-m04 --format={{.State.Status}}
	I0916 10:50:13.492669  104899 kic.go:430] container "ha-770465-m04" state is running.
	I0916 10:50:13.493039  104899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m04
	I0916 10:50:13.510274  104899 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/config.json ...
	I0916 10:50:13.510544  104899 machine.go:93] provisionDockerMachine start ...
	I0916 10:50:13.510612  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:50:13.530630  104899 main.go:141] libmachine: Using SSH client type: native
	I0916 10:50:13.530848  104899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0916 10:50:13.530867  104899 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:50:13.531596  104899 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47550->127.0.0.1:32843: read: connection reset by peer
	I0916 10:50:16.667336  104899 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m04
	
	I0916 10:50:16.667362  104899 ubuntu.go:169] provisioning hostname "ha-770465-m04"
	I0916 10:50:16.667413  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:50:16.685216  104899 main.go:141] libmachine: Using SSH client type: native
	I0916 10:50:16.685401  104899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0916 10:50:16.685413  104899 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-770465-m04 && echo "ha-770465-m04" | sudo tee /etc/hostname
	I0916 10:50:16.831496  104899 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-770465-m04
	
	I0916 10:50:16.831563  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:50:16.848669  104899 main.go:141] libmachine: Using SSH client type: native
	I0916 10:50:16.848864  104899 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0916 10:50:16.848881  104899 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-770465-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-770465-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-770465-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:50:16.983829  104899 main.go:141] libmachine: SSH cmd err, output: <nil>: 
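The shell run over SSH above is idempotent: it leaves /etc/hosts alone when the hostname is already mapped, rewrites an existing 127.0.1.1 line if one exists, and appends one otherwise. The same logic restated as a hedged Go sketch (illustrative only, not how minikube applies it):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostname ensures hostsPath maps 127.0.1.1 to name exactly once,
// mirroring the grep/sed/tee snippet above. Writing /etc/hosts needs root.
func pinHostname(hostsPath, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	replaced := false
	for i, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
			return nil // already mapped; nothing to do (the outer grep -xq)
		}
		if strings.HasPrefix(l, "127.0.1.1") && !replaced {
			lines[i] = "127.0.1.1 " + name // the sed branch
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name) // the tee -a branch
	}
	return os.WriteFile(hostsPath, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	fmt.Println(pinHostname("/etc/hosts", "ha-770465-m04"))
}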
	I0916 10:50:16.983857  104899 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:50:16.983873  104899 ubuntu.go:177] setting up certificates
	I0916 10:50:16.983883  104899 provision.go:84] configureAuth start
	I0916 10:50:16.983928  104899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m04
	I0916 10:50:17.000888  104899 provision.go:143] copyHostCerts
	I0916 10:50:17.000930  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:50:17.000972  104899 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:50:17.000982  104899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:50:17.001050  104899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:50:17.001130  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:50:17.001155  104899 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:50:17.001162  104899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:50:17.001189  104899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:50:17.001235  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:50:17.001251  104899 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:50:17.001256  104899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:50:17.001285  104899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:50:17.001337  104899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.ha-770465-m04 san=[127.0.0.1 192.168.49.5 ha-770465-m04 localhost minikube]
	I0916 10:50:17.105191  104899 provision.go:177] copyRemoteCerts
	I0916 10:50:17.105249  104899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:50:17.105290  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:50:17.124177  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m04/id_rsa Username:docker}
	I0916 10:50:17.226121  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:50:17.226187  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:50:17.252768  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:50:17.252841  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 10:50:17.280142  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:50:17.280225  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:50:17.306188  104899 provision.go:87] duration metric: took 322.291189ms to configureAuth
	I0916 10:50:17.306219  104899 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:50:17.306465  104899 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:50:17.306477  104899 machine.go:96] duration metric: took 3.79592007s to provisionDockerMachine
	I0916 10:50:17.306485  104899 start.go:293] postStartSetup for "ha-770465-m04" (driver="docker")
	I0916 10:50:17.306495  104899 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:50:17.306546  104899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:50:17.306586  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:50:17.323631  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m04/id_rsa Username:docker}
	I0916 10:50:17.420677  104899 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:50:17.423944  104899 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:50:17.423983  104899 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:50:17.423995  104899 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:50:17.424004  104899 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:50:17.424016  104899 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:50:17.424087  104899 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:50:17.424182  104899 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:50:17.424194  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:50:17.424302  104899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:50:17.432710  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:50:17.456881  104899 start.go:296] duration metric: took 150.380734ms for postStartSetup
	I0916 10:50:17.456978  104899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:50:17.457024  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:50:17.477354  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m04/id_rsa Username:docker}
	I0916 10:50:17.568470  104899 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:50:17.572948  104899 fix.go:56] duration metric: took 4.446073209s for fixHost
	I0916 10:50:17.572976  104899 start.go:83] releasing machines lock for "ha-770465-m04", held for 4.446124166s
	I0916 10:50:17.573044  104899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m04
	I0916 10:50:17.596980  104899 out.go:177] * Found network options:
	I0916 10:50:17.598873  104899 out.go:177]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0916 10:50:17.600156  104899 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:50:17.600177  104899 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:50:17.600197  104899 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:50:17.600205  104899 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:50:17.600269  104899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:50:17.600317  104899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:50:17.600367  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:50:17.600322  104899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:50:17.619617  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m04/id_rsa Username:docker}
	I0916 10:50:17.620288  104899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m04/id_rsa Username:docker}
	I0916 10:50:17.806471  104899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:50:17.823606  104899 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:50:17.823672  104899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:50:17.832283  104899 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
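The find/sed pair above patches any loopback CNI config in place: a "name" field is injected if missing (required by newer CNI spec revisions) and cniVersion is pinned to 1.0.0. A rough Go equivalent of the same patch, assuming a hypothetical file path (the real change is the shell one-liner in the log):

    package main

    import (
    	"encoding/json"
    	"os"
    )

    func main() {
    	// Hypothetical sketch of the loopback CNI patch logged above: add a
    	// "name" field only when absent, then pin cniVersion. Path assumed.
    	path := "/etc/cni/net.d/200-loopback.conf"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		panic(err)
    	}
    	var conf map[string]interface{}
    	if err := json.Unmarshal(data, &conf); err != nil {
    		panic(err)
    	}
    	if _, ok := conf["name"]; !ok {
    		conf["name"] = "loopback"
    	}
    	conf["cniVersion"] = "1.0.0"
    	out, err := json.MarshalIndent(conf, "", "  ")
    	if err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile(path, out, 0644); err != nil {
    		panic(err)
    	}
    }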
	I0916 10:50:17.832307  104899 start.go:495] detecting cgroup driver to use...
	I0916 10:50:17.832341  104899 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:50:17.832386  104899 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:50:17.843858  104899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:50:17.854869  104899 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:50:17.854924  104899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:50:17.867321  104899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:50:17.878029  104899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:50:17.956875  104899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:50:18.055202  104899 docker.go:233] disabling docker service ...
	I0916 10:50:18.055264  104899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:50:18.071182  104899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:50:18.086773  104899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:50:18.169473  104899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:50:18.250662  104899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
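The block above stops docker's socket and service, then disables and masks both units so socket activation cannot bring dockerd back while containerd owns the node. A hedged sketch of the same systemctl sequence driven from Go (the command list mirrors the logged calls; error handling is illustrative only):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Sketch of the shutdown sequence logged above: stop the socket and
    	// service, then disable and mask so nothing re-activates them.
    	for _, args := range [][]string{
    		{"systemctl", "stop", "-f", "docker.socket"},
    		{"systemctl", "stop", "-f", "docker.service"},
    		{"systemctl", "disable", "docker.socket"},
    		{"systemctl", "mask", "docker.service"},
    	} {
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			fmt.Printf("%v: %v (%s)\n", args, err, out)
    		}
    	}
    }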
	I0916 10:50:18.262925  104899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:50:18.281316  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:50:18.292258  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:50:18.302689  104899 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:50:18.302745  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:50:18.312037  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:50:18.323107  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:50:18.333297  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:50:18.343318  104899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:50:18.351852  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:50:18.361409  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:50:18.370704  104899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
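Each sed above flips one knob in /etc/containerd/config.toml: the pause sandbox image, restrict_oom_score_adj, SystemdCgroup=false to match the detected cgroupfs driver, the runc v2 runtime, the CNI conf_dir, and enable_unprivileged_ports. As an illustration of just the SystemdCgroup edit, a Go sketch whose regexp and path mirror the logged sed (a sketch, not minikube's implementation):

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	// Force the runc SystemdCgroup flag to false so containerd matches
    	// the host's cgroupfs driver, preserving the line's indentation.
    	data, err := os.ReadFile("/etc/containerd/config.toml")
    	if err != nil {
    		panic(err)
    	}
    	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
    	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile("/etc/containerd/config.toml", out, 0644); err != nil {
    		panic(err)
    	}
    }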
	I0916 10:50:18.380405  104899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:50:18.388338  104899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:50:18.396674  104899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:50:18.479245  104899 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:50:18.594932  104899 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:50:18.594998  104899 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:50:18.598500  104899 start.go:563] Will wait 60s for crictl version
	I0916 10:50:18.598552  104899 ssh_runner.go:195] Run: which crictl
	I0916 10:50:18.601657  104899 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:50:18.634154  104899 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:50:18.634210  104899 ssh_runner.go:195] Run: containerd --version
	I0916 10:50:18.655112  104899 ssh_runner.go:195] Run: containerd --version
	I0916 10:50:18.679023  104899 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:50:18.680620  104899 out.go:177]   - env NO_PROXY=192.168.49.2
	I0916 10:50:18.681917  104899 out.go:177]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0916 10:50:18.683089  104899 cli_runner.go:164] Run: docker network inspect ha-770465 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:50:18.699404  104899 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0916 10:50:18.702925  104899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
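The /etc/hosts rewrite above follows a filter-then-append pattern: strip any stale host.minikube.internal line, add the fresh mapping, and copy the temp file back over /etc/hosts. A minimal Go equivalent (path and entry taken from the logged command; rewriting in place rather than via a temp file is an assumption for brevity):

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	// Sketch of the logged /etc/hosts rewrite: drop any existing
    	// host.minikube.internal entry, then append the fresh one.
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, "192.168.49.1\thost.minikube.internal")
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		panic(err)
    	}
    }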
	I0916 10:50:18.713021  104899 mustload.go:65] Loading cluster: ha-770465
	I0916 10:50:18.713277  104899 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:50:18.713549  104899 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:50:18.730499  104899 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:50:18.730780  104899 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465 for IP: 192.168.49.5
	I0916 10:50:18.730792  104899 certs.go:194] generating shared ca certs ...
	I0916 10:50:18.730804  104899 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:50:18.730931  104899 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:50:18.730970  104899 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:50:18.730982  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:50:18.730996  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:50:18.731008  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:50:18.731020  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:50:18.731069  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:50:18.731096  104899 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:50:18.731108  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:50:18.731138  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:50:18.731161  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:50:18.731193  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:50:18.731242  104899 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:50:18.731268  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:50:18.731282  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:50:18.731294  104899 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:50:18.731311  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:50:18.753438  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:50:18.777419  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:50:18.805799  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:50:18.842005  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:50:18.866795  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:50:18.902175  104899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:50:18.925400  104899 ssh_runner.go:195] Run: openssl version
	I0916 10:50:18.930665  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:50:18.939527  104899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:50:18.942876  104899 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:50:18.942929  104899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:50:18.950035  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:50:18.958602  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:50:18.967817  104899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:50:18.971423  104899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:50:18.971491  104899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:50:18.978319  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:50:18.988175  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:50:18.999174  104899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:50:19.003581  104899 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:50:19.003663  104899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:50:19.010153  104899 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
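The openssl x509 -hash calls and the <hash>.0 symlinks above exist because OpenSSL looks CA certificates up by subject hash in /etc/ssl/certs. A sketch of that hash-and-symlink step (paths assumed for illustration; only the openssl flags shown in the log are used):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// OpenSSL trust stores resolve CAs by subject hash, so each cert file
    	// gets a <hash>.0 symlink in /etc/ssl/certs. Paths are illustrative.
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
    		panic(err)
    	}
    }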
	I0916 10:50:19.018791  104899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:50:19.022342  104899 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:50:19.022396  104899 kubeadm.go:934] updating node {m04 192.168.49.5 0 v1.31.1  false true} ...
	I0916 10:50:19.022490  104899 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-770465-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-770465 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
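The doubled ExecStart= in the unit dump above is the standard systemd drop-in idiom: an empty ExecStart= clears the base unit's command before the override supplies the real one. A sketch of writing such a drop-in from Go (flags trimmed relative to the logged command line; the path matches the scp destination in the log):

    package main

    import "os"

    func main() {
    	// systemd drop-ins override a unit by first clearing ExecStart= and
    	// then supplying the replacement, which is why the logged unit shows
    	// an empty ExecStart line before the real one. Flags abbreviated.
    	dropIn := `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --hostname-override=ha-770465-m04 --node-ip=192.168.49.5
    `
    	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
    		panic(err)
    	}
    }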
	I0916 10:50:19.022553  104899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:50:19.031489  104899 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:50:19.031548  104899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 10:50:19.040027  104899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0916 10:50:19.056612  104899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:50:19.073929  104899 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0916 10:50:19.077328  104899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:50:19.088092  104899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:50:19.164672  104899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:50:19.176022  104899 start.go:235] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0916 10:50:19.176202  104899 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:50:19.178483  104899 out.go:177] * Verifying Kubernetes components...
	I0916 10:50:19.180143  104899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:50:19.262710  104899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:50:19.274189  104899 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:50:19.274452  104899 kapi.go:59] client config for ha-770465: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/ha-770465/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0916 10:50:19.274520  104899 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0916 10:50:19.274732  104899 node_ready.go:35] waiting up to 6m0s for node "ha-770465-m04" to be "Ready" ...
	I0916 10:50:19.274813  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:50:19.274824  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:19.274835  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:19.274845  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:19.277934  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:19.278347  104899 node_ready.go:49] node "ha-770465-m04" has status "Ready":"True"
	I0916 10:50:19.278369  104899 node_ready.go:38] duration metric: took 3.618663ms for node "ha-770465-m04" to be "Ready" ...
	I0916 10:50:19.278380  104899 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:50:19.278446  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:50:19.278457  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:19.278467  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:19.278473  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:19.283150  104899 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:50:19.289876  104899 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:19.289984  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:19.289996  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:19.290006  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:19.290011  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:19.292886  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:19.293516  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:19.293532  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:19.293539  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:19.293543  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:19.296103  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
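The poll rounds that follow repeat every ~500ms: GET the coredns pod, check its Ready condition, then GET the node it runs on. A hedged client-go sketch of the same wait loop (kubeconfig path and pod name taken from the log; minikube itself goes through its round_trippers wrapper rather than this code):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Fetch the pod every 500ms and stop once its Ready condition is
    	// True, giving up after the same 6m budget the log announces.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3687/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-9lw9q", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("pod is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for pod")
    }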
	I0916 10:50:19.790957  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:19.790977  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:19.790985  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:19.790988  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:19.794011  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:19.794741  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:19.794755  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:19.794762  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:19.794766  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:19.797150  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:20.291026  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:20.291045  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:20.291054  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:20.291058  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:20.294002  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:20.294687  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:20.294703  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:20.294712  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:20.294716  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:20.296928  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:20.790798  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:20.790820  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:20.790827  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:20.790832  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:20.794198  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:20.794948  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:20.794969  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:20.794980  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:20.794985  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:20.797539  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:21.290163  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:21.290184  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:21.290192  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:21.290196  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:21.292995  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:21.293689  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:21.293703  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:21.293710  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:21.293713  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:21.296301  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:21.296749  104899 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:21.790321  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:21.790345  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:21.790358  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:21.790366  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:21.793311  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:21.794029  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:21.794045  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:21.794055  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:21.794062  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:21.796569  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:22.290448  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:22.290473  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:22.290484  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:22.290489  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:22.293379  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:22.294005  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:22.294021  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:22.294030  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:22.294036  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:22.296454  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:22.790169  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:22.790190  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:22.790198  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:22.790202  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:22.793320  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:22.793974  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:22.793992  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:22.794000  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:22.794004  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:22.796489  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:23.290183  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:23.290205  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:23.290212  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:23.290216  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:23.293323  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:23.294007  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:23.294024  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:23.294032  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:23.294036  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:23.296539  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:23.296959  104899 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:23.790158  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:23.790177  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:23.790184  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:23.790189  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:23.793407  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:23.793976  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:23.793991  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:23.793998  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:23.794003  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:23.796339  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:24.290139  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:24.290171  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:24.290182  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:24.290189  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:24.294519  104899 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:50:24.295221  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:24.295241  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:24.295251  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:24.295256  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:24.297922  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:24.790731  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:24.790749  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:24.790758  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:24.790761  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:24.793731  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:24.794339  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:24.794362  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:24.794369  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:24.794374  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:24.796775  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:25.290671  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:25.290699  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:25.290710  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:25.290717  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:25.294117  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:25.294802  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:25.294818  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:25.294828  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:25.294833  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:25.297729  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:25.298165  104899 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:25.790996  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:25.791016  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:25.791026  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:25.791033  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:25.794102  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:25.794754  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:25.794769  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:25.794777  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:25.794781  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:25.797372  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:26.290084  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:26.290106  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:26.290116  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:26.290123  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:26.293139  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:26.293780  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:26.293797  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:26.293806  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:26.293813  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:26.296303  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:26.790281  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:26.790325  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:26.790334  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:26.790337  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:26.793649  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:26.794337  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:26.794351  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:26.794357  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:26.794361  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:26.796825  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:27.290553  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:27.290574  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:27.290583  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:27.290587  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:27.293521  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:27.294228  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:27.294249  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:27.294258  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:27.294262  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:27.296562  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:27.790305  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:27.790325  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:27.790334  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:27.790337  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:27.793261  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:27.793965  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:27.793982  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:27.793989  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:27.793992  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:27.796549  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:27.796956  104899 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:28.290571  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:28.290599  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:28.290607  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:28.290611  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:28.293823  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:28.294635  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:28.294653  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:28.294663  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:28.294670  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:28.297100  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:28.790926  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:28.790945  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:28.790953  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:28.790956  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:28.794036  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:28.794770  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:28.794802  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:28.794813  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:28.794818  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:28.797464  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:29.290275  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:29.290310  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:29.290319  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:29.290324  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:29.293436  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:29.294044  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:29.294059  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:29.294068  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:29.294073  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:29.296595  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:29.790420  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:29.790443  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:29.790451  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:29.790454  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:29.793939  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:29.794582  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:29.794599  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:29.794606  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:29.794610  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:29.797303  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:29.797732  104899 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:30.290165  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:30.290190  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:30.290200  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:30.290205  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:30.293467  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:30.294157  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:30.294174  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:30.294182  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:30.294185  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:30.297017  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:30.790868  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:30.790889  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:30.790896  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:30.790900  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:30.794084  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:30.794782  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:30.794799  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:30.794808  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:30.794812  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:30.797612  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:31.290256  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:31.290280  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:31.290290  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:31.290297  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:31.293492  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:31.294161  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:31.294178  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:31.294189  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:31.294194  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:31.296812  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:31.790995  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:31.791016  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:31.791024  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:31.791028  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:31.794096  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:31.794948  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:31.794969  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:31.794982  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:31.794987  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:31.797514  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:31.797949  104899 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:32.290309  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:32.290334  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:32.290342  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:32.290346  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:32.293480  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:32.294110  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:32.294127  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:32.294135  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:32.294138  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:32.296670  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:32.790423  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:32.790441  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:32.790449  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:32.790452  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:32.793385  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:32.794089  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:32.794109  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:32.794120  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:32.794126  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:32.796409  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:33.290184  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:33.290204  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:33.290214  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:33.290219  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:33.293284  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:33.293958  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:33.293972  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:33.293979  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:33.293985  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:33.296419  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:33.790223  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:33.790246  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:33.790254  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:33.790257  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:33.793585  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:33.794233  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:33.794252  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:33.794259  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:33.794263  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:33.796742  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:34.290489  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:34.290510  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:34.290519  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:34.290524  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:34.294351  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:34.295115  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:34.295135  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:34.295146  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:34.295151  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:34.297699  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:34.298216  104899 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:34.790495  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:34.790519  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:34.790529  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:34.790534  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:34.794073  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:34.794835  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:34.794854  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:34.794866  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:34.794871  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:34.797359  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:35.290188  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:35.290209  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:35.290218  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:35.290224  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:35.293361  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:35.294030  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:35.294047  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:35.294054  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:35.294058  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:35.298286  104899 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 10:50:35.791108  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:35.791132  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:35.791140  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:35.791145  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:35.794261  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:35.794934  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:35.794951  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:35.794958  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:35.794964  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:35.797525  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:36.290325  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:36.290352  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:36.290359  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:36.290364  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:36.293613  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:36.294219  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:36.294233  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:36.294241  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:36.294245  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:36.296534  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:36.790608  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:36.790634  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:36.790641  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:36.790645  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:36.793627  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:36.794358  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:36.794373  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:36.794380  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:36.794383  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:36.796747  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:36.797307  104899 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:37.290572  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:37.290594  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:37.290604  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:37.290610  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:37.293751  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:37.294609  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:37.294630  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:37.294640  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:37.294647  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:37.296986  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:37.790831  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:37.790850  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:37.790858  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:37.790862  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:37.793949  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:37.795271  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:37.795331  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:37.795351  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:37.795366  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:37.798063  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:38.291042  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:38.291063  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:38.291070  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:38.291076  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:38.294090  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:38.294668  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:38.294683  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:38.294691  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:38.294695  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:38.297180  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:38.791036  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:38.791055  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:38.791063  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:38.791068  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:38.794227  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:38.794860  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:38.794876  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:38.794883  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:38.794887  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:38.797130  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:38.797549  104899 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:39.290982  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:39.291005  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:39.291013  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:39.291020  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:39.294231  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:39.294787  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:39.294802  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:39.294812  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:39.294817  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:39.297120  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:39.790973  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:39.790996  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:39.791006  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:39.791011  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:39.794096  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:39.794759  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:39.794776  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:39.794783  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:39.794788  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:39.797160  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:40.291005  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:40.291025  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:40.291033  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:40.291037  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:40.293995  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:40.294613  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:40.294628  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:40.294636  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:40.294639  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:40.297143  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:40.790982  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:40.791002  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:40.791014  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:40.791018  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:40.794065  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:40.794722  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:40.794737  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:40.794745  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:40.794750  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:40.797117  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:41.290996  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:41.291017  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:41.291025  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:41.291029  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:41.293815  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:41.294456  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:41.294471  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:41.294481  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:41.294487  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:41.296853  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:41.297270  104899 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:41.790975  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:41.790999  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:41.791008  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:41.791013  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:41.794127  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:41.794798  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:41.794819  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:41.794830  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:41.794837  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:41.797327  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:42.290142  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:42.290172  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:42.290182  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:42.290188  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:42.293178  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:42.293837  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:42.293852  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:42.293861  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:42.293868  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:42.296365  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:42.790055  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:42.790073  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:42.790081  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:42.790085  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:42.793021  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:42.793646  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:42.793663  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:42.793670  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:42.793673  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:42.796082  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:43.290885  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:43.290905  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:43.290912  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:43.290921  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:43.293595  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:43.294320  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:43.294350  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:43.294359  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:43.294364  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:43.296630  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:43.790419  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:43.790443  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:43.790451  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:43.790455  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:43.793452  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:43.794181  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:43.794197  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:43.794209  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:43.794218  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:43.796563  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:43.797049  104899 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:44.290408  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:44.290433  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:44.290443  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:44.290449  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:44.293536  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:44.294302  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:44.294319  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:44.294325  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:44.294335  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:44.296516  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:44.790161  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:44.790183  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:44.790193  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:44.790199  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:44.793204  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:44.793921  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:44.793935  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:44.793942  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:44.793946  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:44.796393  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:45.290171  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:45.290194  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:45.290202  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:45.290208  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:45.293420  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:45.294118  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:45.294137  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:45.294147  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:45.294156  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:45.296823  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:45.790698  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:45.790719  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:45.790726  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:45.790730  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:45.793873  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:45.794747  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:45.794768  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:45.794779  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:45.794787  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:45.797436  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:45.797839  104899 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:46.290217  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:46.290236  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:46.290244  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:46.290247  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:46.293235  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:46.293870  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:46.293883  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:46.293891  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:46.293897  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:46.296279  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:46.790195  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:46.790217  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:46.790225  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:46.790229  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:46.793395  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:46.794058  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:46.794074  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:46.794081  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:46.794084  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:46.796827  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:47.290790  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:47.290811  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:47.290819  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:47.290824  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:47.293947  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:47.294599  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:47.294617  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:47.294625  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:47.294629  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:47.297093  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:47.790434  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:47.790458  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:47.790468  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:47.790473  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:47.793651  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:47.794316  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:47.794335  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:47.794344  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:47.794350  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:47.796981  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:48.291049  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:48.291075  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:48.291086  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:48.291095  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:48.294374  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:48.294951  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:48.294967  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:48.294974  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:48.294979  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:48.297615  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:48.298054  104899 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:48.790479  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:48.790505  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:48.790515  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:48.790520  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:48.793805  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:48.794435  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:48.794449  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:48.794457  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:48.794461  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:48.797072  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:49.290911  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:49.290932  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:49.290946  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:49.290949  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:49.294133  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:49.294811  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:49.294828  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:49.294836  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:49.294839  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:49.297377  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:49.790210  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:49.790243  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:49.790251  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:49.790255  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:49.793746  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:49.794354  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:49.794371  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:49.794378  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:49.794383  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:49.797317  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:50.290131  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:50.290152  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:50.290160  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:50.290164  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:50.293200  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:50.293788  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:50.293801  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:50.293809  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:50.293814  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:50.296252  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:50.790114  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:50.790134  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:50.790141  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:50.790145  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:50.793149  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:50.793781  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:50.793795  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:50.793801  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:50.793805  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:50.796355  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:50.796900  104899 pod_ready.go:103] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"False"
	I0916 10:50:51.290147  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:51.290169  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:51.290177  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:51.290181  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:51.293289  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:51.293943  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:51.293961  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:51.293968  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:51.293971  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:51.296253  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:51.791059  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:51.791079  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:51.791088  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:51.791094  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:51.794071  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:51.794729  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:51.794756  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:51.794767  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:51.794771  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:51.797315  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:52.290177  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:52.290203  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:52.290214  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:52.290219  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:52.293475  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:52.294118  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:52.294134  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:52.294141  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:52.294145  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:52.296537  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:52.790211  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-9lw9q
	I0916 10:50:52.790245  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:52.790257  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:52.790263  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:52.795987  104899 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:50:52.796817  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:52.796887  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:52.796910  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:52.796924  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:52.807708  104899 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0916 10:50:52.808176  104899 pod_ready.go:93] pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:52.808195  104899 pod_ready.go:82] duration metric: took 33.518292569s for pod "coredns-7c65d6cfc9-9lw9q" in "kube-system" namespace to be "Ready" ...
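
The roughly 500ms cadence of the polls above is minikube's pod_ready wait loop re-checking the pod until its Ready condition turns True or the 6m0s budget runs out, then recording the duration metric. A minimal sketch of that pattern with client-go, assuming an already-constructed *kubernetes.Clientset; minikube's real helper also re-fetches the owning node on every iteration, which is omitted here:

package podready

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady re-fetches the pod every 500ms until its Ready condition
// is True, mirroring the GET loop in the log above. Production code may
// also want to tolerate transient Get errors instead of aborting.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
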
	I0916 10:50:52.808204  104899 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:52.808262  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-sbs22
	I0916 10:50:52.808270  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:52.808276  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:52.808281  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:52.810781  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:52.811394  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:52.811409  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:52.811419  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:52.811424  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:52.813562  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:52.813957  104899 pod_ready.go:93] pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:52.813974  104899 pod_ready.go:82] duration metric: took 5.763274ms for pod "coredns-7c65d6cfc9-sbs22" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:52.813986  104899 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:52.814056  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465
	I0916 10:50:52.814065  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:52.814075  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:52.814083  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:52.816352  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:52.816862  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:52.816874  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:52.816882  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:52.816886  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:52.818755  104899 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:50:52.819097  104899 pod_ready.go:93] pod "etcd-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:52.819113  104899 pod_ready.go:82] duration metric: took 5.119925ms for pod "etcd-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:52.819126  104899 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:52.819183  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m02
	I0916 10:50:52.819192  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:52.819201  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:52.819208  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:52.821243  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:52.821763  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:52.821775  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:52.821783  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:52.821785  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:52.823901  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:52.824328  104899 pod_ready.go:93] pod "etcd-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:52.824344  104899 pod_ready.go:82] duration metric: took 5.21153ms for pod "etcd-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:52.824353  104899 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:52.824403  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/etcd-ha-770465-m03
	I0916 10:50:52.824410  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:52.824416  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:52.824421  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:52.826275  104899 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0916 10:50:52.826374  104899 pod_ready.go:98] error getting pod "etcd-ha-770465-m03" in "kube-system" namespace (skipping!): pods "etcd-ha-770465-m03" not found
	I0916 10:50:52.826388  104899 pod_ready.go:82] duration metric: took 2.029299ms for pod "etcd-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:50:52.826399  104899 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "etcd-ha-770465-m03" in "kube-system" namespace (skipping!): pods "etcd-ha-770465-m03" not found
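
The 404 above is expected rather than a failure: node ha-770465-m03 had been removed earlier in this test, so its static pods no longer exist and the wait helper logs them as skipped and moves on. A sketch of that branch, assuming client-go's apierrors helpers:

package podready

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// getPodOrSkip distinguishes a pod that genuinely no longer exists
// (a 404, which the log marks with "skipping!") from a real API error
// that should fail the overall wait.
func getPodOrSkip(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (found bool, err error) {
	_, err = cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		fmt.Printf("pod %q in %q namespace not found, skipping\n", name, ns)
		return false, nil // skip this pod, keep waiting on the others
	}
	if err != nil {
		return false, err
	}
	return true, nil
}
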
	I0916 10:50:52.826421  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:52.826487  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465
	I0916 10:50:52.826496  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:52.826509  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:52.826517  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:52.828484  104899 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:50:52.990373  104899 request.go:632] Waited for 161.235844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:52.990517  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:52.990533  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:52.990541  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:52.990546  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:52.993496  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:52.993957  104899 pod_ready.go:93] pod "kube-apiserver-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:52.993975  104899 pod_ready.go:82] duration metric: took 167.544718ms for pod "kube-apiserver-ha-770465" in "kube-system" namespace to be "Ready" ...
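
The "Waited for ... due to client-side throttling, not priority and fairness" lines above are client-go's client-side rate limiter (5 QPS with a burst of 10 by default) pacing the back-to-back requests once the wait loop speeds up; they are informational, not errors. Callers that need tighter polling raise the limits on the rest.Config before building the clientset. A sketch with illustrative values, not necessarily what minikube itself configures:

package podready

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient loads a kubeconfig and raises the client-side rate
// limits so tight polling loops are not throttled. 50/100 are
// illustrative; the client-go defaults are QPS 5 and Burst 10.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}
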
	I0916 10:50:52.993985  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:53.190325  104899 request.go:632] Waited for 196.259455ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m02
	I0916 10:50:53.190379  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m02
	I0916 10:50:53.190384  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:53.190391  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:53.190395  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:53.193490  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:53.390368  104899 request.go:632] Waited for 196.280991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:53.390424  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:53.390428  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:53.390435  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:53.390439  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:53.393170  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:53.393644  104899 pod_ready.go:93] pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:53.393673  104899 pod_ready.go:82] duration metric: took 399.680351ms for pod "kube-apiserver-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:53.393685  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:53.590718  104899 request.go:632] Waited for 196.960677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:50:53.590779  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-770465-m03
	I0916 10:50:53.590785  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:53.590811  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:53.590818  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:53.593855  104899 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0916 10:50:53.593974  104899 pod_ready.go:98] error getting pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace (skipping!): pods "kube-apiserver-ha-770465-m03" not found
	I0916 10:50:53.593990  104899 pod_ready.go:82] duration metric: took 200.298279ms for pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:50:53.594000  104899 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-apiserver-ha-770465-m03" in "kube-system" namespace (skipping!): pods "kube-apiserver-ha-770465-m03" not found
	I0916 10:50:53.594008  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:53.790356  104899 request.go:632] Waited for 196.27333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465
	I0916 10:50:53.790452  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465
	I0916 10:50:53.790463  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:53.790475  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:53.790481  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:53.793668  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:53.990664  104899 request.go:632] Waited for 196.358715ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:53.990749  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:53.990760  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:53.990771  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:53.990779  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:53.994063  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:53.994599  104899 pod_ready.go:93] pod "kube-controller-manager-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:53.994620  104899 pod_ready.go:82] duration metric: took 400.602491ms for pod "kube-controller-manager-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:53.994634  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:54.190546  104899 request.go:632] Waited for 195.823774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m02
	I0916 10:50:54.190613  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m02
	I0916 10:50:54.190619  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:54.190626  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:54.190634  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:54.193685  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:54.390590  104899 request.go:632] Waited for 196.360655ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:54.390679  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:54.390691  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:54.390700  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:54.390707  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:54.393367  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:54.393784  104899 pod_ready.go:93] pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:54.393804  104899 pod_ready.go:82] duration metric: took 399.158055ms for pod "kube-controller-manager-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:54.393818  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:54.590884  104899 request.go:632] Waited for 196.989204ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m03
	I0916 10:50:54.590955  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-770465-m03
	I0916 10:50:54.590965  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:54.590977  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:54.591001  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:54.594049  104899 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0916 10:50:54.594210  104899 pod_ready.go:98] error getting pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-770465-m03" not found
	I0916 10:50:54.594232  104899 pod_ready.go:82] duration metric: took 200.403067ms for pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:50:54.594245  104899 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-controller-manager-ha-770465-m03" in "kube-system" namespace (skipping!): pods "kube-controller-manager-ha-770465-m03" not found
	I0916 10:50:54.594253  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:54.790691  104899 request.go:632] Waited for 196.343465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:50:54.790757  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-4qgcs
	I0916 10:50:54.790765  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:54.790775  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:54.790784  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:54.793773  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:54.990273  104899 request.go:632] Waited for 195.920169ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:54.990330  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:54.990337  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:54.990346  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:54.990357  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:54.993452  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:54.993857  104899 pod_ready.go:93] pod "kube-proxy-4qgcs" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:54.993873  104899 pod_ready.go:82] duration metric: took 399.592382ms for pod "kube-proxy-4qgcs" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:54.993883  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-78l2l" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:55.190872  104899 request.go:632] Waited for 196.927735ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-78l2l
	I0916 10:50:55.190956  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-78l2l
	I0916 10:50:55.190968  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:55.190976  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:55.190981  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:55.194174  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:55.391195  104899 request.go:632] Waited for 196.361616ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:50:55.391269  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m04
	I0916 10:50:55.391275  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:55.391292  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:55.391301  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:55.394385  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:55.394840  104899 pod_ready.go:93] pod "kube-proxy-78l2l" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:55.394860  104899 pod_ready.go:82] duration metric: took 400.970384ms for pod "kube-proxy-78l2l" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:55.394869  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:55.591031  104899 request.go:632] Waited for 196.068868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:50:55.591106  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-gd2mt
	I0916 10:50:55.591113  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:55.591121  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:55.591128  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:55.594261  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:55.791206  104899 request.go:632] Waited for 196.373402ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:55.791263  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:55.791269  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:55.791278  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:55.791285  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:55.794391  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:55.794896  104899 pod_ready.go:93] pod "kube-proxy-gd2mt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:55.794914  104899 pod_ready.go:82] duration metric: took 400.037683ms for pod "kube-proxy-gd2mt" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:55.794924  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qlspc" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:55.991023  104899 request.go:632] Waited for 196.018864ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlspc
	I0916 10:50:55.991093  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qlspc
	I0916 10:50:55.991101  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:55.991111  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:55.991119  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:55.994107  104899 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 10:50:55.994259  104899 pod_ready.go:98] error getting pod "kube-proxy-qlspc" in "kube-system" namespace (skipping!): pods "kube-proxy-qlspc" not found
	I0916 10:50:55.994286  104899 pod_ready.go:82] duration metric: took 199.355169ms for pod "kube-proxy-qlspc" in "kube-system" namespace to be "Ready" ...
	E0916 10:50:55.994299  104899 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-proxy-qlspc" in "kube-system" namespace (skipping!): pods "kube-proxy-qlspc" not found
	I0916 10:50:55.994309  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:56.190738  104899 request.go:632] Waited for 196.341457ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:50:56.190800  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465
	I0916 10:50:56.190807  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:56.190817  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:56.190827  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:56.193751  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:56.390463  104899 request.go:632] Waited for 196.12714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:56.390530  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465
	I0916 10:50:56.390540  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:56.390550  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:56.390555  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:56.393332  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:56.393750  104899 pod_ready.go:93] pod "kube-scheduler-ha-770465" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:56.393768  104899 pod_ready.go:82] duration metric: took 399.450971ms for pod "kube-scheduler-ha-770465" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:56.393781  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:56.590908  104899 request.go:632] Waited for 197.036205ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m02
	I0916 10:50:56.590966  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m02
	I0916 10:50:56.590973  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:56.590984  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:56.590994  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:56.594194  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:56.791154  104899 request.go:632] Waited for 196.34032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:56.791236  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes/ha-770465-m02
	I0916 10:50:56.791251  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:56.791262  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:56.791270  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:56.794217  104899 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:50:56.794635  104899 pod_ready.go:93] pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace has status "Ready":"True"
	I0916 10:50:56.794659  104899 pod_ready.go:82] duration metric: took 400.869415ms for pod "kube-scheduler-ha-770465-m02" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:56.794671  104899 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	I0916 10:50:56.990792  104899 request.go:632] Waited for 196.02223ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m03
	I0916 10:50:56.990849  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-770465-m03
	I0916 10:50:56.990854  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:56.990862  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:56.990867  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:56.993936  104899 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I0916 10:50:56.994055  104899 pod_ready.go:98] error getting pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-770465-m03" not found
	I0916 10:50:56.994071  104899 pod_ready.go:82] duration metric: took 199.393034ms for pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace to be "Ready" ...
	E0916 10:50:56.994082  104899 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-ha-770465-m03" in "kube-system" namespace (skipping!): pods "kube-scheduler-ha-770465-m03" not found
	I0916 10:50:56.994091  104899 pod_ready.go:39] duration metric: took 37.715699475s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:50:56.994107  104899 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:50:56.994152  104899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:50:57.005409  104899 system_svc.go:56] duration metric: took 11.291327ms WaitForService to wait for kubelet
	I0916 10:50:57.005436  104899 kubeadm.go:582] duration metric: took 37.829357485s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:50:57.005452  104899 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:50:57.190773  104899 request.go:632] Waited for 185.238228ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0916 10:50:57.190851  104899 round_trippers.go:463] GET https://192.168.49.2:8443/api/v1/nodes
	I0916 10:50:57.190862  104899 round_trippers.go:469] Request Headers:
	I0916 10:50:57.190873  104899 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:50:57.190881  104899 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:50:57.194339  104899 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:50:57.195351  104899 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:50:57.195374  104899 node_conditions.go:123] node cpu capacity is 8
	I0916 10:50:57.195386  104899 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:50:57.195391  104899 node_conditions.go:123] node cpu capacity is 8
	I0916 10:50:57.195395  104899 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:50:57.195399  104899 node_conditions.go:123] node cpu capacity is 8
	I0916 10:50:57.195405  104899 node_conditions.go:105] duration metric: took 189.948005ms to run NodePressure ...
	I0916 10:50:57.195419  104899 start.go:241] waiting for startup goroutines ...
	I0916 10:50:57.195440  104899 start.go:255] writing updated cluster config ...
	I0916 10:50:57.195801  104899 ssh_runner.go:195] Run: rm -f paused
	I0916 10:50:57.202088  104899 out.go:177] * Done! kubectl is now configured to use "ha-770465" cluster and "default" namespace by default
	E0916 10:50:57.203425  104899 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
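The run log above is dominated by minikube's pod_ready loop: it polls each system pod (and the node it is scheduled on), treats a 404 as "pod gone, skip" for the deleted ha-770465-m03 pods, and only then declares the cluster ready. As a minimal illustrative sketch of that pattern, not minikube's actual pod_ready code, the snippet below waits for a named pod's Ready condition with client-go; the package name, poll interval, and skip-on-404 printout are assumptions for illustration.

```go
// Illustrative sketch only -- not minikube's pod_ready implementation.
// Polls a pod until its Ready condition is True, treating a 404 the way
// the log above treats the deleted ha-770465-m03 pods (skip and move on).
package readywait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitPodReady blocks until the pod is Ready, is not found, or the timeout expires.
func WaitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 400*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				// Pod no longer exists (e.g. its node was removed): skip it.
				fmt.Printf("pod %q not found, skipping\n", name)
				return true, nil
			}
			if err != nil {
				return false, err
			}
			// A pod is "Ready" when its PodReady condition is True.
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
```

The repeated "Waited for ~196ms due to client-side throttling" lines come from client-go's default client-side rate limiter pacing these GETs, which is why each pod check in the log takes roughly 400ms end to end.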
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	8bd373c635b74       6e38f40d628db       Less than a second ago   Running             storage-provisioner       4                   a349c5e481295       storage-provisioner
	c41bfc02c715f       c69fa2e9cbf5f       45 seconds ago           Running             coredns                   2                   845737a6198f0       coredns-7c65d6cfc9-9lw9q
	5617f96404065       12968670680f4       45 seconds ago           Running             kindnet-cni               2                   c427a606eeb7f       kindnet-grjh8
	18d91709a26f9       c69fa2e9cbf5f       45 seconds ago           Running             coredns                   2                   bb80897d11058       coredns-7c65d6cfc9-sbs22
	bad099ddbe515       8c811b4aec35f       45 seconds ago           Running             busybox                   2                   1b7d10a77659a       busybox-7dff88458-845rc
	a913cc50ff03e       6e38f40d628db       45 seconds ago           Exited              storage-provisioner       3                   a349c5e481295       storage-provisioner
	4a76c9091f7d7       60c005f310ff3       45 seconds ago           Running             kube-proxy                2                   c708244615003       kube-proxy-gd2mt
	bc4aeaee8ca7f       175ffd71cce3d       54 seconds ago           Running             kube-controller-manager   2                   5101519158b36       kube-controller-manager-ha-770465
	07c7ffcfc3c4d       2e96e5913fc06       54 seconds ago           Running             etcd                      2                   cb8d48e5b9b7c       etcd-ha-770465
	7352793eaf7e3       6bab7719df100       54 seconds ago           Running             kube-apiserver            2                   3ce4136fcd2c4       kube-apiserver-ha-770465
	5b1ed4abbc0c2       9aa1fad941575       54 seconds ago           Running             kube-scheduler            2                   30719d4bba2b2       kube-scheduler-ha-770465
	911b97f172e25       38af8ddebf499       54 seconds ago           Running             kube-vip                  2                   f359217e09219       kube-vip-ha-770465
	e8544648700d8       38af8ddebf499       About a minute ago       Created             kube-vip                  1                   3b11ee3ddac2f       kube-vip-ha-770465
	0ee20b8c8789a       12968670680f4       2 minutes ago            Exited              kindnet-cni               1                   29b4d23a9d620       kindnet-grjh8
	81f453ca3f8d1       c69fa2e9cbf5f       2 minutes ago            Exited              coredns                   1                   0ea2d513d8370       coredns-7c65d6cfc9-9lw9q
	917ef16037c50       c69fa2e9cbf5f       2 minutes ago            Exited              coredns                   1                   d9ffbdccdd56c       coredns-7c65d6cfc9-sbs22
	8227f9c32d21c       8c811b4aec35f       2 minutes ago            Exited              busybox                   1                   4fee99e37559b       busybox-7dff88458-845rc
	de5c6dcf960e9       60c005f310ff3       2 minutes ago            Exited              kube-proxy                1                   2167f95b9241b       kube-proxy-gd2mt
	bcd02f03466d8       9aa1fad941575       3 minutes ago            Exited              kube-scheduler            1                   528f83f1c8d77       kube-scheduler-ha-770465
	4a562d336c170       175ffd71cce3d       3 minutes ago            Exited              kube-controller-manager   1                   ec32d7f38f4f8       kube-controller-manager-ha-770465
	e87832bf428c0       2e96e5913fc06       3 minutes ago            Exited              etcd                      1                   4dc0fb6f28527       etcd-ha-770465
	b715d9632d76b       6bab7719df100       3 minutes ago            Exited              kube-apiserver            1                   ba20d64a5ab26       kube-apiserver-ha-770465
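The container-status table above is the node's CRI view of all containers, past and present (`crictl ps -a` renders the same data). As a hedged sketch of where those columns come from, the snippet below lists containers over containerd's CRI socket with the standard gRPC client; the socket path, timeout, and package name are assumptions.

```go
// Illustrative sketch: listing containers via containerd's CRI endpoint,
// roughly the data behind the "container status" table above.
package crilist

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func ListContainers(ctx context.Context) error {
	// Default containerd CRI socket on this node (see the cri-socket
	// annotation in the node descriptions below).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		return err
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		return err
	}
	for _, c := range resp.Containers {
		// Id, image, state, and attempt map onto the CONTAINER / IMAGE /
		// STATE / ATTEMPT columns of the table.
		fmt.Printf("%.13s  %s  %v  attempt=%d\n",
			c.Id, c.Image.GetImage(), c.State, c.Metadata.GetAttempt())
	}
	return nil
}
```

Note the ATTEMPT column: the Running entries at attempt 2 are the same workloads as the Exited attempt-1 entries further down, restarted across the node's second reboot.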
	
	
	==> containerd <==
	Sep 16 10:50:13 ha-770465 containerd[596]: time="2024-09-16T10:50:13.256872024Z" level=info msg="StartContainer for \"5617f96404065c173db93507b510df9d42e9ae9957bdd5f2debfb56993896b56\""
	Sep 16 10:50:13 ha-770465 containerd[596]: time="2024-09-16T10:50:13.328016381Z" level=info msg="StartContainer for \"a913cc50ff03e6b56bb8bf1b16c2af687a48e35b7d5736f3833ed2965d68e521\" returns successfully"
	Sep 16 10:50:13 ha-770465 containerd[596]: time="2024-09-16T10:50:13.328029074Z" level=info msg="CreateContainer within sandbox \"845737a6198f040e311bed2ae925607c85d529638a12f4c7da6887f10e77ce7b\" for &ContainerMetadata{Name:coredns,Attempt:2,} returns container id \"c41bfc02c715fb0b9e1bfae72b90e6e638b5100a2396728fc79097168b7fe022\""
	Sep 16 10:50:13 ha-770465 containerd[596]: time="2024-09-16T10:50:13.329522981Z" level=info msg="StartContainer for \"c41bfc02c715fb0b9e1bfae72b90e6e638b5100a2396728fc79097168b7fe022\""
	Sep 16 10:50:13 ha-770465 containerd[596]: time="2024-09-16T10:50:13.459395420Z" level=info msg="StartContainer for \"bad099ddbe51585e5960b5e793b33dba7fd78bedd6474ce227253a5eeeb666bf\" returns successfully"
	Sep 16 10:50:13 ha-770465 containerd[596]: time="2024-09-16T10:50:13.459562439Z" level=info msg="StartContainer for \"18d91709a26f9e13dcb53b1197ae2d2329d62e3389888ab804db2f314c6abb36\" returns successfully"
	Sep 16 10:50:13 ha-770465 containerd[596]: time="2024-09-16T10:50:13.542866733Z" level=info msg="StartContainer for \"5617f96404065c173db93507b510df9d42e9ae9957bdd5f2debfb56993896b56\" returns successfully"
	Sep 16 10:50:13 ha-770465 containerd[596]: time="2024-09-16T10:50:13.550737490Z" level=info msg="StartContainer for \"c41bfc02c715fb0b9e1bfae72b90e6e638b5100a2396728fc79097168b7fe022\" returns successfully"
	Sep 16 10:50:43 ha-770465 containerd[596]: time="2024-09-16T10:50:43.368467847Z" level=info msg="shim disconnected" id=a913cc50ff03e6b56bb8bf1b16c2af687a48e35b7d5736f3833ed2965d68e521 namespace=k8s.io
	Sep 16 10:50:43 ha-770465 containerd[596]: time="2024-09-16T10:50:43.368531404Z" level=warning msg="cleaning up after shim disconnected" id=a913cc50ff03e6b56bb8bf1b16c2af687a48e35b7d5736f3833ed2965d68e521 namespace=k8s.io
	Sep 16 10:50:43 ha-770465 containerd[596]: time="2024-09-16T10:50:43.368542907Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:50:43 ha-770465 containerd[596]: time="2024-09-16T10:50:43.699934099Z" level=info msg="RemoveContainer for \"946241353e03d16a05ed42c23006cfa465d022f50c7580d1bec22425ee59a4ac\""
	Sep 16 10:50:43 ha-770465 containerd[596]: time="2024-09-16T10:50:43.705086178Z" level=info msg="RemoveContainer for \"946241353e03d16a05ed42c23006cfa465d022f50c7580d1bec22425ee59a4ac\" returns successfully"
	Sep 16 10:50:57 ha-770465 containerd[596]: time="2024-09-16T10:50:57.630676557Z" level=info msg="StopPodSandbox for \"c3c1ad84b80d11292f4cbee11f2b4cdf55236813b5755847e4a0852afe5f3456\""
	Sep 16 10:50:57 ha-770465 containerd[596]: time="2024-09-16T10:50:57.630804733Z" level=info msg="TearDown network for sandbox \"c3c1ad84b80d11292f4cbee11f2b4cdf55236813b5755847e4a0852afe5f3456\" successfully"
	Sep 16 10:50:57 ha-770465 containerd[596]: time="2024-09-16T10:50:57.630816950Z" level=info msg="StopPodSandbox for \"c3c1ad84b80d11292f4cbee11f2b4cdf55236813b5755847e4a0852afe5f3456\" returns successfully"
	Sep 16 10:50:57 ha-770465 containerd[596]: time="2024-09-16T10:50:57.631124785Z" level=info msg="RemovePodSandbox for \"c3c1ad84b80d11292f4cbee11f2b4cdf55236813b5755847e4a0852afe5f3456\""
	Sep 16 10:50:57 ha-770465 containerd[596]: time="2024-09-16T10:50:57.631158450Z" level=info msg="Forcibly stopping sandbox \"c3c1ad84b80d11292f4cbee11f2b4cdf55236813b5755847e4a0852afe5f3456\""
	Sep 16 10:50:57 ha-770465 containerd[596]: time="2024-09-16T10:50:57.631224283Z" level=info msg="TearDown network for sandbox \"c3c1ad84b80d11292f4cbee11f2b4cdf55236813b5755847e4a0852afe5f3456\" successfully"
	Sep 16 10:50:57 ha-770465 containerd[596]: time="2024-09-16T10:50:57.635623739Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3c1ad84b80d11292f4cbee11f2b4cdf55236813b5755847e4a0852afe5f3456\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 16 10:50:57 ha-770465 containerd[596]: time="2024-09-16T10:50:57.635693203Z" level=info msg="RemovePodSandbox \"c3c1ad84b80d11292f4cbee11f2b4cdf55236813b5755847e4a0852afe5f3456\" returns successfully"
	Sep 16 10:50:58 ha-770465 containerd[596]: time="2024-09-16T10:50:58.435099040Z" level=info msg="CreateContainer within sandbox \"a349c5e481295f817e6d9c861004c2cfcfbeac62c09a388805592b1fc8447c24\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:4,}"
	Sep 16 10:50:58 ha-770465 containerd[596]: time="2024-09-16T10:50:58.449015717Z" level=info msg="CreateContainer within sandbox \"a349c5e481295f817e6d9c861004c2cfcfbeac62c09a388805592b1fc8447c24\" for &ContainerMetadata{Name:storage-provisioner,Attempt:4,} returns container id \"8bd373c635b74843a4f20d11674af4e068961e0f3d4040b0b015abc586a2cf25\""
	Sep 16 10:50:58 ha-770465 containerd[596]: time="2024-09-16T10:50:58.449559150Z" level=info msg="StartContainer for \"8bd373c635b74843a4f20d11674af4e068961e0f3d4040b0b015abc586a2cf25\""
	Sep 16 10:50:58 ha-770465 containerd[596]: time="2024-09-16T10:50:58.493656057Z" level=info msg="StartContainer for \"8bd373c635b74843a4f20d11674af4e068961e0f3d4040b0b015abc586a2cf25\" returns successfully"
	
	
	==> coredns [18d91709a26f9e13dcb53b1197ae2d2329d62e3389888ab804db2f314c6abb36] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:34880 - 49756 "HINFO IN 1821752596546931138.3115659817227240273. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012699579s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1641748868]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:50:13.490) (total time: 30001ms):
	Trace[1641748868]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:50:43.492)
	Trace[1641748868]: [30.001594481s] [30.001594481s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2096502610]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:50:13.491) (total time: 30001ms):
	Trace[2096502610]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:50:43.492)
	Trace[2096502610]: [30.001501974s] [30.001501974s] END
	[INFO] plugin/kubernetes: Trace[1364312763]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:50:13.490) (total time: 30001ms):
	Trace[1364312763]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:50:43.492)
	Trace[1364312763]: [30.001728788s] [30.001728788s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [81f453ca3f8d171840aacc686ad19952955400177043021ed6f8e79531037bec] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54507 - 15852 "HINFO IN 7992863260379517052.3058821443598282648. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.00991266s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2002612651]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:48:03.956) (total time: 30000ms):
	Trace[2002612651]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:48:33.957)
	Trace[2002612651]: [30.000909037s] [30.000909037s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[912886218]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:48:03.956) (total time: 30000ms):
	Trace[912886218]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:48:33.957)
	Trace[912886218]: [30.000626226s] [30.000626226s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1231085643]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:48:03.956) (total time: 30001ms):
	Trace[1231085643]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:48:33.957)
	Trace[1231085643]: [30.001617592s] [30.001617592s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [917ef16037c509fa5bcfbad0bd3aae289f62731b5435ad933b59b707dbe0320e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:44724 - 9916 "HINFO IN 5396320650353980330.3094598020936758036. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011058339s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1325993115]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:48:03.956) (total time: 30000ms):
	Trace[1325993115]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:48:33.957)
	Trace[1325993115]: [30.000839358s] [30.000839358s] END
	[INFO] plugin/kubernetes: Trace[1505747015]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:48:03.956) (total time: 30001ms):
	Trace[1505747015]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:48:33.957)
	Trace[1505747015]: [30.001035282s] [30.001035282s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[38045809]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:48:03.956) (total time: 30001ms):
	Trace[38045809]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:48:33.957)
	Trace[38045809]: [30.001098584s] [30.001098584s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [c41bfc02c715fb0b9e1bfae72b90e6e638b5100a2396728fc79097168b7fe022] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:37123 - 15631 "HINFO IN 1057155973742697744.7291177972930158542. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009048721s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[422152358]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:50:13.583) (total time: 30000ms):
	Trace[422152358]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:50:43.584)
	Trace[422152358]: [30.000677513s] [30.000677513s] END
	[INFO] plugin/kubernetes: Trace[718266208]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:50:13.584) (total time: 30000ms):
	Trace[718266208]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:50:43.584)
	Trace[718266208]: [30.000311275s] [30.000311275s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[90841173]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:50:13.584) (total time: 30000ms):
	Trace[90841173]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:50:43.585)
	Trace[90841173]: [30.000828207s] [30.000828207s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
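All four coredns instances show the same pattern: for about 30 seconds after each restart, the kubernetes plugin's reflectors cannot list Services, Namespaces, or EndpointSlices because dialing 10.96.0.1:443 (the in-cluster `kubernetes` Service ClusterIP) times out, and the [ERROR] lines are those initial lists giving up. A minimal sketch of that connectivity probe, under the assumption of the default ClusterIP and a run from inside a pod, looks like this:

```go
// Minimal sketch of the check the coredns errors above imply: dialing
// the in-cluster Service VIP with a deadline. 10.96.0.1:443 is the
// default ClusterIP of the kubernetes Service; run from inside a pod.
package vipprobe

import (
	"fmt"
	"net"
	"time"
)

func ProbeAPIServerVIP() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
	if err != nil {
		// Until kube-proxy reprograms the Service rules after a restart,
		// this fails just like the "dial tcp 10.96.0.1:443: i/o timeout"
		// lines in the coredns logs above.
		fmt.Println("VIP unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP reachable")
}
```

Consistent with that reading, each coredns block recovers on its own once kube-proxy finishes starting (the "Starting" kube-proxy events in the node descriptions below line up with the ends of the timeout windows).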
	
	
	==> describe nodes <==
	Name:               ha-770465
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-770465
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-770465
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_44_20_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:44:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-770465
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:50:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:50:12 +0000   Mon, 16 Sep 2024 10:44:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:50:12 +0000   Mon, 16 Sep 2024 10:44:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:50:12 +0000   Mon, 16 Sep 2024 10:44:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:50:12 +0000   Mon, 16 Sep 2024 10:50:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-770465
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5bb7ec1c71c48c38f73e0b0f07b7d6d
	  System UUID:                f3656390-934b-423a-8190-9f78053eddee
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-845rc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 coredns-7c65d6cfc9-9lw9q             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m35s
	  kube-system                 coredns-7c65d6cfc9-sbs22             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     6m35s
	  kube-system                 etcd-ha-770465                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m42s
	  kube-system                 kindnet-grjh8                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m35s
	  kube-system                 kube-apiserver-ha-770465             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-controller-manager-ha-770465    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 kube-proxy-gd2mt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-scheduler-ha-770465             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m40s
	  kube-system                 kube-vip-ha-770465                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 45s                    kube-proxy       
	  Normal   Starting                 2m55s                  kube-proxy       
	  Normal   Starting                 6m33s                  kube-proxy       
	  Normal   Starting                 6m40s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m40s                  kubelet          Node ha-770465 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  6m40s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 6m40s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    6m40s                  kubelet          Node ha-770465 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m40s                  kubelet          Node ha-770465 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m36s                  node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           6m14s                  node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           5m39s                  node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           3m52s                  node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   NodeAllocatableEnforced  3m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 3m11s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m11s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    3m10s (x7 over 3m11s)  kubelet          Node ha-770465 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m10s (x7 over 3m11s)  kubelet          Node ha-770465 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  3m10s (x8 over 3m11s)  kubelet          Node ha-770465 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m58s                  node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           2m58s                  node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           2m42s                  node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node ha-770465 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x7 over 62s)      kubelet          Node ha-770465 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x7 over 62s)      kubelet          Node ha-770465 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  62s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           50s                    node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	  Normal   RegisteredNode           49s                    node-controller  Node ha-770465 event: Registered Node ha-770465 in Controller
	
	
	Name:               ha-770465-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-770465-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-770465
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_44_39_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:44:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-770465-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:50:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:50:12 +0000   Mon, 16 Sep 2024 10:44:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:50:12 +0000   Mon, 16 Sep 2024 10:44:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:50:12 +0000   Mon, 16 Sep 2024 10:44:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:50:12 +0000   Mon, 16 Sep 2024 10:44:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-770465-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 c55a9907052f4689a548136d7892222f
	  System UUID:                0ec75a9b-7a96-466a-872e-476404dc1e5d
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-klfw4                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kube-system                 etcd-ha-770465-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         6m20s
	  kube-system                 kindnet-kht59                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      6m22s
	  kube-system                 kube-apiserver-ha-770465-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-ha-770465-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-4qgcs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-scheduler-ha-770465-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-vip-ha-770465-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 40s                    kube-proxy       
	  Normal   Starting                 6m18s                  kube-proxy       
	  Normal   Starting                 2m57s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  6m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m22s (x8 over 6m22s)  kubelet          Node ha-770465-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     6m22s (x7 over 6m22s)  kubelet          Node ha-770465-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    6m22s (x7 over 6m22s)  kubelet          Node ha-770465-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           6m21s                  node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   RegisteredNode           6m14s                  node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   RegisteredNode           5m39s                  node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   NodeHasSufficientPID     3m59s (x7 over 3m59s)  kubelet          Node ha-770465-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    3m59s (x7 over 3m59s)  kubelet          Node ha-770465-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 3m59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m59s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  3m59s (x8 over 3m59s)  kubelet          Node ha-770465-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           3m52s                  node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   NodeHasSufficientPID     3m8s (x7 over 3m8s)    kubelet          Node ha-770465-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    3m8s (x7 over 3m8s)    kubelet          Node ha-770465-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  3m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 3m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m8s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  3m8s (x8 over 3m8s)    kubelet          Node ha-770465-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m58s                  node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   RegisteredNode           2m58s                  node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   RegisteredNode           2m42s                  node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   Starting                 60s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)      kubelet          Node ha-770465-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x7 over 60s)      kubelet          Node ha-770465-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x7 over 60s)      kubelet          Node ha-770465-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  60s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           50s                    node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	  Normal   RegisteredNode           49s                    node-controller  Node ha-770465-m02 event: Registered Node ha-770465-m02 in Controller
	
	
	Name:               ha-770465-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-770465-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=ha-770465
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_46_20_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:46:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-770465-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:50:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:50:21 +0000   Mon, 16 Sep 2024 10:48:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:50:21 +0000   Mon, 16 Sep 2024 10:48:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:50:21 +0000   Mon, 16 Sep 2024 10:48:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:50:21 +0000   Mon, 16 Sep 2024 10:48:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-770465-m04
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 40f023cd9588437084dd162c87efdf56
	  System UUID:                82d9765a-9474-4a2c-ae78-19bbbf1ab150
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-hjjqt    0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kindnet-bflwn              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m38s
	  kube-system                 kube-proxy-78l2l           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m37s                  kube-proxy       
	  Normal   Starting                 34s                    kube-proxy       
	  Normal   Starting                 119s                   kube-proxy       
	  Normal   NodeHasNoDiskPressure    4m40s (x2 over 4m40s)  kubelet          Node ha-770465-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  4m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     4m40s (x2 over 4m40s)  kubelet          Node ha-770465-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  4m40s (x2 over 4m40s)  kubelet          Node ha-770465-m04 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           4m39s                  node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal   NodeReady                4m39s                  kubelet          Node ha-770465-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m39s                  node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal   RegisteredNode           4m36s                  node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal   RegisteredNode           3m52s                  node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal   RegisteredNode           2m58s                  node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal   RegisteredNode           2m58s                  node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal   RegisteredNode           2m42s                  node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal   NodeNotReady             2m18s                  node-controller  Node ha-770465-m04 status is now: NodeNotReady
	  Warning  CgroupV1                 2m8s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  2m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m8s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m2s (x8 over 2m8s)    kubelet          Node ha-770465-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m2s (x7 over 2m8s)    kubelet          Node ha-770465-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m2s (x7 over 2m8s)    kubelet          Node ha-770465-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Normal   RegisteredNode           49s                    node-controller  Node ha-770465-m04 event: Registered Node ha-770465-m04 in Controller
	  Warning  CgroupV1                 45s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  45s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 45s                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  38s (x7 over 45s)      kubelet          Node ha-770465-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    38s (x7 over 45s)      kubelet          Node ha-770465-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     38s (x7 over 45s)      kubelet          Node ha-770465-m04 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.095971] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +5.951420] net_ratelimit: 6 callbacks suppressed
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.256004] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000002] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +7.935271] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.003990] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000004] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.255992] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	
	
	==> etcd [07c7ffcfc3c4d5c30b8c269438bb247c7bfca89b403075a89434c0a1f50dd74e] <==
	{"level":"info","ts":"2024-09-16T10:50:04.744381Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"f23d31ee9f17f736"}
	{"level":"info","ts":"2024-09-16T10:50:04.747480Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 17455162631699035958)"}
	{"level":"info","ts":"2024-09-16T10:50:04.747552Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"1849ecf187a2b8dd","removed-remote-peer-urls":["https://192.168.49.4:2380"]}
	{"level":"info","ts":"2024-09-16T10:50:04.747574Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:50:04.747649Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:50:04.747730Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"unknown stream","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:50:04.747843Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:50:04.747970Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:50:04.748060Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:50:04.748098Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:50:04.748138Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"1849ecf187a2b8dd"}
	{"level":"info","ts":"2024-09-16T10:50:05.153736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc [logterm: 3, index: 2488, vote: aec36adc501070cc] cast MsgPreVote for f23d31ee9f17f736 [logterm: 3, index: 2488] at term 3"}
	{"level":"info","ts":"2024-09-16T10:50:05.155288Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc [term: 3] received a MsgVote message with higher term from f23d31ee9f17f736 [term: 4]"}
	{"level":"info","ts":"2024-09-16T10:50:05.155332Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 4"}
	{"level":"info","ts":"2024-09-16T10:50:05.155341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc [logterm: 3, index: 2488, vote: 0] cast MsgVote for f23d31ee9f17f736 [logterm: 3, index: 2488] at term 4"}
	{"level":"info","ts":"2024-09-16T10:50:05.158266Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader f23d31ee9f17f736 at term 4"}
	{"level":"info","ts":"2024-09-16T10:50:05.162202Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-770465 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:50:05.162509Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:50:05.162834Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:50:05.163676Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:50:05.163764Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:50:05.164824Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-16T10:50:05.164909Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:50:05.166613Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:50:05.166654Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [e87832bf428c0d5daf61e53f57c9813ace0d2d4a7ba9c30b2fee46730d2c6de1] <==
	{"level":"warn","ts":"2024-09-16T10:49:36.423051Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128031932341261118,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-16T10:49:36.430456Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"2.000907672s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-09-16T10:49:36.430592Z","caller":"traceutil/trace.go:171","msg":"trace[1012051487] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"2.001057219s","start":"2024-09-16T10:49:34.429519Z","end":"2024-09-16T10:49:36.430577Z","steps":["trace[1012051487] 'agreement among raft nodes before linearized reading'  (duration: 2.000895676s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:49:36.430667Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:49:34.429488Z","time spent":"2.00116147s","remote":"127.0.0.1:33250","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	2024/09/16 10:49:36 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-09-16T10:49:36.813636Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:49:29.812859Z","time spent":"7.00076915s","remote":"127.0.0.1:33876","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	{"level":"warn","ts":"2024-09-16T10:49:36.926041Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128031932341261118,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-16T10:49:37.427889Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128031932341261118,"retry-timeout":"500ms"}
	{"level":"info","ts":"2024-09-16T10:49:37.446269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:37.446312Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:37.446322Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:37.446341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc [logterm: 3, index: 2488] sent MsgPreVote request to f23d31ee9f17f736 at term 3"}
	{"level":"warn","ts":"2024-09-16T10:49:37.928016Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128031932341261118,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-16T10:49:38.428106Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128031932341261118,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-16T10:49:38.434505Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.999835178s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"","error":"context deadline exceeded"}
	{"level":"info","ts":"2024-09-16T10:49:38.434568Z","caller":"traceutil/trace.go:171","msg":"trace[1121695820] range","detail":"{range_begin:/registry/health; range_end:; }","duration":"1.999911829s","start":"2024-09-16T10:49:36.434644Z","end":"2024-09-16T10:49:38.434556Z","steps":["trace[1121695820] 'agreement among raft nodes before linearized reading'  (duration: 1.999833326s)"],"step_count":1}
	{"level":"warn","ts":"2024-09-16T10:49:38.434609Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:49:36.434605Z","time spent":"1.999991597s","remote":"127.0.0.1:33254","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":0,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-09-16T10:49:38.623029Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:49:31.622737Z","time spent":"7.000286647s","remote":"127.0.0.1:33876","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	{"level":"info","ts":"2024-09-16T10:49:38.923214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:38.923270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:38.923284Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-09-16T10:49:38.923300Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc [logterm: 3, index: 2488] sent MsgPreVote request to f23d31ee9f17f736 at term 3"}
	{"level":"warn","ts":"2024-09-16T10:49:38.928571Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128031932341261118,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-09-16T10:49:39.426037Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-16T10:49:32.425969Z","time spent":"7.000061849s","remote":"127.0.0.1:33462","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	{"level":"warn","ts":"2024-09-16T10:49:39.429488Z","caller":"etcdserver/v3_server.go:920","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128031932341261118,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 10:50:59 up 33 min,  0 users,  load average: 1.16, 1.47, 0.98
	Linux ha-770465 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [0ee20b8c8789adc13129c1dd9bbf0e03680faaa7a1039ad42d97dbdae47213fd] <==
	I0916 10:49:04.540761       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:49:04.540893       1 main.go:295] Handling node with IPs: map[192.168.49.4:{}]
	I0916 10:49:04.540902       1 main.go:322] Node ha-770465-m03 has CIDR [10.244.2.0/24] 
	I0916 10:49:04.540937       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:49:04.540944       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:49:04.540976       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:49:04.540983       1 main.go:299] handling current node
	I0916 10:49:14.543874       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:49:14.543914       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:49:14.544045       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:49:14.544057       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:49:14.544123       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:49:14.544134       1 main.go:299] handling current node
	I0916 10:49:24.544645       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:49:24.544680       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:49:24.544816       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:49:24.544829       1 main.go:299] handling current node
	I0916 10:49:24.544843       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:49:24.544850       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:49:34.543987       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:49:34.544018       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:49:34.544135       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:49:34.544145       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:49:34.544183       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:49:34.544190       1 main.go:299] handling current node
	
	
	==> kindnet [5617f96404065c173db93507b510df9d42e9ae9957bdd5f2debfb56993896b56] <==
	I0916 10:50:24.221057       1 main.go:299] handling current node
	I0916 10:50:24.223230       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:50:24.223260       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:50:24.223415       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0} 
	I0916 10:50:24.223528       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:50:24.223542       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:50:24.223596       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0} 
	I0916 10:50:34.227829       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:50:34.227863       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:50:34.227994       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:50:34.228004       1 main.go:299] handling current node
	I0916 10:50:34.228016       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:50:34.228020       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:50:44.220421       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:50:44.220467       1 main.go:299] handling current node
	I0916 10:50:44.220483       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:50:44.220491       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	I0916 10:50:44.220633       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:50:44.220647       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:50:54.220219       1 main.go:295] Handling node with IPs: map[192.168.49.5:{}]
	I0916 10:50:54.220254       1 main.go:322] Node ha-770465-m04 has CIDR [10.244.3.0/24] 
	I0916 10:50:54.220385       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0916 10:50:54.220398       1 main.go:299] handling current node
	I0916 10:50:54.220408       1 main.go:295] Handling node with IPs: map[192.168.49.3:{}]
	I0916 10:50:54.220412       1 main.go:322] Node ha-770465-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [7352793eaf7e3dcecd0a3311ddb9500bc8c4b3fe7077aeae92278547ba2ca174] <==
	I0916 10:50:06.751021       1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0916 10:50:06.751035       1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
	I0916 10:50:06.751052       1 crd_finalizer.go:269] Starting CRDFinalizer
	I0916 10:50:06.825701       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:50:06.839222       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:50:06.839417       1 policy_source.go:224] refreshing policies
	I0916 10:50:06.850616       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:50:06.850658       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:50:06.850801       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:50:06.850824       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:50:06.850848       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:50:06.850877       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:50:06.850893       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:50:06.850898       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:50:06.850904       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:50:06.850828       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:50:06.850978       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 10:50:06.851269       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:50:06.857550       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0916 10:50:06.858556       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:50:06.925136       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:50:07.755425       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0916 10:50:08.068916       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I0916 10:50:08.070222       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:50:08.076378       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [b715d9632d76bec5b9249626c0e047c8c7d8720a8f0f370d24d64c3acc85d01d] <==
	E0916 10:49:39.535392       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.535416       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.535489       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.535491       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.535509       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.535586       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.535634       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.536112       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.536143       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.536379       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.536391       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.537471       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.538279       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.538586       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.538603       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.538894       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.538953       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.539304       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.539318       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.539342       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.539353       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.539377       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.539396       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.539307       1 watcher.go:342] watch chan error: etcdserver: no leader
	E0916 10:49:39.539447       1 watcher.go:342] watch chan error: etcdserver: no leader
	
	
	==> kube-controller-manager [4a562d336c1706e425c8ce858242155970a39095a512cf3b2064ce89d4f54369] <==
	I0916 10:48:43.354892       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="135.793µs"
	I0916 10:48:46.754272       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:48:57.698245       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-770465-m04"
	I0916 10:48:57.698462       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:48:57.707954       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:49:01.688748       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:49:04.291789       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m03"
	I0916 10:49:04.302952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m03"
	I0916 10:49:04.345976       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="29.238623ms"
	I0916 10:49:04.398190       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.160869ms"
	I0916 10:49:04.409819       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="11.573808ms"
	I0916 10:49:04.410279       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="97.503µs"
	I0916 10:49:06.474706       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.139µs"
	I0916 10:49:07.079368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="79.471µs"
	I0916 10:49:07.083495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.63µs"
	I0916 10:49:07.422446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.388254ms"
	I0916 10:49:07.422581       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="75.3µs"
	I0916 10:49:08.519926       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-770465-m04"
	I0916 10:49:08.520048       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m03"
	E0916 10:49:08.543492       1 garbagecollector.go:399] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"coordination.k8s.io/v1\", Kind:\"Lease\", Name:\"ha-770465-m03\", UID:\"11a2730d-1724-4ba7-9d9a-9d2e6b786df0\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"kube-node-lease\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-770465-m03\", UID:\"277c51be-ae79-4a39-b8dc-63020200f29c\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: leases.coordination.k8s.io \"ha-770465-m03\" not found" logger="UnhandledError"
	E0916 10:49:21.619509       1 gc_controller.go:151] "Failed to get node" err="node \"ha-770465-m03\" not found" logger="pod-garbage-collector-controller" node="ha-770465-m03"
	E0916 10:49:21.619541       1 gc_controller.go:151] "Failed to get node" err="node \"ha-770465-m03\" not found" logger="pod-garbage-collector-controller" node="ha-770465-m03"
	E0916 10:49:21.619547       1 gc_controller.go:151] "Failed to get node" err="node \"ha-770465-m03\" not found" logger="pod-garbage-collector-controller" node="ha-770465-m03"
	E0916 10:49:21.619552       1 gc_controller.go:151] "Failed to get node" err="node \"ha-770465-m03\" not found" logger="pod-garbage-collector-controller" node="ha-770465-m03"
	E0916 10:49:21.619557       1 gc_controller.go:151] "Failed to get node" err="node \"ha-770465-m03\" not found" logger="pod-garbage-collector-controller" node="ha-770465-m03"
	
	
	==> kube-controller-manager [bc4aeaee8ca7f77d1a03e7915f302f8dadcb2ee9e534a211360e2b02159dcad8] <==
	I0916 10:50:13.663396       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="111.91µs"
	I0916 10:50:13.724919       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="175.927µs"
	I0916 10:50:13.734217       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tw8rq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tw8rq\": the object has been modified; please apply your changes to the latest version and try again"
	I0916 10:50:13.734549       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"5a99e4e7-454d-48ca-8c88-14bcdda1194b", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tw8rq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tw8rq": the object has been modified; please apply your changes to the latest version and try again
	I0916 10:50:14.902369       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465"
	I0916 10:50:17.762262       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="19.365146ms"
	I0916 10:50:17.762368       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.891µs"
	I0916 10:50:18.808148       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.038408ms"
	I0916 10:50:18.808271       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="68.178µs"
	I0916 10:50:21.199235       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-770465-m04"
	I0916 10:50:23.864960       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="18.804606ms"
	I0916 10:50:23.865061       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="58.367µs"
	I0916 10:50:24.916058       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="25.944112ms"
	I0916 10:50:24.916157       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="63.155µs"
	E0916 10:50:30.125624       1 gc_controller.go:151] "Failed to get node" err="node \"ha-770465-m03\" not found" logger="pod-garbage-collector-controller" node="ha-770465-m03"
	E0916 10:50:30.125671       1 gc_controller.go:151] "Failed to get node" err="node \"ha-770465-m03\" not found" logger="pod-garbage-collector-controller" node="ha-770465-m03"
	E0916 10:50:30.125683       1 gc_controller.go:151] "Failed to get node" err="node \"ha-770465-m03\" not found" logger="pod-garbage-collector-controller" node="ha-770465-m03"
	E0916 10:50:30.125690       1 gc_controller.go:151] "Failed to get node" err="node \"ha-770465-m03\" not found" logger="pod-garbage-collector-controller" node="ha-770465-m03"
	E0916 10:50:30.125697       1 gc_controller.go:151] "Failed to get node" err="node \"ha-770465-m03\" not found" logger="pod-garbage-collector-controller" node="ha-770465-m03"
	I0916 10:50:52.772335       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="19.605994ms"
	I0916 10:50:52.772495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="85.153µs"
	I0916 10:50:52.795038       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="13.248787ms"
	I0916 10:50:52.795353       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-tw8rq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-tw8rq\": the object has been modified; please apply your changes to the latest version and try again"
	I0916 10:50:52.795950       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.386µs"
	I0916 10:50:52.796306       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"5a99e4e7-454d-48ca-8c88-14bcdda1194b", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-tw8rq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-tw8rq": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [4a76c9091f7d7a6caa1b53a3d6a309ee6b15e2c44ae7a65869fb7c2260dc2271] <==
	I0916 10:50:13.347702       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:50:13.564486       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:50:13.564582       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:50:13.632810       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:50:13.632880       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:50:13.649966       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:50:13.650730       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:50:13.651001       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:50:13.657291       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:50:13.657509       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:50:13.657639       1 config.go:328] "Starting node config controller"
	I0916 10:50:13.657997       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:50:13.657667       1 config.go:199] "Starting service config controller"
	I0916 10:50:13.658215       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:50:13.758752       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:50:13.758736       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:50:13.758867       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [de5c6dcf960e9561503f4b0b4b3900a6a55e051755584f47521977a698ad11bb] <==
	I0916 10:48:03.746816       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:48:04.024990       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0916 10:48:04.025076       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:48:04.045298       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:48:04.045353       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:48:04.047216       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:48:04.047660       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:48:04.047689       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:48:04.048772       1 config.go:199] "Starting service config controller"
	I0916 10:48:04.048791       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:48:04.048824       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:48:04.048826       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:48:04.048889       1 config.go:328] "Starting node config controller"
	I0916 10:48:04.048897       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:48:04.149460       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:48:04.149502       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:48:04.149599       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [5b1ed4abbc0c28b550363a11f4cfcd62578f9d3aa8a9a13fe78007375871de1a] <==
	I0916 10:50:05.122759       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:50:06.764352       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:50:06.764392       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:50:06.764405       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:50:06.764414       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:50:06.832896       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:50:06.832920       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:50:06.836358       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:50:06.836400       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:50:06.836504       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:50:06.836562       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:50:06.937315       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [bcd02f03466d85592977b36046584eb0eb24d4040a9a28d2400852992bb02a91] <==
	I0916 10:47:57.021055       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:47:58.320429       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:47:58.320473       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:47:58.320485       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:47:58.320494       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:47:58.344793       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:47:58.344842       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:47:58.347539       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:47:58.347685       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:47:58.347706       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:47:58.347718       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:47:58.448378       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:50:11 ha-770465 kubelet[737]: E0916 10:50:11.868190     737 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-770465\" not found"
	Sep 16 10:50:11 ha-770465 kubelet[737]: E0916 10:50:11.969211     737 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-770465\" not found"
	Sep 16 10:50:12 ha-770465 kubelet[737]: E0916 10:50:12.069807     737 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ha-770465\" not found"
	Sep 16 10:50:12 ha-770465 kubelet[737]: I0916 10:50:12.171955     737 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
	Sep 16 10:50:12 ha-770465 kubelet[737]: I0916 10:50:12.428373     737 apiserver.go:52] "Watching apiserver"
	Sep 16 10:50:12 ha-770465 kubelet[737]: I0916 10:50:12.515673     737 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 10:50:12 ha-770465 kubelet[737]: I0916 10:50:12.549118     737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/98658171-1486-44a9-8a20-0d77ea019206-xtables-lock\") pod \"kindnet-grjh8\" (UID: \"98658171-1486-44a9-8a20-0d77ea019206\") " pod="kube-system/kindnet-grjh8"
	Sep 16 10:50:12 ha-770465 kubelet[737]: I0916 10:50:12.549578     737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/98658171-1486-44a9-8a20-0d77ea019206-lib-modules\") pod \"kindnet-grjh8\" (UID: \"98658171-1486-44a9-8a20-0d77ea019206\") " pod="kube-system/kindnet-grjh8"
	Sep 16 10:50:12 ha-770465 kubelet[737]: I0916 10:50:12.549610     737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc3bf04d-635a-4264-883b-2fd72cac2e24-xtables-lock\") pod \"kube-proxy-gd2mt\" (UID: \"fc3bf04d-635a-4264-883b-2fd72cac2e24\") " pod="kube-system/kube-proxy-gd2mt"
	Sep 16 10:50:12 ha-770465 kubelet[737]: I0916 10:50:12.549630     737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc3bf04d-635a-4264-883b-2fd72cac2e24-lib-modules\") pod \"kube-proxy-gd2mt\" (UID: \"fc3bf04d-635a-4264-883b-2fd72cac2e24\") " pod="kube-system/kube-proxy-gd2mt"
	Sep 16 10:50:12 ha-770465 kubelet[737]: I0916 10:50:12.549657     737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/98658171-1486-44a9-8a20-0d77ea019206-cni-cfg\") pod \"kindnet-grjh8\" (UID: \"98658171-1486-44a9-8a20-0d77ea019206\") " pod="kube-system/kindnet-grjh8"
	Sep 16 10:50:12 ha-770465 kubelet[737]: I0916 10:50:12.549712     737 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/cf470925-4874-4744-8015-700e93ab924f-tmp\") pod \"storage-provisioner\" (UID: \"cf470925-4874-4744-8015-700e93ab924f\") " pod="kube-system/storage-provisioner"
	Sep 16 10:50:12 ha-770465 kubelet[737]: I0916 10:50:12.624134     737 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 10:50:17 ha-770465 kubelet[737]: E0916 10:50:17.482822     737 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 10:50:17 ha-770465 kubelet[737]: E0916 10:50:17.482866     737 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 10:50:27 ha-770465 kubelet[737]: E0916 10:50:27.506133     737 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 10:50:27 ha-770465 kubelet[737]: E0916 10:50:27.506226     737 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 10:50:37 ha-770465 kubelet[737]: E0916 10:50:37.527545     737 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 10:50:37 ha-770465 kubelet[737]: E0916 10:50:37.527594     737 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 10:50:43 ha-770465 kubelet[737]: I0916 10:50:43.698657     737 scope.go:117] "RemoveContainer" containerID="946241353e03d16a05ed42c23006cfa465d022f50c7580d1bec22425ee59a4ac"
	Sep 16 10:50:43 ha-770465 kubelet[737]: I0916 10:50:43.699074     737 scope.go:117] "RemoveContainer" containerID="a913cc50ff03e6b56bb8bf1b16c2af687a48e35b7d5736f3833ed2965d68e521"
	Sep 16 10:50:43 ha-770465 kubelet[737]: E0916 10:50:43.699259     737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cf470925-4874-4744-8015-700e93ab924f)\"" pod="kube-system/storage-provisioner" podUID="cf470925-4874-4744-8015-700e93ab924f"
	Sep 16 10:50:47 ha-770465 kubelet[737]: E0916 10:50:47.545387     737 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 10:50:47 ha-770465 kubelet[737]: E0916 10:50:47.545454     737 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 10:50:58 ha-770465 kubelet[737]: I0916 10:50:58.432803     737 scope.go:117] "RemoveContainer" containerID="a913cc50ff03e6b56bb8bf1b16c2af687a48e35b7d5736f3833ed2965d68e521"
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-770465 -n ha-770465
helpers_test.go:261: (dbg) Run:  kubectl --context ha-770465 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context ha-770465 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (454.698µs)
helpers_test.go:263: kubectl --context ha-770465 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
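Note: "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel refused to execute the kubectl binary itself: typically an architecture mismatch between the binary and the host, or a truncated/corrupted download. Two quick checks on the affected host (hypothetical commands, not captured in this run):

	file /usr/local/bin/kubectl   # shows the binary's target architecture (or "data" if corrupted)
	uname -m                      # shows the host architecture (x86_64 on this agent)

If the two disagree, every test in this report that shells out to kubectl fails with this same error.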
--- FAIL: TestMultiControlPlane/serial/RestartCluster (68.91s)

TestMultiNode/serial/MultiNodeLabels (2s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-079070 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-079070 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": fork/exec /usr/local/bin/kubectl: exec format error (498.599µs)
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-079070 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": fork/exec /usr/local/bin/kubectl: exec format error
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-079070 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
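Note: the decode failure at multinode_test.go:230 is a consequence of the exec failure above it: kubectl never ran, so the jsonpath template produced no output, and unmarshalling an empty byte slice is exactly what yields Go's "unexpected end of JSON input" error.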
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-079070
helpers_test.go:235: (dbg) docker inspect multinode-079070:
-- stdout --
	[
	    {
	        "Id": "1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2",
	        "Created": "2024-09-16T10:56:12.200290899Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 157680,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:56:12.309897613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/hosts",
	        "LogPath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2-json.log",
	        "Name": "/multinode-079070",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-079070:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-079070",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-079070",
	                "Source": "/var/lib/docker/volumes/multinode-079070/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-079070",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-079070",
	                "name.minikube.sigs.k8s.io": "multinode-079070",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a560079fd7a6ca102362f0cdf2062b82a677a42b7b5efbb4988b26509a1f350a",
	            "SandboxKey": "/var/run/docker/netns/a560079fd7a6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32910"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-079070": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null,
	                    "NetworkID": "49585fce923a48b44636990469ad4decadcc5b1b88fcdd63ced7ebb1e3971b52",
	                    "EndpointID": "01c8b09cda6dc7f6b7f0ccee5666ccccb7fa2d2fc265a3505bf1c12e7ef0dc1b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "multinode-079070",
	                        "1f3af6522540"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
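Note: the "Ports" map in the inspect output above is how minikube reaches this node: each container port is published on an ephemeral 127.0.0.1 port (22/tcp on 32908 for SSH, 8443/tcp on 32911 for the API server). A single mapping can be read back with the same Go template the minikube logs below use, e.g.:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' multinode-079070
	# prints 32908 for this container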
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-079070 -n multinode-079070
helpers_test.go:244: <<< TestMultiNode/serial/MultiNodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/MultiNodeLabels]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-079070 logs -n 25: (1.119208661s)
helpers_test.go:252: TestMultiNode/serial/MultiNodeLabels logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-609600 ssh -- ls                    | mount-start-2-609600 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-595986                           | mount-start-1-595986 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-609600 ssh -- ls                    | mount-start-2-609600 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-609600                           | mount-start-2-609600 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:55 UTC |
	| start   | -p mount-start-2-609600                           | mount-start-2-609600 | jenkins | v1.34.0 | 16 Sep 24 10:55 UTC | 16 Sep 24 10:56 UTC |
	| ssh     | mount-start-2-609600 ssh -- ls                    | mount-start-2-609600 | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:56 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-609600                           | mount-start-2-609600 | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:56 UTC |
	| delete  | -p mount-start-1-595986                           | mount-start-1-595986 | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:56 UTC |
	| start   | -p multinode-079070                               | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:56 UTC | 16 Sep 24 10:57 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                    |                      |         |         |                     |                     |
	| kubectl | -p multinode-079070 -- apply -f                   | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-079070 -- rollout                    | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-079070 -- get pods -o                | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-079070 -- get pods -o                | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-079070 -- exec                       | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | busybox-7dff88458-pjlvx --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-079070 -- exec                       | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | busybox-7dff88458-x6h7b --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-079070 -- exec                       | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | busybox-7dff88458-pjlvx --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-079070 -- exec                       | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | busybox-7dff88458-x6h7b --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-079070 -- exec                       | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | busybox-7dff88458-pjlvx -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-079070 -- exec                       | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | busybox-7dff88458-x6h7b -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-079070 -- get pods -o                | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-079070 -- exec                       | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | busybox-7dff88458-pjlvx                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-079070 -- exec                       | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | busybox-7dff88458-pjlvx -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.67.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-079070 -- exec                       | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | busybox-7dff88458-x6h7b                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-079070 -- exec                       | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | busybox-7dff88458-x6h7b -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.67.1                         |                      |         |         |                     |                     |
	| node    | add -p multinode-079070 -v 3                      | multinode-079070     | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
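	Note: the "kubectl" rows in the Audit table above ran through minikube's own passthrough (out/minikube-linux-amd64 kubectl -p multinode-079070 -- ...), which uses minikube's cached kubectl rather than /usr/local/bin/kubectl; that would explain why those commands completed while the test's direct kubectl invocations failed with "exec format error".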
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:56:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:56:06.855156  157008 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:56:06.855263  157008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:56:06.855270  157008 out.go:358] Setting ErrFile to fd 2...
	I0916 10:56:06.855274  157008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:56:06.855452  157008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:56:06.856103  157008 out.go:352] Setting JSON to false
	I0916 10:56:06.857043  157008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2311,"bootTime":1726481856,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:56:06.857147  157008 start.go:139] virtualization: kvm guest
	I0916 10:56:06.859338  157008 out.go:177] * [multinode-079070] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:56:06.861126  157008 notify.go:220] Checking for updates...
	I0916 10:56:06.861141  157008 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:56:06.862675  157008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:56:06.864295  157008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:56:06.865662  157008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:56:06.866835  157008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:56:06.868151  157008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:56:06.869617  157008 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:56:06.892121  157008 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:56:06.892220  157008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:56:06.943619  157008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:56:06.934377277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:56:06.943724  157008 docker.go:318] overlay module found
	I0916 10:56:06.945405  157008 out.go:177] * Using the docker driver based on user configuration
	I0916 10:56:06.946509  157008 start.go:297] selected driver: docker
	I0916 10:56:06.946521  157008 start.go:901] validating driver "docker" against <nil>
	I0916 10:56:06.946533  157008 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:56:06.947259  157008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:56:06.995087  157008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:56:06.986178566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:56:06.995247  157008 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:56:06.995479  157008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:56:06.997194  157008 out.go:177] * Using Docker driver with root privileges
	I0916 10:56:06.998684  157008 cni.go:84] Creating CNI manager for ""
	I0916 10:56:06.998744  157008 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 10:56:06.998754  157008 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:56:06.998838  157008 start.go:340] cluster config:
	{Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:56:07.000232  157008 out.go:177] * Starting "multinode-079070" primary control-plane node in "multinode-079070" cluster
	I0916 10:56:07.001648  157008 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:56:07.002874  157008 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:56:07.004023  157008 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:56:07.004052  157008 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:56:07.004064  157008 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:56:07.004088  157008 cache.go:56] Caching tarball of preloaded images
	I0916 10:56:07.004166  157008 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:56:07.004180  157008 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:56:07.004506  157008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 10:56:07.004528  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json: {Name:mk1da92c3cc279d70ea91ed70bd44957fd57d510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 10:56:07.023941  157008 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:56:07.023962  157008 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:56:07.024032  157008 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:56:07.024049  157008 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:56:07.024053  157008 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:56:07.024059  157008 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:56:07.024066  157008 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:56:07.025089  157008 image.go:273] response: 
	I0916 10:56:07.076745  157008 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:56:07.076789  157008 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:56:07.076819  157008 start.go:360] acquireMachinesLock for multinode-079070: {Name:mka8d048a8e19e1d22189c5e81470c7f2336c084 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:56:07.076924  157008 start.go:364] duration metric: took 86.301µs to acquireMachinesLock for "multinode-079070"
	I0916 10:56:07.076948  157008 start.go:93] Provisioning new machine with config: &{Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:56:07.077038  157008 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:56:07.078848  157008 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:56:07.079090  157008 start.go:159] libmachine.API.Create for "multinode-079070" (driver="docker")
	I0916 10:56:07.079122  157008 client.go:168] LocalClient.Create starting
	I0916 10:56:07.079181  157008 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:56:07.079213  157008 main.go:141] libmachine: Decoding PEM data...
	I0916 10:56:07.079230  157008 main.go:141] libmachine: Parsing certificate...
	I0916 10:56:07.079285  157008 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:56:07.079306  157008 main.go:141] libmachine: Decoding PEM data...
	I0916 10:56:07.079316  157008 main.go:141] libmachine: Parsing certificate...
	I0916 10:56:07.079616  157008 cli_runner.go:164] Run: docker network inspect multinode-079070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:56:07.096186  157008 cli_runner.go:211] docker network inspect multinode-079070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:56:07.096287  157008 network_create.go:284] running [docker network inspect multinode-079070] to gather additional debugging logs...
	I0916 10:56:07.096307  157008 cli_runner.go:164] Run: docker network inspect multinode-079070
	W0916 10:56:07.112374  157008 cli_runner.go:211] docker network inspect multinode-079070 returned with exit code 1
	I0916 10:56:07.112412  157008 network_create.go:287] error running [docker network inspect multinode-079070]: docker network inspect multinode-079070: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-079070 not found
	I0916 10:56:07.112424  157008 network_create.go:289] output of [docker network inspect multinode-079070]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-079070 not found
	
	** /stderr **
	I0916 10:56:07.112556  157008 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:56:07.129968  157008 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c95c64bb41bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bd:76:ab:c4} reservation:<nil>}
	I0916 10:56:07.130501  157008 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fad43aa9929b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:3f:61:fd} reservation:<nil>}
	I0916 10:56:07.131035  157008 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a9df90}
	I0916 10:56:07.131069  157008 network_create.go:124] attempt to create docker network multinode-079070 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0916 10:56:07.131117  157008 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-079070 multinode-079070
	I0916 10:56:07.190981  157008 network_create.go:108] docker network multinode-079070 192.168.67.0/24 created
	I0916 10:56:07.191010  157008 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-079070" container
	I0916 10:56:07.191075  157008 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:56:07.207629  157008 cli_runner.go:164] Run: docker volume create multinode-079070 --label name.minikube.sigs.k8s.io=multinode-079070 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:56:07.224927  157008 oci.go:103] Successfully created a docker volume multinode-079070
	I0916 10:56:07.225051  157008 cli_runner.go:164] Run: docker run --rm --name multinode-079070-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-079070 --entrypoint /usr/bin/test -v multinode-079070:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:56:07.759087  157008 oci.go:107] Successfully prepared a docker volume multinode-079070
	I0916 10:56:07.759157  157008 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:56:07.759182  157008 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:56:07.759253  157008 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-079070:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:56:12.136896  157008 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-079070:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.377562571s)
	I0916 10:56:12.136945  157008 kic.go:203] duration metric: took 4.377757648s to extract preloaded images to volume ...
	W0916 10:56:12.137124  157008 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:56:12.137277  157008 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:56:12.185030  157008 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-079070 --name multinode-079070 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-079070 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-079070 --network multinode-079070 --ip 192.168.67.2 --volume multinode-079070:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:56:12.503339  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Running}}
	I0916 10:56:12.521228  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:56:12.539581  157008 cli_runner.go:164] Run: docker exec multinode-079070 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:56:12.584105  157008 oci.go:144] the created container "multinode-079070" has a running status.
	I0916 10:56:12.584140  157008 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa...
	I0916 10:56:12.775237  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:56:12.775302  157008 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:56:12.799502  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:56:12.818341  157008 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:56:12.818363  157008 kic_runner.go:114] Args: [docker exec --privileged multinode-079070 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:56:12.930383  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:56:12.950563  157008 machine.go:93] provisionDockerMachine start ...
	I0916 10:56:12.950646  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:12.973362  157008 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:12.973701  157008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I0916 10:56:12.973720  157008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:56:13.143400  157008 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070
	
	I0916 10:56:13.143431  157008 ubuntu.go:169] provisioning hostname "multinode-079070"
	I0916 10:56:13.143493  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:13.167054  157008 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:13.167313  157008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I0916 10:56:13.167337  157008 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-079070 && echo "multinode-079070" | sudo tee /etc/hostname
	I0916 10:56:13.318706  157008 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070
	
	I0916 10:56:13.318789  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:13.335511  157008 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:13.335747  157008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I0916 10:56:13.335776  157008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-079070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-079070/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-079070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:56:13.468115  157008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:56:13.468148  157008 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:56:13.468177  157008 ubuntu.go:177] setting up certificates
	I0916 10:56:13.468190  157008 provision.go:84] configureAuth start
	I0916 10:56:13.468242  157008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070
	I0916 10:56:13.485691  157008 provision.go:143] copyHostCerts
	I0916 10:56:13.485731  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:56:13.485767  157008 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:56:13.485777  157008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:56:13.485837  157008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:56:13.485915  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:56:13.485934  157008 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:56:13.485941  157008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:56:13.485967  157008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:56:13.486014  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:56:13.486033  157008 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:56:13.486039  157008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:56:13.486060  157008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:56:13.486112  157008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.multinode-079070 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-079070]
	I0916 10:56:13.600716  157008 provision.go:177] copyRemoteCerts
	I0916 10:56:13.600789  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:56:13.600824  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:13.617706  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:56:13.712339  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:56:13.712404  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:56:13.734571  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:56:13.734631  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 10:56:13.756544  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:56:13.756620  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:56:13.778902  157008 provision.go:87] duration metric: took 310.700375ms to configureAuth
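
copyHostCerts above follows a remove-then-copy pattern for each of ca.pem, cert.pem and key.pem. A minimal Go sketch of that pattern, assuming the behavior the "found ..., removing ..." / "cp: ... --> ..." lines suggest; syncCert is a hypothetical helper, not minikube's exec_runner:

    // A minimal sketch: delete any stale copy at the destination, then
    // write the source bytes fresh, mirroring the log's remove-then-cp.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func syncCert(src, dstDir string) error {
        dst := filepath.Join(dstDir, filepath.Base(src))
        if _, err := os.Stat(dst); err == nil {
            // "found <dst>, removing ..."
            if err := os.Remove(dst); err != nil {
                return err
            }
        }
        data, err := os.ReadFile(src)
        if err != nil {
            return err
        }
        // "cp: <src> --> <dst> (<n> bytes)"
        return os.WriteFile(dst, data, 0o600)
    }

    func main() {
        fmt.Println(syncCert("certs/ca.pem", "."))
    }
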
	I0916 10:56:13.778931  157008 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:56:13.779104  157008 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:56:13.779116  157008 machine.go:96] duration metric: took 828.530064ms to provisionDockerMachine
	I0916 10:56:13.779125  157008 client.go:171] duration metric: took 6.699995187s to LocalClient.Create
	I0916 10:56:13.779164  157008 start.go:167] duration metric: took 6.700059073s to libmachine.API.Create "multinode-079070"
	I0916 10:56:13.779180  157008 start.go:293] postStartSetup for "multinode-079070" (driver="docker")
	I0916 10:56:13.779193  157008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:56:13.779247  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:56:13.779295  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:13.796444  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:56:13.892329  157008 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:56:13.895193  157008 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:56:13.895212  157008 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:56:13.895218  157008 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:56:13.895223  157008 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:56:13.895228  157008 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:56:13.895232  157008 command_runner.go:130] > ID=ubuntu
	I0916 10:56:13.895246  157008 command_runner.go:130] > ID_LIKE=debian
	I0916 10:56:13.895252  157008 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:56:13.895257  157008 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:56:13.895262  157008 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:56:13.895271  157008 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:56:13.895277  157008 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:56:13.895332  157008 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:56:13.895355  157008 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:56:13.895362  157008 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:56:13.895368  157008 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:56:13.895379  157008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:56:13.895426  157008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:56:13.895517  157008 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:56:13.895529  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:56:13.895631  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:56:13.903481  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:56:13.925525  157008 start.go:296] duration metric: took 146.331045ms for postStartSetup
	I0916 10:56:13.925862  157008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070
	I0916 10:56:13.942731  157008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 10:56:13.943004  157008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:56:13.943050  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:13.959728  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:56:14.048589  157008 command_runner.go:130] > 31%
	I0916 10:56:14.048685  157008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:56:14.052463  157008 command_runner.go:130] > 203G
	I0916 10:56:14.052661  157008 start.go:128] duration metric: took 6.975610001s to createHost
	I0916 10:56:14.052678  157008 start.go:83] releasing machines lock for "multinode-079070", held for 6.975744478s
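
The two df probes just above report /var usage as a percentage and free space in 1G blocks before the host is released. A hedged sketch of the second probe; freeGB is a hypothetical helper and the shell pipeline is copied from the log:

    // Run df with 1G blocks on a path and keep awk's second-row,
    // fourth-column output (available space).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func freeGB(path string) (string, error) {
        out, err := exec.Command("sh", "-c",
            fmt.Sprintf(`df -BG %s | awk 'NR==2{print $4}'`, path)).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        if g, err := freeGB("/var"); err == nil {
            fmt.Println(g) // e.g. "203G", matching the log
        }
    }
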
	I0916 10:56:14.052730  157008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070
	I0916 10:56:14.069077  157008 ssh_runner.go:195] Run: cat /version.json
	I0916 10:56:14.069154  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:14.069094  157008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:56:14.069266  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:14.086861  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:56:14.087891  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:56:14.175251  157008 command_runner.go:130] > {"iso_version": "v1.34.0-1726281733-19643", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "f890713149c79cf50e25c13e6a5c0470aa0f0450"}
	I0916 10:56:14.175374  157008 ssh_runner.go:195] Run: systemctl --version
	I0916 10:56:14.250692  157008 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:56:14.250757  157008 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0916 10:56:14.250786  157008 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0916 10:56:14.250864  157008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:56:14.255286  157008 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0916 10:56:14.255315  157008 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:56:14.255324  157008 command_runner.go:130] > Device: 35h/53d	Inode: 534561      Links: 1
	I0916 10:56:14.255332  157008 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:56:14.255341  157008 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:56:14.255348  157008 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:56:14.255356  157008 command_runner.go:130] > Change: 2024-09-16 10:23:17.243457347 +0000
	I0916 10:56:14.255364  157008 command_runner.go:130] >  Birth: 2024-09-16 10:23:17.243457347 +0000
	I0916 10:56:14.255583  157008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:56:14.278852  157008 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:56:14.278929  157008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:56:14.304421  157008 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0916 10:56:14.304476  157008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
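
Here minikube sidelines competing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, as the find ... -exec "sudo mv {} {}.mk_disabled" run above shows. A simplified local Go equivalent, for illustration only (the real step runs over SSH on the node):

    // Rename bridge/podman CNI conf files so the runtime stops loading
    // them; returns the list of files that were disabled.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func disableBridgeConfs(dir string) ([]string, error) {
        var disabled []string
        for _, pat := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return nil, err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already disabled
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, m)
            }
        }
        return disabled, nil
    }

    func main() {
        d, err := disableBridgeConfs("/etc/cni/net.d")
        fmt.Println(d, err)
    }
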
	I0916 10:56:14.304486  157008 start.go:495] detecting cgroup driver to use...
	I0916 10:56:14.304515  157008 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:56:14.304550  157008 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:56:14.315391  157008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:56:14.325823  157008 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:56:14.325875  157008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:56:14.337764  157008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:56:14.349981  157008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:56:14.424880  157008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:56:14.437969  157008 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0916 10:56:14.505325  157008 docker.go:233] disabling docker service ...
	I0916 10:56:14.505381  157008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:56:14.522467  157008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:56:14.532669  157008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:56:14.612746  157008 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0916 10:56:14.612821  157008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:56:14.693144  157008 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0916 10:56:14.693227  157008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:56:14.703590  157008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:56:14.716972  157008 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0916 10:56:14.717841  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:56:14.726833  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:56:14.735526  157008 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:56:14.735593  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:56:14.744272  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:56:14.753048  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:56:14.762010  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:56:14.771227  157008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:56:14.780036  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:56:14.789074  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:56:14.797916  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
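
The run of sed edits above rewrites /etc/containerd/config.toml in place: pinning the sandbox image, forcing restrict_oom_score_adj and SystemdCgroup to false, normalizing the runc runtime type, and so on. A sketch of driving a subset of those substitutions from Go; patchContainerdConfig is a hypothetical wrapper and the sed rules are copied verbatim from the log:

    // Apply a list of in-place sed substitutions to config.toml, stopping
    // at the first failure.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func patchContainerdConfig() error {
        rules := []string{
            `sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml`,
            `sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml`,
            `sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
        }
        for _, r := range rules {
            if out, err := exec.Command("sh", "-c", r).CombinedOutput(); err != nil {
                return fmt.Errorf("%s: %v (%s)", r, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(patchContainerdConfig())
    }
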
	I0916 10:56:14.807028  157008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:56:14.813737  157008 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:56:14.814419  157008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:56:14.821900  157008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:14.900926  157008 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:56:14.994947  157008 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:56:14.995012  157008 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:56:14.998453  157008 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0916 10:56:14.998478  157008 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:56:14.998488  157008 command_runner.go:130] > Device: 40h/64d	Inode: 175         Links: 1
	I0916 10:56:14.998507  157008 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:56:14.998519  157008 command_runner.go:130] > Access: 2024-09-16 10:56:14.961807921 +0000
	I0916 10:56:14.998527  157008 command_runner.go:130] > Modify: 2024-09-16 10:56:14.961807921 +0000
	I0916 10:56:14.998531  157008 command_runner.go:130] > Change: 2024-09-16 10:56:14.961807921 +0000
	I0916 10:56:14.998535  157008 command_runner.go:130] >  Birth: -
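
After restarting containerd, minikube waits up to 60s for the socket path, which the stat output above confirms exists. A minimal polling sketch of such a wait; the 500ms interval is an assumption, the path comes from the log:

    // Poll for a Unix socket at path until it appears or the deadline
    // passes.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil // socket exists, as the stat above shows
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
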
	I0916 10:56:14.998552  157008 start.go:563] Will wait 60s for crictl version
	I0916 10:56:14.998604  157008 ssh_runner.go:195] Run: which crictl
	I0916 10:56:15.001804  157008 command_runner.go:130] > /usr/bin/crictl
	I0916 10:56:15.001870  157008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:56:15.031721  157008 command_runner.go:130] > Version:  0.1.0
	I0916 10:56:15.031759  157008 command_runner.go:130] > RuntimeName:  containerd
	I0916 10:56:15.031768  157008 command_runner.go:130] > RuntimeVersion:  1.7.22
	I0916 10:56:15.031775  157008 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:56:15.033635  157008 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:56:15.033711  157008 ssh_runner.go:195] Run: containerd --version
	I0916 10:56:15.054975  157008 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:56:15.055045  157008 ssh_runner.go:195] Run: containerd --version
	I0916 10:56:15.076262  157008 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:56:15.078620  157008 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:56:15.080057  157008 cli_runner.go:164] Run: docker network inspect multinode-079070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:56:15.096411  157008 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:56:15.099798  157008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:56:15.109798  157008 kubeadm.go:883] updating cluster {Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:56:15.109910  157008 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:56:15.109953  157008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:56:15.138875  157008 command_runner.go:130] > {
	I0916 10:56:15.138898  157008 command_runner.go:130] >   "images": [
	I0916 10:56:15.138905  157008 command_runner.go:130] >     {
	I0916 10:56:15.138917  157008 command_runner.go:130] >       "id": "sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:56:15.138931  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.138940  157008 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:56:15.138946  157008 command_runner.go:130] >       ],
	I0916 10:56:15.138953  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.138963  157008 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:56:15.138970  157008 command_runner.go:130] >       ],
	I0916 10:56:15.138976  157008 command_runner.go:130] >       "size": "36793393",
	I0916 10:56:15.138985  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.138992  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.139002  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139008  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139017  157008 command_runner.go:130] >     },
	I0916 10:56:15.139022  157008 command_runner.go:130] >     {
	I0916 10:56:15.139037  157008 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:56:15.139047  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139054  157008 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:56:15.139060  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139066  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139082  157008 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0916 10:56:15.139090  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139096  157008 command_runner.go:130] >       "size": "9058936",
	I0916 10:56:15.139106  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.139112  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.139121  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139128  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139135  157008 command_runner.go:130] >     },
	I0916 10:56:15.139139  157008 command_runner.go:130] >     {
	I0916 10:56:15.139148  157008 command_runner.go:130] >       "id": "sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:56:15.139157  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139171  157008 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:56:15.139180  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139186  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139200  157008 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"
	I0916 10:56:15.139214  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139223  157008 command_runner.go:130] >       "size": "18562039",
	I0916 10:56:15.139227  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.139231  157008 command_runner.go:130] >       "username": "nonroot",
	I0916 10:56:15.139241  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139248  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139257  157008 command_runner.go:130] >     },
	I0916 10:56:15.139262  157008 command_runner.go:130] >     {
	I0916 10:56:15.139273  157008 command_runner.go:130] >       "id": "sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:56:15.139282  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139291  157008 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:56:15.139299  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139305  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139321  157008 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:56:15.139330  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139338  157008 command_runner.go:130] >       "size": "56909194",
	I0916 10:56:15.139346  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.139353  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.139379  157008 command_runner.go:130] >       },
	I0916 10:56:15.139392  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.139399  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139404  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139412  157008 command_runner.go:130] >     },
	I0916 10:56:15.139418  157008 command_runner.go:130] >     {
	I0916 10:56:15.139428  157008 command_runner.go:130] >       "id": "sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:56:15.139438  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139446  157008 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:56:15.139454  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139461  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139481  157008 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:56:15.139488  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139493  157008 command_runner.go:130] >       "size": "28047142",
	I0916 10:56:15.139502  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.139510  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.139521  157008 command_runner.go:130] >       },
	I0916 10:56:15.139528  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.139534  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139543  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139549  157008 command_runner.go:130] >     },
	I0916 10:56:15.139557  157008 command_runner.go:130] >     {
	I0916 10:56:15.139568  157008 command_runner.go:130] >       "id": "sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:56:15.139575  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139584  157008 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:56:15.139593  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139601  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139615  157008 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"
	I0916 10:56:15.139623  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139631  157008 command_runner.go:130] >       "size": "26221554",
	I0916 10:56:15.139639  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.139646  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.139653  157008 command_runner.go:130] >       },
	I0916 10:56:15.139657  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.139665  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139671  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139679  157008 command_runner.go:130] >     },
	I0916 10:56:15.139685  157008 command_runner.go:130] >     {
	I0916 10:56:15.139695  157008 command_runner.go:130] >       "id": "sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:56:15.139704  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139711  157008 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:56:15.139720  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139726  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139756  157008 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"
	I0916 10:56:15.139765  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139773  157008 command_runner.go:130] >       "size": "30211884",
	I0916 10:56:15.139782  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.139789  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.139799  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139806  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139813  157008 command_runner.go:130] >     },
	I0916 10:56:15.139818  157008 command_runner.go:130] >     {
	I0916 10:56:15.139825  157008 command_runner.go:130] >       "id": "sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:56:15.139831  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139842  157008 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:56:15.139849  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139858  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139870  157008 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"
	I0916 10:56:15.139878  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139887  157008 command_runner.go:130] >       "size": "20177215",
	I0916 10:56:15.139896  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.139902  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.139908  157008 command_runner.go:130] >       },
	I0916 10:56:15.139912  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.139918  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139927  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139933  157008 command_runner.go:130] >     },
	I0916 10:56:15.139941  157008 command_runner.go:130] >     {
	I0916 10:56:15.139951  157008 command_runner.go:130] >       "id": "sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:56:15.139960  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139967  157008 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:56:15.139975  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139982  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139992  157008 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:56:15.139999  157008 command_runner.go:130] >       ],
	I0916 10:56:15.140006  157008 command_runner.go:130] >       "size": "320368",
	I0916 10:56:15.140015  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.140022  157008 command_runner.go:130] >         "value": "65535"
	I0916 10:56:15.140030  157008 command_runner.go:130] >       },
	I0916 10:56:15.140036  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.140046  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.140054  157008 command_runner.go:130] >       "pinned": true
	I0916 10:56:15.140062  157008 command_runner.go:130] >     }
	I0916 10:56:15.140068  157008 command_runner.go:130] >   ]
	I0916 10:56:15.140075  157008 command_runner.go:130] > }
	I0916 10:56:15.141136  157008 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:56:15.141152  157008 containerd.go:534] Images already preloaded, skipping extraction
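
The preload check above parses `sudo crictl images --output json` and concludes every required image is already present, so extraction is skipped. A hedged sketch of such a check against the JSON shape shown in the log; the expected-image list in main is illustrative:

    // Decode crictl's image list and verify every wanted repo tag appears.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func allPreloaded(raw []byte, want []string) (bool, error) {
        var list imageList
        if err := json.Unmarshal(raw, &list); err != nil {
            return false, err
        }
        have := map[string]bool{}
        for _, img := range list.Images {
            for _, t := range img.RepoTags {
                have[t] = true
            }
        }
        for _, w := range want {
            if !have[w] {
                return false, nil
            }
        }
        return true, nil
    }

    func main() {
        raw := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.10"]}]}`)
        ok, _ := allPreloaded(raw, []string{"registry.k8s.io/pause:3.10"})
        fmt.Println(ok)
    }
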
	I0916 10:56:15.141194  157008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:56:15.173374  157008 command_runner.go:130] > {
	I0916 10:56:15.173399  157008 command_runner.go:130] >   "images": [
	I0916 10:56:15.173404  157008 command_runner.go:130] >     {
	I0916 10:56:15.173412  157008 command_runner.go:130] >       "id": "sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:56:15.173417  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.173422  157008 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:56:15.173425  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173430  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.173442  157008 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:56:15.173447  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173454  157008 command_runner.go:130] >       "size": "36793393",
	I0916 10:56:15.173459  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.173465  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.173473  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.173479  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.173485  157008 command_runner.go:130] >     },
	I0916 10:56:15.173490  157008 command_runner.go:130] >     {
	I0916 10:56:15.173501  157008 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:56:15.173507  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.173514  157008 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:56:15.173517  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173522  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.173529  157008 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0916 10:56:15.173533  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173539  157008 command_runner.go:130] >       "size": "9058936",
	I0916 10:56:15.173545  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.173556  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.173563  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.173573  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.173578  157008 command_runner.go:130] >     },
	I0916 10:56:15.173602  157008 command_runner.go:130] >     {
	I0916 10:56:15.173617  157008 command_runner.go:130] >       "id": "sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:56:15.173623  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.173631  157008 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:56:15.173639  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173649  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.173663  157008 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"
	I0916 10:56:15.173672  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173682  157008 command_runner.go:130] >       "size": "18562039",
	I0916 10:56:15.173692  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.173701  157008 command_runner.go:130] >       "username": "nonroot",
	I0916 10:56:15.173710  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.173716  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.173720  157008 command_runner.go:130] >     },
	I0916 10:56:15.173729  157008 command_runner.go:130] >     {
	I0916 10:56:15.173742  157008 command_runner.go:130] >       "id": "sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:56:15.173752  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.173763  157008 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:56:15.173771  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173779  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.173793  157008 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:56:15.173801  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173808  157008 command_runner.go:130] >       "size": "56909194",
	I0916 10:56:15.173812  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.173821  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.173829  157008 command_runner.go:130] >       },
	I0916 10:56:15.173839  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.173848  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.173857  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.173866  157008 command_runner.go:130] >     },
	I0916 10:56:15.173874  157008 command_runner.go:130] >     {
	I0916 10:56:15.173889  157008 command_runner.go:130] >       "id": "sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:56:15.173896  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.173904  157008 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:56:15.173913  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173923  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.173941  157008 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:56:15.173950  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173960  157008 command_runner.go:130] >       "size": "28047142",
	I0916 10:56:15.173969  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.173978  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.173986  157008 command_runner.go:130] >       },
	I0916 10:56:15.173994  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.173998  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.174007  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.174015  157008 command_runner.go:130] >     },
	I0916 10:56:15.174021  157008 command_runner.go:130] >     {
	I0916 10:56:15.174034  157008 command_runner.go:130] >       "id": "sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:56:15.174043  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.174054  157008 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:56:15.174062  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174072  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.174085  157008 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"
	I0916 10:56:15.174091  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174097  157008 command_runner.go:130] >       "size": "26221554",
	I0916 10:56:15.174106  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.174115  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.174121  157008 command_runner.go:130] >       },
	I0916 10:56:15.174131  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.174140  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.174149  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.174157  157008 command_runner.go:130] >     },
	I0916 10:56:15.174165  157008 command_runner.go:130] >     {
	I0916 10:56:15.174176  157008 command_runner.go:130] >       "id": "sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:56:15.174184  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.174189  157008 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:56:15.174200  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174210  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.174224  157008 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"
	I0916 10:56:15.174233  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174242  157008 command_runner.go:130] >       "size": "30211884",
	I0916 10:56:15.174251  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.174261  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.174269  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.174274  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.174278  157008 command_runner.go:130] >     },
	I0916 10:56:15.174286  157008 command_runner.go:130] >     {
	I0916 10:56:15.174299  157008 command_runner.go:130] >       "id": "sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:56:15.174306  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.174317  157008 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:56:15.174325  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174335  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.174353  157008 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"
	I0916 10:56:15.174361  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174368  157008 command_runner.go:130] >       "size": "20177215",
	I0916 10:56:15.174372  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.174381  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.174389  157008 command_runner.go:130] >       },
	I0916 10:56:15.174399  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.174408  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.174417  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.174426  157008 command_runner.go:130] >     },
	I0916 10:56:15.174434  157008 command_runner.go:130] >     {
	I0916 10:56:15.174447  157008 command_runner.go:130] >       "id": "sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:56:15.174454  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.174459  157008 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:56:15.174466  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174476  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.174490  157008 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:56:15.174500  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174509  157008 command_runner.go:130] >       "size": "320368",
	I0916 10:56:15.174518  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.174527  157008 command_runner.go:130] >         "value": "65535"
	I0916 10:56:15.174535  157008 command_runner.go:130] >       },
	I0916 10:56:15.174542  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.174547  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.174555  157008 command_runner.go:130] >       "pinned": true
	I0916 10:56:15.174563  157008 command_runner.go:130] >     }
	I0916 10:56:15.174569  157008 command_runner.go:130] >   ]
	I0916 10:56:15.174578  157008 command_runner.go:130] > }
	I0916 10:56:15.174716  157008 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:56:15.174727  157008 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:56:15.174735  157008 kubeadm.go:934] updating node { 192.168.67.2 8443 v1.31.1 containerd true true} ...
	I0916 10:56:15.174844  157008 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-079070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
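
The kubelet unit text above is generated from the node's config (Kubernetes version, hostname override, node IP). A small text/template sketch that reproduces it; the template wrapper itself is an assumption, with flag values taken from the log:

    // Render the kubelet systemd drop-in from a node's version, name, and IP.
    package main

    import (
        "os"
        "text/template"
    )

    const kubeletUnit = `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(kubeletUnit))
        t.Execute(os.Stdout, struct{ Version, Node, IP string }{
            "v1.31.1", "multinode-079070", "192.168.67.2",
        })
    }
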
	I0916 10:56:15.174914  157008 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:56:15.205300  157008 command_runner.go:130] > {
	I0916 10:56:15.205321  157008 command_runner.go:130] >   "status": {
	I0916 10:56:15.205327  157008 command_runner.go:130] >     "conditions": [
	I0916 10:56:15.205330  157008 command_runner.go:130] >       {
	I0916 10:56:15.205336  157008 command_runner.go:130] >         "type": "RuntimeReady",
	I0916 10:56:15.205346  157008 command_runner.go:130] >         "status": true,
	I0916 10:56:15.205350  157008 command_runner.go:130] >         "reason": "",
	I0916 10:56:15.205356  157008 command_runner.go:130] >         "message": ""
	I0916 10:56:15.205365  157008 command_runner.go:130] >       },
	I0916 10:56:15.205371  157008 command_runner.go:130] >       {
	I0916 10:56:15.205377  157008 command_runner.go:130] >         "type": "NetworkReady",
	I0916 10:56:15.205385  157008 command_runner.go:130] >         "status": true,
	I0916 10:56:15.205394  157008 command_runner.go:130] >         "reason": "",
	I0916 10:56:15.205403  157008 command_runner.go:130] >         "message": ""
	I0916 10:56:15.205407  157008 command_runner.go:130] >       },
	I0916 10:56:15.205410  157008 command_runner.go:130] >       {
	I0916 10:56:15.205418  157008 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings",
	I0916 10:56:15.205422  157008 command_runner.go:130] >         "status": true,
	I0916 10:56:15.205428  157008 command_runner.go:130] >         "reason": "",
	I0916 10:56:15.205432  157008 command_runner.go:130] >         "message": ""
	I0916 10:56:15.205438  157008 command_runner.go:130] >       }
	I0916 10:56:15.205441  157008 command_runner.go:130] >     ]
	I0916 10:56:15.205445  157008 command_runner.go:130] >   },
	I0916 10:56:15.205451  157008 command_runner.go:130] >   "cniconfig": {
	I0916 10:56:15.205460  157008 command_runner.go:130] >     "PluginDirs": [
	I0916 10:56:15.205473  157008 command_runner.go:130] >       "/opt/cni/bin"
	I0916 10:56:15.205489  157008 command_runner.go:130] >     ],
	I0916 10:56:15.205502  157008 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I0916 10:56:15.205510  157008 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I0916 10:56:15.205514  157008 command_runner.go:130] >     "Prefix": "eth",
	I0916 10:56:15.205521  157008 command_runner.go:130] >     "Networks": [
	I0916 10:56:15.205524  157008 command_runner.go:130] >       {
	I0916 10:56:15.205529  157008 command_runner.go:130] >         "Config": {
	I0916 10:56:15.205533  157008 command_runner.go:130] >           "Name": "cni-loopback",
	I0916 10:56:15.205540  157008 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0916 10:56:15.205544  157008 command_runner.go:130] >           "Plugins": [
	I0916 10:56:15.205554  157008 command_runner.go:130] >             {
	I0916 10:56:15.205565  157008 command_runner.go:130] >               "Network": {
	I0916 10:56:15.205572  157008 command_runner.go:130] >                 "type": "loopback",
	I0916 10:56:15.205583  157008 command_runner.go:130] >                 "ipam": {},
	I0916 10:56:15.205593  157008 command_runner.go:130] >                 "dns": {}
	I0916 10:56:15.205602  157008 command_runner.go:130] >               },
	I0916 10:56:15.205613  157008 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I0916 10:56:15.205621  157008 command_runner.go:130] >             }
	I0916 10:56:15.205628  157008 command_runner.go:130] >           ],
	I0916 10:56:15.205640  157008 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0916 10:56:15.205649  157008 command_runner.go:130] >         },
	I0916 10:56:15.205659  157008 command_runner.go:130] >         "IFName": "lo"
	I0916 10:56:15.205667  157008 command_runner.go:130] >       },
	I0916 10:56:15.205676  157008 command_runner.go:130] >       {
	I0916 10:56:15.205682  157008 command_runner.go:130] >         "Config": {
	I0916 10:56:15.205691  157008 command_runner.go:130] >           "Name": "loopback",
	I0916 10:56:15.205702  157008 command_runner.go:130] >           "CNIVersion": "1.0.0",
	I0916 10:56:15.205710  157008 command_runner.go:130] >           "Plugins": [
	I0916 10:56:15.205718  157008 command_runner.go:130] >             {
	I0916 10:56:15.205723  157008 command_runner.go:130] >               "Network": {
	I0916 10:56:15.205731  157008 command_runner.go:130] >                 "cniVersion": "1.0.0",
	I0916 10:56:15.205740  157008 command_runner.go:130] >                 "name": "loopback",
	I0916 10:56:15.205751  157008 command_runner.go:130] >                 "type": "loopback",
	I0916 10:56:15.205768  157008 command_runner.go:130] >                 "ipam": {},
	I0916 10:56:15.205783  157008 command_runner.go:130] >                 "dns": {}
	I0916 10:56:15.205793  157008 command_runner.go:130] >               },
	I0916 10:56:15.205807  157008 command_runner.go:130] >               "Source": "{\"cniVersion\":\"1.0.0\",\"name\":\"loopback\",\"type\":\"loopback\"}"
	I0916 10:56:15.205815  157008 command_runner.go:130] >             }
	I0916 10:56:15.205821  157008 command_runner.go:130] >           ],
	I0916 10:56:15.205841  157008 command_runner.go:130] >           "Source": "{\"cniVersion\":\"1.0.0\",\"name\":\"loopback\",\"plugins\":[{\"cniVersion\":\"1.0.0\",\"name\":\"loopback\",\"type\":\"loopback\"}]}"
	I0916 10:56:15.205851  157008 command_runner.go:130] >         },
	I0916 10:56:15.205858  157008 command_runner.go:130] >         "IFName": "eth0"
	I0916 10:56:15.205866  157008 command_runner.go:130] >       }
	I0916 10:56:15.205872  157008 command_runner.go:130] >     ]
	I0916 10:56:15.205879  157008 command_runner.go:130] >   },
	I0916 10:56:15.205888  157008 command_runner.go:130] >   "config": {
	I0916 10:56:15.205897  157008 command_runner.go:130] >     "containerd": {
	I0916 10:56:15.205907  157008 command_runner.go:130] >       "snapshotter": "overlayfs",
	I0916 10:56:15.205917  157008 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I0916 10:56:15.205924  157008 command_runner.go:130] >       "defaultRuntime": {
	I0916 10:56:15.205929  157008 command_runner.go:130] >         "runtimeType": "",
	I0916 10:56:15.205940  157008 command_runner.go:130] >         "runtimePath": "",
	I0916 10:56:15.205950  157008 command_runner.go:130] >         "runtimeEngine": "",
	I0916 10:56:15.205958  157008 command_runner.go:130] >         "PodAnnotations": null,
	I0916 10:56:15.205976  157008 command_runner.go:130] >         "ContainerAnnotations": null,
	I0916 10:56:15.205986  157008 command_runner.go:130] >         "runtimeRoot": "",
	I0916 10:56:15.205995  157008 command_runner.go:130] >         "options": null,
	I0916 10:56:15.206007  157008 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0916 10:56:15.206018  157008 command_runner.go:130] >         "privileged_without_host_devices_all_devices_allowed": false,
	I0916 10:56:15.206025  157008 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0916 10:56:15.206031  157008 command_runner.go:130] >         "cniConfDir": "",
	I0916 10:56:15.206041  157008 command_runner.go:130] >         "cniMaxConfNum": 0,
	I0916 10:56:15.206050  157008 command_runner.go:130] >         "snapshotter": "",
	I0916 10:56:15.206058  157008 command_runner.go:130] >         "sandboxMode": ""
	I0916 10:56:15.206066  157008 command_runner.go:130] >       },
	I0916 10:56:15.206076  157008 command_runner.go:130] >       "untrustedWorkloadRuntime": {
	I0916 10:56:15.206101  157008 command_runner.go:130] >         "runtimeType": "",
	I0916 10:56:15.206110  157008 command_runner.go:130] >         "runtimePath": "",
	I0916 10:56:15.206118  157008 command_runner.go:130] >         "runtimeEngine": "",
	I0916 10:56:15.206123  157008 command_runner.go:130] >         "PodAnnotations": null,
	I0916 10:56:15.206132  157008 command_runner.go:130] >         "ContainerAnnotations": null,
	I0916 10:56:15.206141  157008 command_runner.go:130] >         "runtimeRoot": "",
	I0916 10:56:15.206151  157008 command_runner.go:130] >         "options": null,
	I0916 10:56:15.206164  157008 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0916 10:56:15.206176  157008 command_runner.go:130] >         "privileged_without_host_devices_all_devices_allowed": false,
	I0916 10:56:15.206185  157008 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0916 10:56:15.206195  157008 command_runner.go:130] >         "cniConfDir": "",
	I0916 10:56:15.206203  157008 command_runner.go:130] >         "cniMaxConfNum": 0,
	I0916 10:56:15.206209  157008 command_runner.go:130] >         "snapshotter": "",
	I0916 10:56:15.206213  157008 command_runner.go:130] >         "sandboxMode": ""
	I0916 10:56:15.206221  157008 command_runner.go:130] >       },
	I0916 10:56:15.206235  157008 command_runner.go:130] >       "runtimes": {
	I0916 10:56:15.206245  157008 command_runner.go:130] >         "runc": {
	I0916 10:56:15.206256  157008 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0916 10:56:15.206265  157008 command_runner.go:130] >           "runtimePath": "",
	I0916 10:56:15.206275  157008 command_runner.go:130] >           "runtimeEngine": "",
	I0916 10:56:15.206284  157008 command_runner.go:130] >           "PodAnnotations": null,
	I0916 10:56:15.206291  157008 command_runner.go:130] >           "ContainerAnnotations": null,
	I0916 10:56:15.206299  157008 command_runner.go:130] >           "runtimeRoot": "",
	I0916 10:56:15.206302  157008 command_runner.go:130] >           "options": {
	I0916 10:56:15.206310  157008 command_runner.go:130] >             "SystemdCgroup": false
	I0916 10:56:15.206319  157008 command_runner.go:130] >           },
	I0916 10:56:15.206327  157008 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0916 10:56:15.206373  157008 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I0916 10:56:15.206388  157008 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0916 10:56:15.206394  157008 command_runner.go:130] >           "cniConfDir": "",
	I0916 10:56:15.206401  157008 command_runner.go:130] >           "cniMaxConfNum": 0,
	I0916 10:56:15.206408  157008 command_runner.go:130] >           "snapshotter": "",
	I0916 10:56:15.206415  157008 command_runner.go:130] >           "sandboxMode": "podsandbox"
	I0916 10:56:15.206424  157008 command_runner.go:130] >         }
	I0916 10:56:15.206429  157008 command_runner.go:130] >       },
	I0916 10:56:15.206436  157008 command_runner.go:130] >       "noPivot": false,
	I0916 10:56:15.206447  157008 command_runner.go:130] >       "disableSnapshotAnnotations": true,
	I0916 10:56:15.206454  157008 command_runner.go:130] >       "discardUnpackedLayers": true,
	I0916 10:56:15.206465  157008 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I0916 10:56:15.206473  157008 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false
	I0916 10:56:15.206481  157008 command_runner.go:130] >     },
	I0916 10:56:15.206485  157008 command_runner.go:130] >     "cni": {
	I0916 10:56:15.206494  157008 command_runner.go:130] >       "binDir": "/opt/cni/bin",
	I0916 10:56:15.206505  157008 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I0916 10:56:15.206515  157008 command_runner.go:130] >       "maxConfNum": 1,
	I0916 10:56:15.206522  157008 command_runner.go:130] >       "setupSerially": false,
	I0916 10:56:15.206531  157008 command_runner.go:130] >       "confTemplate": "",
	I0916 10:56:15.206540  157008 command_runner.go:130] >       "ipPref": ""
	I0916 10:56:15.206549  157008 command_runner.go:130] >     },
	I0916 10:56:15.206559  157008 command_runner.go:130] >     "registry": {
	I0916 10:56:15.206571  157008 command_runner.go:130] >       "configPath": "/etc/containerd/certs.d",
	I0916 10:56:15.206580  157008 command_runner.go:130] >       "mirrors": null,
	I0916 10:56:15.206587  157008 command_runner.go:130] >       "configs": null,
	I0916 10:56:15.206594  157008 command_runner.go:130] >       "auths": null,
	I0916 10:56:15.206602  157008 command_runner.go:130] >       "headers": null
	I0916 10:56:15.206612  157008 command_runner.go:130] >     },
	I0916 10:56:15.206618  157008 command_runner.go:130] >     "imageDecryption": {
	I0916 10:56:15.206637  157008 command_runner.go:130] >       "keyModel": "node"
	I0916 10:56:15.206645  157008 command_runner.go:130] >     },
	I0916 10:56:15.206653  157008 command_runner.go:130] >     "disableTCPService": true,
	I0916 10:56:15.206663  157008 command_runner.go:130] >     "streamServerAddress": "",
	I0916 10:56:15.206672  157008 command_runner.go:130] >     "streamServerPort": "10010",
	I0916 10:56:15.206680  157008 command_runner.go:130] >     "streamIdleTimeout": "4h0m0s",
	I0916 10:56:15.206687  157008 command_runner.go:130] >     "enableSelinux": false,
	I0916 10:56:15.206694  157008 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I0916 10:56:15.206710  157008 command_runner.go:130] >     "sandboxImage": "registry.k8s.io/pause:3.10",
	I0916 10:56:15.206722  157008 command_runner.go:130] >     "statsCollectPeriod": 10,
	I0916 10:56:15.206732  157008 command_runner.go:130] >     "systemdCgroup": false,
	I0916 10:56:15.206742  157008 command_runner.go:130] >     "enableTLSStreaming": false,
	I0916 10:56:15.206751  157008 command_runner.go:130] >     "x509KeyPairStreaming": {
	I0916 10:56:15.206761  157008 command_runner.go:130] >       "tlsCertFile": "",
	I0916 10:56:15.206768  157008 command_runner.go:130] >       "tlsKeyFile": ""
	I0916 10:56:15.206771  157008 command_runner.go:130] >     },
	I0916 10:56:15.206777  157008 command_runner.go:130] >     "maxContainerLogSize": 16384,
	I0916 10:56:15.206787  157008 command_runner.go:130] >     "disableCgroup": false,
	I0916 10:56:15.206797  157008 command_runner.go:130] >     "disableApparmor": false,
	I0916 10:56:15.206804  157008 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I0916 10:56:15.206814  157008 command_runner.go:130] >     "maxConcurrentDownloads": 3,
	I0916 10:56:15.206824  157008 command_runner.go:130] >     "disableProcMount": false,
	I0916 10:56:15.206834  157008 command_runner.go:130] >     "unsetSeccompProfile": "",
	I0916 10:56:15.206844  157008 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I0916 10:56:15.206854  157008 command_runner.go:130] >     "disableHugetlbController": true,
	I0916 10:56:15.206864  157008 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I0916 10:56:15.206871  157008 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I0916 10:56:15.206877  157008 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I0916 10:56:15.206888  157008 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I0916 10:56:15.206898  157008 command_runner.go:130] >     "enableUnprivilegedICMP": false,
	I0916 10:56:15.206905  157008 command_runner.go:130] >     "enableCDI": false,
	I0916 10:56:15.206915  157008 command_runner.go:130] >     "cdiSpecDirs": [
	I0916 10:56:15.206924  157008 command_runner.go:130] >       "/etc/cdi",
	I0916 10:56:15.206933  157008 command_runner.go:130] >       "/var/run/cdi"
	I0916 10:56:15.206941  157008 command_runner.go:130] >     ],
	I0916 10:56:15.206952  157008 command_runner.go:130] >     "imagePullProgressTimeout": "5m0s",
	I0916 10:56:15.206961  157008 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I0916 10:56:15.206969  157008 command_runner.go:130] >     "imagePullWithSyncFs": false,
	I0916 10:56:15.206975  157008 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I0916 10:56:15.206985  157008 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I0916 10:56:15.206997  157008 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0916 10:56:15.207007  157008 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0916 10:56:15.207019  157008 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0916 10:56:15.207027  157008 command_runner.go:130] >   },
	I0916 10:56:15.207034  157008 command_runner.go:130] >   "golang": "go1.22.7",
	I0916 10:56:15.207044  157008 command_runner.go:130] >   "lastCNILoadStatus": "OK",
	I0916 10:56:15.207055  157008 command_runner.go:130] >   "lastCNILoadStatus.default": "OK"
	I0916 10:56:15.207062  157008 command_runner.go:130] > }
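The JSON block above is the CRI runtime status and configuration that minikube reads back from containerd; it is the same document `crictl info` prints. As an illustration only (the field names are taken from the dump above; the struct and program around them are assumptions, not minikube code), a minimal Go sketch that decodes a couple of the settings checked during provisioning:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // criInfo models only the fields this sketch cares about from
    // `crictl info`; the full document is the dump logged above.
    type criInfo struct {
        Config struct {
            SandboxImage  string `json:"sandboxImage"`
            SystemdCgroup bool   `json:"systemdCgroup"`
        } `json:"config"`
    }

    func main() {
        // `crictl info` prints the CRI status/config as JSON.
        out, err := exec.Command("crictl", "info").Output()
        if err != nil {
            panic(err)
        }
        var info criInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Println("sandbox image:", info.Config.SandboxImage)
        fmt.Println("systemd cgroup:", info.Config.SystemdCgroup)
    }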
	I0916 10:56:15.207858  157008 cni.go:84] Creating CNI manager for ""
	I0916 10:56:15.207876  157008 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:56:15.207885  157008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:56:15.207904  157008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-079070 NodeName:multinode-079070 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:56:15.208019  157008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "multinode-079070"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
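The multi-document config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is rendered from the kubeadm options struct logged at kubeadm.go:181 and then copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal sketch of that render-then-write idea using text/template; the template and the opts struct here are illustrative stand-ins, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // opts carries just the values substituted into this sketch's
    // template; the real options struct (kubeadm.go:181 above) is
    // far larger.
    type opts struct {
        AdvertiseAddress string
        BindPort         int
        NodeName         string
        PodSubnet        string
    }

    // Illustrative two-document template, not minikube's own.
    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        // Render to stdout; minikube instead scp's the rendered
        // bytes to /var/tmp/minikube/kubeadm.yaml.new.
        t.Execute(os.Stdout, opts{
            AdvertiseAddress: "192.168.67.2",
            BindPort:         8443,
            NodeName:         "multinode-079070",
            PodSubnet:        "10.244.0.0/16",
        })
    }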
	I0916 10:56:15.208068  157008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:56:15.215694  157008 command_runner.go:130] > kubeadm
	I0916 10:56:15.215718  157008 command_runner.go:130] > kubectl
	I0916 10:56:15.215725  157008 command_runner.go:130] > kubelet
	I0916 10:56:15.216368  157008 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:56:15.216431  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:56:15.224456  157008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0916 10:56:15.241056  157008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:56:15.257712  157008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2170 bytes)
	I0916 10:56:15.275465  157008 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:56:15.279078  157008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:56:15.290481  157008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:15.369272  157008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:56:15.381993  157008 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070 for IP: 192.168.67.2
	I0916 10:56:15.382015  157008 certs.go:194] generating shared ca certs ...
	I0916 10:56:15.382033  157008 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:15.382191  157008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:56:15.382253  157008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:56:15.382266  157008 certs.go:256] generating profile certs ...
	I0916 10:56:15.382344  157008 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key
	I0916 10:56:15.382363  157008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt with IP's: []
	I0916 10:56:15.890361  157008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt ...
	I0916 10:56:15.890397  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt: {Name:mke77f19dd9f1aa14d60b0b2a0a9ccea8a327db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:15.890605  157008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key ...
	I0916 10:56:15.890622  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key: {Name:mkb98bd48c6b5f4f7b008ccbf89314aa876a0d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:15.890727  157008 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key.5aac267e
	I0916 10:56:15.890743  157008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt.5aac267e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.67.2]
	I0916 10:56:16.123421  157008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt.5aac267e ...
	I0916 10:56:16.123454  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt.5aac267e: {Name:mk080ec82addec1a87e312f5523e395a1817fa15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:16.123654  157008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key.5aac267e ...
	I0916 10:56:16.123672  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key.5aac267e: {Name:mkc108633e0515ffb371d90ff0bbaa0a5c33d482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:16.123793  157008 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt.5aac267e -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt
	I0916 10:56:16.123877  157008 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key.5aac267e -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key
	I0916 10:56:16.123982  157008 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key
	I0916 10:56:16.124001  157008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.crt with IP's: []
	I0916 10:56:16.327344  157008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.crt ...
	I0916 10:56:16.327374  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.crt: {Name:mkc1cfc8a9cd4f01cded61bcfa2e37fb4a0e6ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:16.327537  157008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key ...
	I0916 10:56:16.327550  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key: {Name:mk0df2d3d9d6721ce4f6b0e843e07f616c6a4e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:16.327620  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:56:16.327638  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:56:16.327649  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:56:16.327665  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:56:16.327678  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:56:16.327690  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:56:16.327704  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:56:16.327718  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:56:16.327793  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:56:16.327837  157008 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:56:16.327847  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:56:16.327868  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:56:16.327892  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:56:16.327912  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:56:16.327949  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:56:16.327974  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:56:16.328002  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:56:16.328019  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:16.328586  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:56:16.350654  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:56:16.372438  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:56:16.394606  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:56:16.417148  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:56:16.438859  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:56:16.460596  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:56:16.483188  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:56:16.505562  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:56:16.527152  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:56:16.549292  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:56:16.571712  157008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:56:16.589014  157008 ssh_runner.go:195] Run: openssl version
	I0916 10:56:16.593976  157008 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:56:16.594042  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:56:16.602722  157008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:56:16.606139  157008 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:56:16.606168  157008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:56:16.606201  157008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:56:16.612378  157008 command_runner.go:130] > 3ec20f2e
	I0916 10:56:16.612615  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:56:16.621138  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:56:16.629549  157008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:16.632756  157008 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:16.632814  157008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:16.632860  157008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:16.638696  157008 command_runner.go:130] > b5213941
	I0916 10:56:16.638950  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:56:16.647440  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:56:16.655969  157008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:56:16.659028  157008 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:56:16.659046  157008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:56:16.659083  157008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:56:16.665289  157008 command_runner.go:130] > 51391683
	I0916 10:56:16.665377  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
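The three openssl/ln exchanges above install each CA into OpenSSL's hash directory: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and the file is then symlinked as /etc/ssl/certs/<hash>.0 so TLS clients can locate it by hash lookup. A minimal sketch of the same convention, shelling out to openssl exactly as the log does (the sample path is one of the certs above; error handling is kept deliberately thin):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkBySubjectHash mirrors the log above: compute the OpenSSL
    // subject hash of certPath, then symlink it as <hash>.0 under
    // /etc/ssl/certs.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        os.Remove(link) // replace a stale link, like `ln -fs`
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }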
	I0916 10:56:16.674236  157008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:56:16.677262  157008 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:56:16.677331  157008 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:56:16.677369  157008 kubeadm.go:392] StartCluster: {Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:56:16.677439  157008 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:56:16.677479  157008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:56:16.711086  157008 cri.go:89] found id: ""
	I0916 10:56:16.711145  157008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:56:16.718546  157008 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0916 10:56:16.718576  157008 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0916 10:56:16.718586  157008 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0916 10:56:16.719225  157008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:56:16.726999  157008 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:56:16.727062  157008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:56:16.735027  157008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0916 10:56:16.735052  157008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0916 10:56:16.735059  157008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0916 10:56:16.735068  157008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:56:16.735105  157008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:56:16.735117  157008 kubeadm.go:157] found existing configuration files:
	
	I0916 10:56:16.735155  157008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:56:16.742888  157008 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:56:16.742944  157008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:56:16.742983  157008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:56:16.750593  157008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:56:16.758166  157008 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:56:16.758208  157008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:56:16.758254  157008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:56:16.765870  157008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:56:16.773621  157008 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:56:16.773665  157008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:56:16.773704  157008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:56:16.781238  157008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:56:16.788983  157008 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:56:16.789036  157008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:56:16.789085  157008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
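The grep/rm pairs above implement the stale-config check: each kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443; anything else is removed so `kubeadm init` regenerates it. A compact sketch of that loop, run locally rather than over SSH (illustrative only, not minikube's implementation):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            // Missing file or wrong endpoint: remove it so kubeadm
            // rewrites it on the next init, as in the log above.
            if err != nil || !bytes.Contains(data, []byte(endpoint)) {
                os.Remove(conf)
                fmt.Println("removed stale", conf)
            }
        }
    }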
	I0916 10:56:16.796590  157008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:56:16.831101  157008 kubeadm.go:310] W0916 10:56:16.830474    1135 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:56:16.831136  157008 command_runner.go:130] ! W0916 10:56:16.830474    1135 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:56:16.831532  157008 kubeadm.go:310] W0916 10:56:16.831041    1135 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:56:16.831558  157008 command_runner.go:130] ! W0916 10:56:16.831041    1135 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:56:16.848622  157008 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:56:16.848665  157008 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:56:16.900277  157008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:56:16.900308  157008 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:56:26.545599  157008 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:56:26.545632  157008 command_runner.go:130] > [init] Using Kubernetes version: v1.31.1
	I0916 10:56:26.545678  157008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:56:26.545707  157008 command_runner.go:130] > [preflight] Running pre-flight checks
	I0916 10:56:26.545831  157008 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:56:26.545841  157008 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:56:26.545886  157008 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:56:26.545894  157008 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:56:26.545922  157008 kubeadm.go:310] OS: Linux
	I0916 10:56:26.545928  157008 command_runner.go:130] > OS: Linux
	I0916 10:56:26.545979  157008 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:56:26.545989  157008 command_runner.go:130] > CGROUPS_CPU: enabled
	I0916 10:56:26.546046  157008 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:56:26.546057  157008 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0916 10:56:26.546121  157008 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:56:26.546132  157008 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0916 10:56:26.546226  157008 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:56:26.546246  157008 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0916 10:56:26.546317  157008 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:56:26.546329  157008 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0916 10:56:26.546409  157008 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:56:26.546426  157008 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0916 10:56:26.546489  157008 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:56:26.546498  157008 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0916 10:56:26.546584  157008 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:56:26.546597  157008 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0916 10:56:26.546662  157008 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:56:26.546669  157008 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0916 10:56:26.546734  157008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:56:26.546741  157008 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:56:26.546818  157008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:56:26.546825  157008 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:56:26.546941  157008 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:56:26.546954  157008 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:56:26.547015  157008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:56:26.547099  157008 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:56:26.548928  157008 out.go:235]   - Generating certificates and keys ...
	I0916 10:56:26.549008  157008 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0916 10:56:26.549017  157008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:56:26.549086  157008 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0916 10:56:26.549094  157008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:56:26.549175  157008 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:56:26.549183  157008 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:56:26.549258  157008 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:56:26.549267  157008 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:56:26.549352  157008 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0916 10:56:26.549359  157008 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:56:26.549404  157008 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0916 10:56:26.549410  157008 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:56:26.549453  157008 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0916 10:56:26.549459  157008 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:56:26.549571  157008 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-079070] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:56:26.549581  157008 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-079070] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:56:26.549642  157008 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0916 10:56:26.549649  157008 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:56:26.549807  157008 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-079070] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:56:26.549826  157008 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-079070] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:56:26.549911  157008 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:56:26.549918  157008 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:56:26.549970  157008 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:56:26.549975  157008 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:56:26.550017  157008 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0916 10:56:26.550023  157008 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:56:26.550068  157008 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:56:26.550074  157008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:56:26.550119  157008 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:56:26.550122  157008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:56:26.550168  157008 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:56:26.550171  157008 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:56:26.550215  157008 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:56:26.550221  157008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:56:26.550311  157008 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:56:26.550320  157008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:56:26.550364  157008 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:56:26.550374  157008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:56:26.550479  157008 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:56:26.550485  157008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:56:26.550558  157008 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:56:26.550567  157008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:56:26.552267  157008 out.go:235]   - Booting up control plane ...
	I0916 10:56:26.552374  157008 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:56:26.552390  157008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:56:26.552473  157008 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:56:26.552480  157008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:56:26.552534  157008 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:56:26.552541  157008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:56:26.552641  157008 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:56:26.552658  157008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:56:26.552750  157008 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:56:26.552766  157008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:56:26.552809  157008 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0916 10:56:26.552816  157008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:56:26.552963  157008 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:56:26.552970  157008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:56:26.553096  157008 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:56:26.553104  157008 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:56:26.553153  157008 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.715009ms
	I0916 10:56:26.553159  157008 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.715009ms
	I0916 10:56:26.553219  157008 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:56:26.553225  157008 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:56:26.553271  157008 command_runner.go:130] > [api-check] The API server is healthy after 4.50164473s
	I0916 10:56:26.553277  157008 kubeadm.go:310] [api-check] The API server is healthy after 4.50164473s
	I0916 10:56:26.553371  157008 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:56:26.553377  157008 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:56:26.553499  157008 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:56:26.553506  157008 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:56:26.553569  157008 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:56:26.553578  157008 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:56:26.553761  157008 command_runner.go:130] > [mark-control-plane] Marking the node multinode-079070 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:56:26.553771  157008 kubeadm.go:310] [mark-control-plane] Marking the node multinode-079070 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:56:26.553845  157008 command_runner.go:130] > [bootstrap-token] Using token: rkgcgy.5qjb792nhey505s7
	I0916 10:56:26.553856  157008 kubeadm.go:310] [bootstrap-token] Using token: rkgcgy.5qjb792nhey505s7
	I0916 10:56:26.555547  157008 out.go:235]   - Configuring RBAC rules ...
	I0916 10:56:26.555682  157008 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:56:26.555693  157008 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:56:26.555826  157008 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:56:26.555837  157008 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:56:26.555970  157008 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:56:26.555980  157008 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:56:26.556143  157008 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:56:26.556155  157008 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:56:26.556249  157008 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:56:26.556256  157008 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:56:26.556349  157008 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:56:26.556364  157008 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:56:26.556457  157008 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:56:26.556463  157008 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:56:26.556500  157008 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0916 10:56:26.556505  157008 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:56:26.556543  157008 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0916 10:56:26.556549  157008 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:56:26.556553  157008 kubeadm.go:310] 
	I0916 10:56:26.556644  157008 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0916 10:56:26.556654  157008 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:56:26.556658  157008 kubeadm.go:310] 
	I0916 10:56:26.556727  157008 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0916 10:56:26.556732  157008 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:56:26.556740  157008 kubeadm.go:310] 
	I0916 10:56:26.556766  157008 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0916 10:56:26.556772  157008 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:56:26.556831  157008 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:56:26.556840  157008 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:56:26.556882  157008 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:56:26.556887  157008 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:56:26.556892  157008 kubeadm.go:310] 
	I0916 10:56:26.556938  157008 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0916 10:56:26.556947  157008 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:56:26.556951  157008 kubeadm.go:310] 
	I0916 10:56:26.557000  157008 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:56:26.557010  157008 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:56:26.557014  157008 kubeadm.go:310] 
	I0916 10:56:26.557073  157008 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0916 10:56:26.557080  157008 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:56:26.557141  157008 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:56:26.557149  157008 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:56:26.557255  157008 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:56:26.557267  157008 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:56:26.557273  157008 kubeadm.go:310] 
	I0916 10:56:26.557382  157008 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:56:26.557389  157008 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:56:26.557458  157008 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0916 10:56:26.557466  157008 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:56:26.557470  157008 kubeadm.go:310] 
	I0916 10:56:26.557541  157008 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token rkgcgy.5qjb792nhey505s7 \
	I0916 10:56:26.557547  157008 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rkgcgy.5qjb792nhey505s7 \
	I0916 10:56:26.557653  157008 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 10:56:26.557667  157008 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 10:56:26.557707  157008 command_runner.go:130] > 	--control-plane 
	I0916 10:56:26.557716  157008 kubeadm.go:310] 	--control-plane 
	I0916 10:56:26.557726  157008 kubeadm.go:310] 
	I0916 10:56:26.557846  157008 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:56:26.557855  157008 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:56:26.557862  157008 kubeadm.go:310] 
	I0916 10:56:26.557990  157008 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token rkgcgy.5qjb792nhey505s7 \
	I0916 10:56:26.557998  157008 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rkgcgy.5qjb792nhey505s7 \
	I0916 10:56:26.558130  157008 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 10:56:26.558159  157008 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
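The --discovery-token-ca-cert-hash in the join commands above is kubeadm's CA pin: a SHA-256 digest over the DER-encoded Subject Public Key Info of the cluster CA certificate. A short sketch that recomputes it from the ca.crt uploaded earlier in this log:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // The cluster CA as copied to the node earlier in this log.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm pins sha256 over the CA's SubjectPublicKeyInfo (DER).
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }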
	I0916 10:56:26.558167  157008 cni.go:84] Creating CNI manager for ""
	I0916 10:56:26.558177  157008 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:56:26.560007  157008 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:56:26.561514  157008 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:56:26.565197  157008 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0916 10:56:26.565217  157008 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0916 10:56:26.565223  157008 command_runner.go:130] > Device: 35h/53d	Inode: 538361      Links: 1
	I0916 10:56:26.565230  157008 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:56:26.565236  157008 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0916 10:56:26.565241  157008 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0916 10:56:26.565246  157008 command_runner.go:130] > Change: 2024-09-16 10:23:17.639492271 +0000
	I0916 10:56:26.565252  157008 command_runner.go:130] >  Birth: 2024-09-16 10:23:17.615490154 +0000
	I0916 10:56:26.565328  157008 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:56:26.565341  157008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:56:26.581928  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:56:26.749921  157008 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0916 10:56:26.755312  157008 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0916 10:56:26.761273  157008 command_runner.go:130] > serviceaccount/kindnet created
	I0916 10:56:26.770548  157008 command_runner.go:130] > daemonset.apps/kindnet created
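The CNI step above has three parts: stat a known plugin binary (/opt/cni/bin/portmap) as a proxy for "CNI plugins are installed", copy the generated kindnet manifest to /var/tmp/minikube/cni.yaml, and apply it with the pinned kubectl. A rough Go equivalent of that sequence as run on the node (a sketch under those assumptions, not minikube's implementation):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // One plugin binary present implies the CNI plugin set is installed.
        if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
            log.Fatalf("CNI plugins missing: %v", err)
        }
        // Apply the kindnet manifest with the kubectl binary minikube provisioned.
        out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
            "apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
        if err != nil {
            log.Fatalf("kubectl apply failed: %v\n%s", err, out)
        }
        log.Printf("%s", out)
    }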
	I0916 10:56:26.773772  157008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:56:26.773832  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:26.773844  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-079070 minikube.k8s.io/updated_at=2024_09_16T10_56_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=multinode-079070 minikube.k8s.io/primary=true
	I0916 10:56:26.781089  157008 command_runner.go:130] > -16
	I0916 10:56:26.781169  157008 ops.go:34] apiserver oom_adj: -16
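The oom_adj check above records that the kubelet gave the apiserver a strongly negative OOM score adjustment (-16), deprioritizing it for the kernel OOM killer. The same read as a Go sketch, mirroring the pgrep-based shell pipeline in the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pidOut, _ := exec.Command("pgrep", "kube-apiserver").Output()
        pid := strings.TrimSpace(string(pidOut))
        adj, _ := os.ReadFile("/proc/" + pid + "/oom_adj")
        fmt.Printf("apiserver oom_adj: %s", adj)
    }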
	I0916 10:56:26.851211  157008 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0916 10:56:26.855705  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:26.863415  157008 command_runner.go:130] > node/multinode-079070 labeled
	I0916 10:56:26.932966  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:27.356650  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:27.418775  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:27.856414  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:27.921088  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:28.356742  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:28.421191  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:28.856593  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:28.920587  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:29.355888  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:29.418112  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:29.856494  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:29.921216  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:30.355824  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:30.420433  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:30.855935  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:30.917980  157008 command_runner.go:130] > NAME      SECRETS   AGE
	I0916 10:56:30.918001  157008 command_runner.go:130] > default   0         0s
	I0916 10:56:30.920595  157008 kubeadm.go:1113] duration metric: took 4.146831321s to wait for elevateKubeSystemPrivileges
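The retry loop above (one `kubectl get sa default` roughly every 500ms) waits for the kube-controller-manager to create the "default" ServiceAccount, which only appears shortly after the control plane comes up; the NotFound errors are expected until then. A client-go sketch of the same wait, assuming an already-configured clientset cs (the function name is hypothetical):

    package sketch

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForDefaultSA polls every 500ms, up to 2 minutes, until the
    // controller-manager has created the "default" ServiceAccount.
    func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return false, nil // not created yet; keep polling
                }
                return err == nil, err
            })
    }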
	I0916 10:56:30.920623  157008 kubeadm.go:394] duration metric: took 14.243257616s to StartCluster
	I0916 10:56:30.920648  157008 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:30.920708  157008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:56:30.921341  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:30.921560  157008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:56:30.921569  157008 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:56:30.921632  157008 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:56:30.921715  157008 addons.go:69] Setting storage-provisioner=true in profile "multinode-079070"
	I0916 10:56:30.921732  157008 addons.go:234] Setting addon storage-provisioner=true in "multinode-079070"
	I0916 10:56:30.921749  157008 addons.go:69] Setting default-storageclass=true in profile "multinode-079070"
	I0916 10:56:30.921768  157008 host.go:66] Checking if "multinode-079070" exists ...
	I0916 10:56:30.921781  157008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-079070"
	I0916 10:56:30.921817  157008 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:56:30.922110  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:56:30.922249  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:56:30.923572  157008 out.go:177] * Verifying Kubernetes components...
	I0916 10:56:30.925109  157008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:30.944133  157008 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:56:30.944364  157008 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:56:30.944358  157008 kapi.go:59] client config for multinode-079070: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:56:30.944941  157008 addons.go:234] Setting addon default-storageclass=true in "multinode-079070"
	I0916 10:56:30.944971  157008 host.go:66] Checking if "multinode-079070" exists ...
	I0916 10:56:30.945310  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:56:30.945511  157008 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:56:30.946317  157008 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:56:30.946339  157008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:56:30.946394  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:30.973774  157008 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:56:30.973800  157008 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:56:30.973860  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:30.980682  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:56:30.997301  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:56:31.040185  157008 command_runner.go:130] > apiVersion: v1
	I0916 10:56:31.040213  157008 command_runner.go:130] > data:
	I0916 10:56:31.040219  157008 command_runner.go:130] >   Corefile: |
	I0916 10:56:31.040224  157008 command_runner.go:130] >     .:53 {
	I0916 10:56:31.040230  157008 command_runner.go:130] >         errors
	I0916 10:56:31.040235  157008 command_runner.go:130] >         health {
	I0916 10:56:31.040242  157008 command_runner.go:130] >            lameduck 5s
	I0916 10:56:31.040249  157008 command_runner.go:130] >         }
	I0916 10:56:31.040254  157008 command_runner.go:130] >         ready
	I0916 10:56:31.040264  157008 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0916 10:56:31.040275  157008 command_runner.go:130] >            pods insecure
	I0916 10:56:31.040286  157008 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0916 10:56:31.040293  157008 command_runner.go:130] >            ttl 30
	I0916 10:56:31.040298  157008 command_runner.go:130] >         }
	I0916 10:56:31.040305  157008 command_runner.go:130] >         prometheus :9153
	I0916 10:56:31.040322  157008 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0916 10:56:31.040334  157008 command_runner.go:130] >            max_concurrent 1000
	I0916 10:56:31.040339  157008 command_runner.go:130] >         }
	I0916 10:56:31.040345  157008 command_runner.go:130] >         cache 30
	I0916 10:56:31.040351  157008 command_runner.go:130] >         loop
	I0916 10:56:31.040358  157008 command_runner.go:130] >         reload
	I0916 10:56:31.040367  157008 command_runner.go:130] >         loadbalance
	I0916 10:56:31.040375  157008 command_runner.go:130] >     }
	I0916 10:56:31.040383  157008 command_runner.go:130] > kind: ConfigMap
	I0916 10:56:31.040392  157008 command_runner.go:130] > metadata:
	I0916 10:56:31.040404  157008 command_runner.go:130] >   creationTimestamp: "2024-09-16T10:56:25Z"
	I0916 10:56:31.040414  157008 command_runner.go:130] >   name: coredns
	I0916 10:56:31.040422  157008 command_runner.go:130] >   namespace: kube-system
	I0916 10:56:31.040431  157008 command_runner.go:130] >   resourceVersion: "230"
	I0916 10:56:31.040442  157008 command_runner.go:130] >   uid: 61333853-db84-4ece-9b85-fe8b8c445fe7
	I0916 10:56:31.044393  157008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:56:31.122783  157008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:56:31.321465  157008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:56:31.321529  157008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:56:31.552916  157008 command_runner.go:130] > configmap/coredns replaced
	I0916 10:56:31.552954  157008 start.go:971] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
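Decoded, the sed pipeline above rewrites the Corefile shown earlier in two places: it inserts a log directive before errors, and a hosts block before the forward plugin so that host.minikube.internal resolves to the host gateway IP. Reconstructed from those sed expressions, the patched server block reads:

    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
           192.168.67.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }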
	I0916 10:56:31.553368  157008 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:56:31.553390  157008 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:56:31.553581  157008 kapi.go:59] client config for multinode-079070: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:56:31.553718  157008 kapi.go:59] client config for multinode-079070: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:56:31.553807  157008 node_ready.go:35] waiting up to 6m0s for node "multinode-079070" to be "Ready" ...
	I0916 10:56:31.553888  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:31.553896  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:31.553903  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:31.553907  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:31.554077  157008 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 10:56:31.554091  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:31.554102  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:31.554114  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:31.627866  157008 round_trippers.go:574] Response Status: 200 OK in 73 milliseconds
	I0916 10:56:31.627956  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:31.627982  157008 round_trippers.go:580]     Audit-Id: 51185b8b-8f43-4861-b97d-1b9d042a2f64
	I0916 10:56:31.627991  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:31.627997  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:31.628001  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:31.628006  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:31.628009  157008 round_trippers.go:580]     Content-Length: 291
	I0916 10:56:31.628042  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:31 GMT
	I0916 10:56:31.628069  157008 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5cb441ca-082b-4eb4-a4cd-53c25782759a","resourceVersion":"344","creationTimestamp":"2024-09-16T10:56:25Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 10:56:31.628270  157008 round_trippers.go:574] Response Status: 200 OK in 74 milliseconds
	I0916 10:56:31.628290  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:31.628300  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:31.628304  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:31.628311  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:31 GMT
	I0916 10:56:31.628315  157008 round_trippers.go:580]     Audit-Id: 0e11a036-6be7-479e-b7e9-2de2a6190cd8
	I0916 10:56:31.628319  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:31.628323  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:31.628511  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:31.628598  157008 request.go:1351] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5cb441ca-082b-4eb4-a4cd-53c25782759a","resourceVersion":"344","creationTimestamp":"2024-09-16T10:56:25Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 10:56:31.628662  157008 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 10:56:31.628670  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:31.628680  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:31.628686  157008 round_trippers.go:473]     Content-Type: application/json
	I0916 10:56:31.628692  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:31.629346  157008 node_ready.go:49] node "multinode-079070" has status "Ready":"True"
	I0916 10:56:31.629363  157008 node_ready.go:38] duration metric: took 75.539349ms for node "multinode-079070" to be "Ready" ...
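node_ready above treats the node as Ready as soon as the Node object's Ready condition is True; the GET on /api/v1/nodes/multinode-079070 returns that in status.conditions. The equivalent check in Go against a decoded Node object (a sketch; the function name is hypothetical):

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // nodeIsReady reports whether the node's NodeReady condition is True.
    func nodeIsReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }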
	I0916 10:56:31.629373  157008 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:56:31.629420  157008 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:56:31.629433  157008 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:56:31.629491  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:31.629497  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:31.629508  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:31.629514  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:31.634812  157008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:56:31.634836  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:31.634845  157008 round_trippers.go:580]     Audit-Id: 446936f7-c9fa-4538-a97e-d9b0b4dbffec
	I0916 10:56:31.634851  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:31.634856  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:31.634860  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:31.634865  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:31.634871  157008 round_trippers.go:580]     Content-Length: 291
	I0916 10:56:31.634877  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:31 GMT
	I0916 10:56:31.634904  157008 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5cb441ca-082b-4eb4-a4cd-53c25782759a","resourceVersion":"347","creationTimestamp":"2024-09-16T10:56:25Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 10:56:31.635415  157008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:56:31.635433  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:31.635442  157008 round_trippers.go:580]     Audit-Id: 43853a5b-f579-401f-b5c9-2ee3f298e9cb
	I0916 10:56:31.635446  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:31.635451  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:31.635456  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:31.635460  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:31.635464  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:31 GMT
	I0916 10:56:31.636786  157008 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"346"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 61465 chars]
	I0916 10:56:31.641995  157008 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:31.642157  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:31.642183  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:31.642203  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:31.642217  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:31.644661  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:31.644680  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:31.644688  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:31.644693  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:31.644700  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:31 GMT
	I0916 10:56:31.644704  157008 round_trippers.go:580]     Audit-Id: 2623bbb1-2d9a-4028-aa38-e7debef1e200
	I0916 10:56:31.644708  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:31.644712  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:31.644851  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:31.645455  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:31.645476  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:31.645488  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:31.645496  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:31.648810  157008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:56:31.648826  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:31.648833  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:31.648837  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:31.648841  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:31 GMT
	I0916 10:56:31.648846  157008 round_trippers.go:580]     Audit-Id: 660d80fa-17b4-4ed1-8159-38e17c40fa38
	I0916 10:56:31.648867  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:31.648877  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:31.649248  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:31.972548  157008 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0916 10:56:32.022566  157008 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0916 10:56:32.030400  157008 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0916 10:56:32.039038  157008 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0916 10:56:32.047162  157008 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0916 10:56:32.054416  157008 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 10:56:32.054444  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:32.054456  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:32.054461  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:32.056621  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:32.056648  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:32.056658  157008 round_trippers.go:580]     Content-Length: 291
	I0916 10:56:32.056664  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:32 GMT
	I0916 10:56:32.056669  157008 round_trippers.go:580]     Audit-Id: 98922aa7-f49e-4e44-adf3-12632ccf9cf5
	I0916 10:56:32.056675  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:32.056680  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:32.056692  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:32.056704  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:32.056734  157008 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5cb441ca-082b-4eb4-a4cd-53c25782759a","resourceVersion":"357","creationTimestamp":"2024-09-16T10:56:25Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0916 10:56:32.056744  157008 command_runner.go:130] > pod/storage-provisioner created
	I0916 10:56:32.056849  157008 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-079070" context rescaled to 1 replicas
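The GET/PUT pair on .../deployments/coredns/scale above is the standard Scale-subresource dance: read the current Scale object, set spec.replicas, and write it back (kubeadm deploys two CoreDNS replicas; minikube trims that to one on a single-node cluster, which is why status briefly shows 2 while spec is already 1). The client-go equivalent, again assuming a configured clientset cs (sketch, hypothetical function name):

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS sets the coredns deployment to one replica via the
    // autoscaling/v1 Scale subresource, mirroring the GET/PUT in the log.
    func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = 1
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }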
	I0916 10:56:32.062302  157008 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0916 10:56:32.062453  157008 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 10:56:32.062470  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:32.062481  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:32.062488  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:32.064682  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:32.064708  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:32.064717  157008 round_trippers.go:580]     Audit-Id: b3d35512-1e1a-450f-8dbb-a6eea9961197
	I0916 10:56:32.064723  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:32.064729  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:32.064734  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:32.064738  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:32.064742  157008 round_trippers.go:580]     Content-Length: 1273
	I0916 10:56:32.064746  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:32 GMT
	I0916 10:56:32.064780  157008 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"370"},"items":[{"metadata":{"name":"standard","uid":"5f2cf213-a251-482f-97a5-f1e644f2e8ce","resourceVersion":"349","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0916 10:56:32.065262  157008 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5f2cf213-a251-482f-97a5-f1e644f2e8ce","resourceVersion":"349","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:56:32.065340  157008 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:56:32.065355  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:32.065365  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:32.065373  157008 round_trippers.go:473]     Content-Type: application/json
	I0916 10:56:32.065379  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:32.129019  157008 round_trippers.go:574] Response Status: 200 OK in 63 milliseconds
	I0916 10:56:32.129049  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:32.129059  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:32.129066  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:32.129070  157008 round_trippers.go:580]     Content-Length: 1220
	I0916 10:56:32.129074  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:32 GMT
	I0916 10:56:32.129079  157008 round_trippers.go:580]     Audit-Id: 1e443822-5287-4100-a3f6-8394ffb54563
	I0916 10:56:32.129083  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:32.129112  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:32.129393  157008 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5f2cf213-a251-482f-97a5-f1e644f2e8ce","resourceVersion":"349","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:56:32.131114  157008 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 10:56:32.132342  157008 addons.go:510] duration metric: took 1.210706788s for enable addons: enabled=[storage-provisioner default-storageclass]
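The default-storageclass addon enabled above works by applying a StorageClass named standard whose storageclass.kubernetes.io/is-default-class annotation is "true"; that annotation alone is what makes PVCs without an explicit class bind to it. A short Go sketch that locates the default class the same way, once more assuming a configured clientset cs (hypothetical helper):

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // defaultStorageClass returns the name of the StorageClass annotated as default.
    func defaultStorageClass(ctx context.Context, cs kubernetes.Interface) (string, error) {
        list, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            return "", err
        }
        for _, sc := range list.Items {
            if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
                return sc.Name, nil
            }
        }
        return "", fmt.Errorf("no default StorageClass found")
    }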
	I0916 10:56:32.142636  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:32.142659  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:32.142671  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:32.142677  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:32.145184  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:32.145212  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:32.145222  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:32.145227  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:32.145232  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:32.145237  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:32 GMT
	I0916 10:56:32.145241  157008 round_trippers.go:580]     Audit-Id: 69eae447-8caa-43f1-af36-1a3c5c6e846f
	I0916 10:56:32.145244  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:32.150096  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:32.151140  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:32.151161  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:32.151171  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:32.151177  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:32.154690  157008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:56:32.154705  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:32.154712  157008 round_trippers.go:580]     Audit-Id: 0047e009-782b-4643-9062-94b8d551a82e
	I0916 10:56:32.154715  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:32.154718  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:32.154721  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:32.154723  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:32.154726  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:32 GMT
	I0916 10:56:32.154898  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:32.642474  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:32.642496  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:32.642504  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:32.642508  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:32.644888  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:32.644909  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:32.644915  157008 round_trippers.go:580]     Audit-Id: 5b94c7c5-19d4-4ee6-89fe-bcfdd621bfec
	I0916 10:56:32.644919  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:32.644923  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:32.644926  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:32.644938  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:32.644942  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:32 GMT
	I0916 10:56:32.645124  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:32.645665  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:32.645682  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:32.645690  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:32.645694  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:32.647448  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:32.647467  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:32.647475  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:32.647480  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:32.647483  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:32.647487  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:32.647490  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:32 GMT
	I0916 10:56:32.647494  157008 round_trippers.go:580]     Audit-Id: 6acd9f16-52f1-46a1-b390-f1942b0abdac
	I0916 10:56:32.647603  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:33.142223  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:33.142245  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:33.142253  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:33.142257  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:33.144492  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:33.144513  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:33.144520  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:33 GMT
	I0916 10:56:33.144523  157008 round_trippers.go:580]     Audit-Id: 8054e685-51c7-4899-8b50-34e1ee7c903b
	I0916 10:56:33.144526  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:33.144528  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:33.144531  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:33.144533  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:33.144741  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:33.145213  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:33.145232  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:33.145241  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:33.145246  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:33.147109  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:33.147127  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:33.147136  157008 round_trippers.go:580]     Audit-Id: 75b4fee9-8b18-4680-9c74-1c82385fa12a
	I0916 10:56:33.147139  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:33.147142  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:33.147145  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:33.147147  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:33.147150  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:33 GMT
	I0916 10:56:33.147263  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:33.642964  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:33.642989  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:33.643001  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:33.643007  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:33.645190  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:33.645210  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:33.645219  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:33 GMT
	I0916 10:56:33.645225  157008 round_trippers.go:580]     Audit-Id: 8fa16d23-90bd-43b6-a601-65b154b1d4fc
	I0916 10:56:33.645229  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:33.645233  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:33.645237  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:33.645242  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:33.645418  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:33.645886  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:33.645903  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:33.645913  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:33.645919  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:33.647644  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:33.647660  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:33.647666  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:33.647670  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:33 GMT
	I0916 10:56:33.647674  157008 round_trippers.go:580]     Audit-Id: e6537a1e-d810-40d7-8dbd-b88d44a28624
	I0916 10:56:33.647676  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:33.647680  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:33.647683  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:33.647842  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:33.648146  157008 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
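
	[editor's illustration] The cycle above — GET the coredns Pod, GET its Node, then log the Pod's Ready condition — repeats roughly every 500ms for the rest of this wait (10:56:33.642 → 10:56:34.142 → 10:56:34.642, ...). A minimal client-go sketch of the equivalent readiness check, as an illustration only (this is not minikube's pod_ready implementation; it assumes a standard kubeconfig at the default path and reuses the pod name from the log):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: kubeconfig at the default location (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)

		// Pod name taken from the log lines above.
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
			"coredns-7c65d6cfc9-ft9gh", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
	}

	The Ready condition is posted by the kubelet once the container passes its readiness probe; until then it stays False, which is exactly what the "Ready":"False" lines here record.
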
	I0916 10:56:34.142496  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:34.142518  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:34.142531  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:34.142539  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:34.144972  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:34.144993  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:34.145003  157008 round_trippers.go:580]     Audit-Id: e7381edd-9b91-4222-8325-456b90d96f77
	I0916 10:56:34.145011  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:34.145016  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:34.145019  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:34.145022  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:34.145027  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:34 GMT
	I0916 10:56:34.145215  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:34.145723  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:34.145738  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:34.145745  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:34.145751  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:34.147626  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:34.147645  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:34.147653  157008 round_trippers.go:580]     Audit-Id: 336760eb-2369-4d0d-9c21-43da9df1c17f
	I0916 10:56:34.147659  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:34.147664  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:34.147668  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:34.147695  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:34.147704  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:34 GMT
	I0916 10:56:34.147819  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:34.642360  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:34.642382  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:34.642390  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:34.642394  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:34.644646  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:34.644663  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:34.644672  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:34.644679  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:34.644685  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:34.644690  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:34 GMT
	I0916 10:56:34.644694  157008 round_trippers.go:580]     Audit-Id: 8f5d2094-4df7-464a-af25-356f4aa2d209
	I0916 10:56:34.644698  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:34.644864  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:34.645352  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:34.645365  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:34.645372  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:34.645376  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:34.647030  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:34.647045  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:34.647051  157008 round_trippers.go:580]     Audit-Id: 357886cd-0a6b-4abd-88ef-45638f6b15e1
	I0916 10:56:34.647054  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:34.647058  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:34.647064  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:34.647068  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:34.647071  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:34 GMT
	I0916 10:56:34.647247  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:35.142882  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:35.142907  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:35.142915  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:35.142920  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:35.145165  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:35.145185  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:35.145195  157008 round_trippers.go:580]     Audit-Id: 407c2dc4-abd0-4a01-ae00-17f731397130
	I0916 10:56:35.145201  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:35.145205  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:35.145211  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:35.145217  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:35.145221  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:35 GMT
	I0916 10:56:35.145369  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:35.145805  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:35.145816  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:35.145824  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:35.145827  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:35.147496  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:35.147512  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:35.147517  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:35 GMT
	I0916 10:56:35.147598  157008 round_trippers.go:580]     Audit-Id: 9283b9fa-9fd1-4121-9154-d2c15f23c59a
	I0916 10:56:35.147614  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:35.147621  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:35.147626  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:35.147631  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:35.147730  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:35.642344  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:35.642366  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:35.642373  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:35.642377  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:35.644664  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:35.644690  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:35.644702  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:35.644707  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:35.644711  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:35.644717  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:35.644720  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:35 GMT
	I0916 10:56:35.644723  157008 round_trippers.go:580]     Audit-Id: 99f7bdb0-c491-48ae-b869-9bd95ea9e71b
	I0916 10:56:35.644833  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:35.645260  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:35.645273  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:35.645280  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:35.645284  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:35.647086  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:35.647101  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:35.647108  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:35.647112  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:35 GMT
	I0916 10:56:35.647114  157008 round_trippers.go:580]     Audit-Id: c211c660-502c-42a9-b4c9-b408e079465b
	I0916 10:56:35.647117  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:35.647120  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:35.647123  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:35.647256  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:36.142911  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:36.142934  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:36.142942  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:36.142946  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:36.145213  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:36.145236  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:36.145245  157008 round_trippers.go:580]     Audit-Id: 78fe43db-8f88-44e7-bb2c-21fb2b6cb58b
	I0916 10:56:36.145250  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:36.145255  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:36.145259  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:36.145264  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:36.145267  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:36 GMT
	I0916 10:56:36.145412  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:36.145900  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:36.145914  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:36.145921  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:36.145926  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:36.147657  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:36.147671  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:36.147678  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:36 GMT
	I0916 10:56:36.147682  157008 round_trippers.go:580]     Audit-Id: 6b3f4435-56cc-48e7-b362-c549e5237d88
	I0916 10:56:36.147685  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:36.147689  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:36.147691  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:36.147696  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:36.147857  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:36.148125  157008 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
	I0916 10:56:36.642550  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:36.642574  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:36.642582  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:36.642586  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:36.645022  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:36.645051  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:36.645061  157008 round_trippers.go:580]     Audit-Id: 799f67ff-f54f-4a15-8940-318d582a7b9f
	I0916 10:56:36.645067  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:36.645073  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:36.645079  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:36.645086  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:36.645090  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:36 GMT
	I0916 10:56:36.645316  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:36.645795  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:36.645814  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:36.645823  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:36.645830  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:36.647699  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:36.647718  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:36.647728  157008 round_trippers.go:580]     Audit-Id: 7c0f1549-8fbe-4365-bc32-3097f6a77717
	I0916 10:56:36.647746  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:36.647751  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:36.647755  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:36.647759  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:36.647763  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:36 GMT
	I0916 10:56:36.647899  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:37.142452  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:37.142479  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:37.142489  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:37.142495  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:37.144769  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:37.144793  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:37.144800  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:37.144805  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:37.144809  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:37.144813  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:37.144816  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:37 GMT
	I0916 10:56:37.144820  157008 round_trippers.go:580]     Audit-Id: 2c9cc07a-3013-472a-a752-fed7ab9e817c
	I0916 10:56:37.145020  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:37.145532  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:37.145550  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:37.145557  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:37.145561  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:37.147371  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:37.147386  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:37.147392  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:37.147396  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:37.147399  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:37.147403  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:37.147406  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:37 GMT
	I0916 10:56:37.147409  157008 round_trippers.go:580]     Audit-Id: 22f7bc0d-20f5-45e1-91b5-3da35c74ac71
	I0916 10:56:37.147537  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:37.643232  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:37.643257  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:37.643268  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:37.643274  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:37.645538  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:37.645561  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:37.645569  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:37 GMT
	I0916 10:56:37.645574  157008 round_trippers.go:580]     Audit-Id: 76a91815-41bd-485b-8b59-4264bdbeefb6
	I0916 10:56:37.645581  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:37.645586  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:37.645593  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:37.645599  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:37.645771  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:37.646349  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:37.646367  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:37.646377  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:37.646383  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:37.648034  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:37.648055  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:37.648062  157008 round_trippers.go:580]     Audit-Id: c56787a8-cc48-48bc-9191-497663001f45
	I0916 10:56:37.648065  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:37.648069  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:37.648073  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:37.648076  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:37.648079  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:37 GMT
	I0916 10:56:37.648217  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:38.142890  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:38.142912  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:38.142920  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:38.142924  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:38.145351  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:38.145378  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:38.145387  157008 round_trippers.go:580]     Audit-Id: dab850a8-08d5-47f3-b3ab-7d1db3e8aa1c
	I0916 10:56:38.145392  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:38.145395  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:38.145398  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:38.145402  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:38.145408  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:38 GMT
	I0916 10:56:38.145600  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:38.146040  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:38.146055  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:38.146067  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:38.146074  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:38.147918  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:38.147938  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:38.147945  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:38.147949  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:38 GMT
	I0916 10:56:38.147954  157008 round_trippers.go:580]     Audit-Id: 74a6db3a-0e4a-4eb4-877b-85a0b5569740
	I0916 10:56:38.147958  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:38.147961  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:38.147964  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:38.148138  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:38.148437  157008 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
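
	[editor's illustration] The timestamps around these status lines show the wait loop's ~500ms retry cadence (10:56:38.148 → 10:56:38.642 → 10:56:39.143). A hedged sketch of such a loop using apimachinery's wait helpers — again an illustration rather than the actual minikube code, with the 500ms interval read off the log and the 4-minute timeout an assumed placeholder:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig at the default location (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)

		// Poll every 500ms until the pod reports Ready or the (assumed)
		// 4-minute budget expires; a pod that never turns Ready, as in this
		// run, ends the wait with a timeout error.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx,
					"coredns-7c65d6cfc9-ft9gh", metav1.GetOptions{})
				if err != nil {
					return false, err // abort the wait on API errors
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil // condition not posted yet; keep polling
			})
		fmt.Println("wait result:", err)
	}
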
	I0916 10:56:38.642732  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:38.642752  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:38.642759  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:38.642762  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:38.644965  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:38.644986  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:38.644995  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:38.645000  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:38.645006  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:38.645021  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:38.645028  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:38 GMT
	I0916 10:56:38.645032  157008 round_trippers.go:580]     Audit-Id: e4d71c8a-1bc4-48b3-be36-8396d0758057
	I0916 10:56:38.645234  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:38.645757  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:38.645773  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:38.645779  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:38.645782  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:38.647400  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:38.647420  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:38.647429  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:38.647435  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:38.647442  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:38 GMT
	I0916 10:56:38.647447  157008 round_trippers.go:580]     Audit-Id: 09569ea1-6bdb-44d1-8497-4e5e61e7bb6d
	I0916 10:56:38.647452  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:38.647457  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:38.647564  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:39.143252  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:39.143280  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:39.143291  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:39.143297  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:39.145422  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:39.145443  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:39.145454  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:39.145461  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:39.145467  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:39.145472  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:39.145478  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:39 GMT
	I0916 10:56:39.145485  157008 round_trippers.go:580]     Audit-Id: 442474b2-b52a-4ca2-9bb0-7e1fc453e12f
	I0916 10:56:39.145698  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:39.146154  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:39.146172  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:39.146179  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:39.146182  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:39.147860  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:39.147875  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:39.147881  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:39.147884  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:39 GMT
	I0916 10:56:39.147887  157008 round_trippers.go:580]     Audit-Id: 1fe38ef4-1582-478d-bb85-7e66944d8580
	I0916 10:56:39.147890  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:39.147893  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:39.147895  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:39.148018  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:39.642669  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:39.642689  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:39.642697  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:39.642703  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:39.645064  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:39.645090  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:39.645100  157008 round_trippers.go:580]     Audit-Id: e9897d13-7c61-4ae0-80a2-5fab644839e5
	I0916 10:56:39.645106  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:39.645110  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:39.645114  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:39.645118  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:39.645123  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:39 GMT
	I0916 10:56:39.645283  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:39.645720  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:39.645733  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:39.645740  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:39.645743  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:39.649244  157008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:56:39.649262  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:39.649271  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:39 GMT
	I0916 10:56:39.649275  157008 round_trippers.go:580]     Audit-Id: c97461af-803f-475a-8436-3b1c370b135e
	I0916 10:56:39.649280  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:39.649283  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:39.649285  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:39.649289  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:39.649417  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:40.143022  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:40.143043  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:40.143052  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:40.143056  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:40.145373  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:40.145390  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:40.145397  157008 round_trippers.go:580]     Audit-Id: 8bc3c74e-e536-4d48-b687-9246f0f84bd7
	I0916 10:56:40.145402  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:40.145406  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:40.145408  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:40.145411  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:40.145413  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:40 GMT
	I0916 10:56:40.145598  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:40.146167  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:40.146181  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:40.146189  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:40.146196  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:40.148098  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:40.148117  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:40.148126  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:40.148131  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:40.148136  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:40.148140  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:40.148146  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:40 GMT
	I0916 10:56:40.148151  157008 round_trippers.go:580]     Audit-Id: 5dc070cf-aa48-44c7-a6f6-17ded03df785
	I0916 10:56:40.148326  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:40.148611  157008 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
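
The pair of GETs above (the CoreDNS pod, then its node) repeats on a roughly 500ms cadence, and pod_ready.go logs the pod's Ready condition after each round trip. A minimal sketch of the per-iteration readiness test, assuming client-go; podIsReady is a hypothetical helper name, not minikube's actual pod_ready.go code:

package podready

import (
	corev1 "k8s.io/api/core/v1"
)

// podIsReady reports whether the pod's PodReady condition is True; a False
// result corresponds to the `has status "Ready":"False"` lines in this log.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
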
	I0916 10:56:40.642964  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:40.642984  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:40.642992  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:40.642997  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:40.645424  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:40.645448  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:40.645460  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:40.645466  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:40 GMT
	I0916 10:56:40.645483  157008 round_trippers.go:580]     Audit-Id: 45d0a64d-0dc0-4a32-94de-df1e680cd584
	I0916 10:56:40.645492  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:40.645497  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:40.645506  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:40.645720  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:40.646160  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:40.646174  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:40.646181  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:40.646186  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:40.647827  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:40.647847  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:40.647855  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:40.647863  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:40.647870  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:40.647874  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:40 GMT
	I0916 10:56:40.647877  157008 round_trippers.go:580]     Audit-Id: 7e478325-d236-4bb4-ba42-24d90136f6da
	I0916 10:56:40.647881  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:40.648015  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:41.142934  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:41.142959  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:41.142971  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:41.142977  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:41.145157  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:41.145175  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:41.145182  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:41.145185  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:41.145187  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:41.145190  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:41.145192  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:41 GMT
	I0916 10:56:41.145196  157008 round_trippers.go:580]     Audit-Id: 66ed0539-cbb2-4f81-8a3f-2a9e17642fd8
	I0916 10:56:41.145470  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:41.146032  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:41.146049  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:41.146059  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:41.146064  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:41.147871  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:41.147896  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:41.147906  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:41.147912  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:41 GMT
	I0916 10:56:41.147916  157008 round_trippers.go:580]     Audit-Id: bf8ea6ac-a4ca-454b-8c94-f58b5d38d2f5
	I0916 10:56:41.147919  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:41.147922  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:41.147925  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:41.148050  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:41.642699  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:41.642720  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:41.642729  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:41.642733  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:41.645098  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:41.645122  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:41.645130  157008 round_trippers.go:580]     Audit-Id: 880c64a5-729d-4843-b5c4-e4a615acade3
	I0916 10:56:41.645135  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:41.645139  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:41.645143  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:41.645147  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:41.645150  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:41 GMT
	I0916 10:56:41.645337  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:41.645971  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:41.645991  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:41.646002  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:41.646007  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:41.647892  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:41.647912  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:41.647919  157008 round_trippers.go:580]     Audit-Id: cee4b896-dc55-4db2-8e82-79941468f1b1
	I0916 10:56:41.647922  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:41.647926  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:41.647928  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:41.647932  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:41.647935  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:41 GMT
	I0916 10:56:41.648107  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:42.142866  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:42.142888  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:42.142897  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:42.142900  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:42.145254  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:42.145276  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:42.145284  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:42.145290  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:42.145294  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:42.145298  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:42 GMT
	I0916 10:56:42.145302  157008 round_trippers.go:580]     Audit-Id: c8e2a663-8e72-4c44-98ba-1985318a55fb
	I0916 10:56:42.145305  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:42.145426  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:42.145891  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:42.145905  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:42.145912  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:42.145916  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:42.147587  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:42.147609  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:42.147616  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:42.147620  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:42.147624  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:42.147626  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:42 GMT
	I0916 10:56:42.147629  157008 round_trippers.go:580]     Audit-Id: b3d62189-418c-4ad6-a27b-858a4c72209a
	I0916 10:56:42.147634  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:42.147925  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:42.642575  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:42.642600  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:42.642613  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:42.642618  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:42.644849  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:42.644867  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:42.644873  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:42.644877  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:42.644881  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:42.644883  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:42.644886  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:42 GMT
	I0916 10:56:42.644889  157008 round_trippers.go:580]     Audit-Id: 190c79eb-c977-443a-aa61-ca45d56ca3ac
	I0916 10:56:42.645072  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:42.645534  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:42.645548  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:42.645556  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:42.645560  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:42.647272  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:42.647293  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:42.647302  157008 round_trippers.go:580]     Audit-Id: 2b929a02-e11b-421a-841f-968f9fe1a429
	I0916 10:56:42.647313  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:42.647325  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:42.647330  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:42.647337  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:42.647343  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:42 GMT
	I0916 10:56:42.647424  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:42.647812  157008 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
	I0916 10:56:43.143080  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:43.143101  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:43.143112  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:43.143118  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:43.145287  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:43.145319  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:43.145329  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:43 GMT
	I0916 10:56:43.145334  157008 round_trippers.go:580]     Audit-Id: 67756c19-6cb9-412d-adc5-03e47fff2c5a
	I0916 10:56:43.145339  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:43.145343  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:43.145349  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:43.145359  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:43.145569  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:43.146006  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:43.146017  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:43.146024  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:43.146028  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:43.147834  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:43.147857  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:43.147867  157008 round_trippers.go:580]     Audit-Id: 395753cd-e690-4930-96f5-2daf875c1fd9
	I0916 10:56:43.147872  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:43.147876  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:43.147881  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:43.147885  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:43.147889  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:43 GMT
	I0916 10:56:43.147978  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:43.642597  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:43.642622  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:43.642632  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:43.642640  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:43.645073  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:43.645094  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:43.645099  157008 round_trippers.go:580]     Audit-Id: d08fecb1-a4f4-48d2-8780-b07e986801cc
	I0916 10:56:43.645102  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:43.645109  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:43.645114  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:43.645118  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:43.645123  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:43 GMT
	I0916 10:56:43.645364  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:43.645847  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:43.645861  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:43.645868  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:43.645871  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:43.647621  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:43.647644  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:43.647653  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:43 GMT
	I0916 10:56:43.647656  157008 round_trippers.go:580]     Audit-Id: 8e6f6780-f356-4f84-afc4-d4512f3a49d7
	I0916 10:56:43.647660  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:43.647664  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:43.647667  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:43.647671  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:43.647822  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:44.142464  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:44.142494  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.142502  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.142505  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.144755  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:44.144777  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.144783  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.144788  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.144792  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.144795  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.144799  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.144802  157008 round_trippers.go:580]     Audit-Id: f77168c7-e6df-4b08-989c-911b1e9cda12
	I0916 10:56:44.144905  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"411","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6480 chars]
	I0916 10:56:44.145462  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.145482  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.145493  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.145498  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.147326  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.147347  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.147356  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.147362  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.147365  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.147367  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.147371  157008 round_trippers.go:580]     Audit-Id: d51edc01-e188-4104-8c51-823ed1e940ef
	I0916 10:56:44.147373  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.147525  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:44.147882  157008 pod_ready.go:93] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:44.147900  157008 pod_ready.go:82] duration metric: took 12.505831153s for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
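
The wait ends once the pod's resourceVersion advances from 345 to 411 and the Ready condition flips to True, with the elapsed time (about 12.5s here) reported as a duration metric. A minimal sketch of such a poll-until-ready loop, assuming client-go and apimachinery's wait package; the 500ms interval is inferred from the request spacing above, and waitPodReady is a hypothetical name, not minikube's exact implementation:

package podready

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady re-fetches the pod every 500ms until its Ready condition is
// True or the timeout expires, mirroring the GET cadence visible in this log.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
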
	I0916 10:56:44.147910  157008 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ql4g8" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.147979  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ql4g8
	I0916 10:56:44.147988  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.147999  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.148007  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.149787  157008 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0916 10:56:44.149803  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.149809  157008 round_trippers.go:580]     Audit-Id: 60bef53e-3bf4-4412-a111-cdaea4798b44
	I0916 10:56:44.149813  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.149817  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.149821  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.149832  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.149836  157008 round_trippers.go:580]     Content-Length: 216
	I0916 10:56:44.149843  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.149870  157008 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-7c65d6cfc9-ql4g8\" not found","reason":"NotFound","details":{"name":"coredns-7c65d6cfc9-ql4g8","kind":"pods"},"code":404}
	I0916 10:56:44.150037  157008 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-ql4g8" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-ql4g8" not found
	I0916 10:56:44.150054  157008 pod_ready.go:82] duration metric: took 2.137296ms for pod "coredns-7c65d6cfc9-ql4g8" in "kube-system" namespace to be "Ready" ...
	E0916 10:56:44.150063  157008 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-ql4g8" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-ql4g8" not found
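
The 404 for coredns-7c65d6cfc9-ql4g8 (a replica that no longer exists) is logged at error level but deliberately skipped rather than failing the wait. A minimal sketch of that NotFound-tolerant check, assuming client-go's apierrors package; readyOrGone and its skip semantics are illustrative, not minikube's exact logic:

package podready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// readyOrGone treats a deleted pod as "done waiting" instead of as a failure,
// matching the "(skipping!)" behavior in the log above.
func readyOrGone(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // pod no longer exists; nothing left to wait for
	}
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
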
	I0916 10:56:44.150070  157008 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.150125  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-079070
	I0916 10:56:44.150136  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.150147  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.150159  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.152022  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.152043  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.152051  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.152057  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.152063  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.152069  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.152075  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.152079  157008 round_trippers.go:580]     Audit-Id: 7818f6dd-4220-4c59-b1e7-1c05c7e61fd6
	I0916 10:56:44.152224  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-079070","namespace":"kube-system","uid":"fdcbee48-0d3b-47d7-acd2-298979345c7a","resourceVersion":"400","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.mirror":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.seen":"2024-09-16T10:56:25.844758522Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6440 chars]
	I0916 10:56:44.152628  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.152642  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.152649  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.152653  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.154262  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.154279  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.154285  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.154289  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.154292  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.154295  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.154298  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.154300  157008 round_trippers.go:580]     Audit-Id: 6f8c9198-ce5a-43ce-954a-2d68287215ba
	I0916 10:56:44.154411  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:44.154691  157008 pod_ready.go:93] pod "etcd-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:44.154708  157008 pod_ready.go:82] duration metric: took 4.632679ms for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.154721  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.154775  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:56:44.154782  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.154789  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.154793  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.156659  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.156676  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.156682  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.156686  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.156689  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.156693  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.156696  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.156699  157008 round_trippers.go:580]     Audit-Id: adfef526-ccfc-44c2-a102-dd2e2a752f99
	I0916 10:56:44.156899  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"397","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8518 chars]
	I0916 10:56:44.157356  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.157372  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.157381  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.157390  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.158999  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.159012  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.159018  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.159021  157008 round_trippers.go:580]     Audit-Id: 14d97088-b923-40d0-84d0-e1cdb103c15b
	I0916 10:56:44.159024  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.159027  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.159030  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.159033  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.159125  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:44.159412  157008 pod_ready.go:93] pod "kube-apiserver-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:44.159435  157008 pod_ready.go:82] duration metric: took 4.699671ms for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.159444  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.159496  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-079070
	I0916 10:56:44.159503  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.159510  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.159514  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.161267  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.161286  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.161295  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.161300  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.161304  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.161308  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.161312  157008 round_trippers.go:580]     Audit-Id: 826f35b8-e90b-4694-906b-458f3b78a215
	I0916 10:56:44.161317  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.161423  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-079070","namespace":"kube-system","uid":"44a2f17c-654a-4d64-b474-70ce475c3afe","resourceVersion":"403","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.mirror":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.seen":"2024-09-16T10:56:25.844752735Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8093 chars]
	I0916 10:56:44.161845  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.161865  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.161874  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.161880  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.163467  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.163490  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.163499  157008 round_trippers.go:580]     Audit-Id: 480dda20-4693-4a50-9b5a-922519db13af
	I0916 10:56:44.163505  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.163509  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.163514  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.163518  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.163526  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.163618  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:44.163956  157008 pod_ready.go:93] pod "kube-controller-manager-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:44.163974  157008 pod_ready.go:82] duration metric: took 4.523834ms for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.163985  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.164041  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vhmt
	I0916 10:56:44.164048  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.164056  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.164061  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.165708  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.165729  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.165738  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.165744  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.165750  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.165755  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.165759  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.165764  157008 round_trippers.go:580]     Audit-Id: 45666010-4ea2-45ba-9eec-334bf9a42b9d
	I0916 10:56:44.165901  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2vhmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"6f3faf85-04e9-4840-855d-dd1ef9d4e463","resourceVersion":"383","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6175 chars]
	I0916 10:56:44.342829  157008 request.go:632] Waited for 176.336266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.342901  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.342909  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.342919  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.342923  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.345243  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:44.345266  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.345275  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.345280  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.345296  157008 round_trippers.go:580]     Audit-Id: cc4348c5-82ce-4efa-867e-db0d26ed964a
	I0916 10:56:44.345302  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.345307  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.345314  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.345417  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:44.345908  157008 pod_ready.go:93] pod "kube-proxy-2vhmt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:44.345936  157008 pod_ready.go:82] duration metric: took 181.942972ms for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
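	[annotation] The "Waited for ... due to client-side throttling" messages above come from client-go's token-bucket rate limiter, not from server-side priority and fairness; by default the client allows roughly 5 requests/s with a burst of 10. A minimal sketch of where those knobs live on a client-go rest.Config (the raised values are illustrative, not what minikube uses):

	    package main

	    import (
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // newClientset builds a clientset whose client-side token bucket is
	    // large enough that the "Waited for ..." delays above would not appear.
	    func newClientset(kubeconfig string) (*kubernetes.Clientset, error) {
	        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	        if err != nil {
	            return nil, err
	        }
	        cfg.QPS = 50    // default is 5 requests/s
	        cfg.Burst = 100 // default burst is 10
	        return kubernetes.NewForConfig(cfg)
	    }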
	I0916 10:56:44.345951  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.543383  157008 request.go:632] Waited for 197.342144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 10:56:44.543447  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 10:56:44.543453  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.543461  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.543465  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.545618  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:44.545638  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.545647  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.545651  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.545655  157008 round_trippers.go:580]     Audit-Id: c16b43f3-9846-468f-b142-3dc438b7c8a7
	I0916 10:56:44.545659  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.545663  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.545667  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.545800  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"395","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4975 chars]
	I0916 10:56:44.743274  157008 request.go:632] Waited for 197.065316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.743352  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.743360  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.743369  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.743374  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.745605  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:44.745624  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.745630  157008 round_trippers.go:580]     Audit-Id: 72575e76-f5ba-4148-baa3-79826b9fa941
	I0916 10:56:44.745634  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.745638  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.745641  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.745644  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.745646  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.745813  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:44.746109  157008 pod_ready.go:93] pod "kube-scheduler-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:44.746123  157008 pod_ready.go:82] duration metric: took 400.165771ms for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.746130  157008 pod_ready.go:39] duration metric: took 13.116745728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
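	[annotation] Each pod_ready.go:93 line above is the outcome of fetching the pod and testing its Ready condition. A minimal client-go sketch of that check (the helper name is mine, not minikube's):

	    package main

	    import (
	        "context"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // podReady reports whether the named kube-system pod has status
	    // condition Ready == "True", mirroring the checks logged above.
	    func podReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    }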
	I0916 10:56:44.746145  157008 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:56:44.746201  157008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:56:44.756316  157008 command_runner.go:130] > 1464
	I0916 10:56:44.757072  157008 api_server.go:72] duration metric: took 13.835479469s to wait for apiserver process to appear ...
	I0916 10:56:44.757092  157008 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:56:44.757116  157008 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0916 10:56:44.760758  157008 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0916 10:56:44.760825  157008 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0916 10:56:44.760830  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.760839  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.760844  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.761651  157008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:56:44.761671  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.761679  157008 round_trippers.go:580]     Content-Length: 263
	I0916 10:56:44.761683  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.761686  157008 round_trippers.go:580]     Audit-Id: db581900-5e49-4d17-80dd-6040f90c7677
	I0916 10:56:44.761688  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.761691  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.761694  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.761696  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.761712  157008 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 10:56:44.761804  157008 api_server.go:141] control plane version: v1.31.1
	I0916 10:56:44.761821  157008 api_server.go:131] duration metric: took 4.723091ms to wait for apiserver health ...
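	[annotation] The healthz and version probes above are plain HTTPS GETs: /healthz answers 200 with the literal body "ok", and /version returns the JSON decoded above. A hedged sketch of the same two calls (TLS verification is skipped here for brevity; a real client should trust the cluster CA instead):

	    package main

	    import (
	        "crypto/tls"
	        "encoding/json"
	        "fmt"
	        "io"
	        "net/http"
	    )

	    type versionInfo struct {
	        Major      string `json:"major"`
	        Minor      string `json:"minor"`
	        GitVersion string `json:"gitVersion"`
	    }

	    func main() {
	        c := &http.Client{Transport: &http.Transport{
	            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
	        }}

	        // /healthz returns 200 and the body "ok" when the apiserver is healthy.
	        hz, err := c.Get("https://192.168.67.2:8443/healthz")
	        if err != nil {
	            panic(err)
	        }
	        body, _ := io.ReadAll(hz.Body)
	        hz.Body.Close()
	        fmt.Printf("healthz: %d %s\n", hz.StatusCode, body)

	        // /version returns the build info logged above (e.g. v1.31.1).
	        vr, err := c.Get("https://192.168.67.2:8443/version")
	        if err != nil {
	            panic(err)
	        }
	        defer vr.Body.Close()
	        var v versionInfo
	        if err := json.NewDecoder(vr.Body).Decode(&v); err != nil {
	            panic(err)
	        }
	        fmt.Println("control plane version:", v.GitVersion)
	    }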
	I0916 10:56:44.761828  157008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:56:44.943280  157008 request.go:632] Waited for 181.380007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:44.943361  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:44.943367  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.943374  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.943379  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.946324  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:44.946346  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.946356  157008 round_trippers.go:580]     Audit-Id: e7e29ee8-f8fa-46d7-a199-50cf285a8fda
	I0916 10:56:44.946363  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.946367  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.946371  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.946376  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.946379  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.946824  157008 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"411","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58808 chars]
	I0916 10:56:44.948609  157008 system_pods.go:59] 8 kube-system pods found
	I0916 10:56:44.948642  157008 system_pods.go:61] "coredns-7c65d6cfc9-ft9gh" [8052b6a1-7257-44d4-a318-740afd039d2c] Running
	I0916 10:56:44.948650  157008 system_pods.go:61] "etcd-multinode-079070" [fdcbee48-0d3b-47d7-acd2-298979345c7a] Running
	I0916 10:56:44.948656  157008 system_pods.go:61] "kindnet-flmdv" [91449e63-0ca3-4dc6-92ef-e3c5ab102dae] Running
	I0916 10:56:44.948663  157008 system_pods.go:61] "kube-apiserver-multinode-079070" [72784d35-4a94-476c-a4e9-dc3c6c3a8c46] Running
	I0916 10:56:44.948672  157008 system_pods.go:61] "kube-controller-manager-multinode-079070" [44a2f17c-654a-4d64-b474-70ce475c3afe] Running
	I0916 10:56:44.948678  157008 system_pods.go:61] "kube-proxy-2vhmt" [6f3faf85-04e9-4840-855d-dd1ef9d4e463] Running
	I0916 10:56:44.948687  157008 system_pods.go:61] "kube-scheduler-multinode-079070" [cc5dd2a3-a136-42b4-a6f9-11c733cb38c4] Running
	I0916 10:56:44.948692  157008 system_pods.go:61] "storage-provisioner" [43862f2e-c773-468d-ab03-8b0bc0633ad4] Running
	I0916 10:56:44.948703  157008 system_pods.go:74] duration metric: took 186.86592ms to wait for pod list to return data ...
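	[annotation] The "8 kube-system pods found" summary comes from a single PodList request followed by a per-pod phase check. A short client-go equivalent of what system_pods.go logs above:

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // listSystemPods prints each kube-system pod and its phase,
	    // matching the "N kube-system pods found" summary above.
	    func listSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
	        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	        if err != nil {
	            return err
	        }
	        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	        for _, p := range pods.Items {
	            fmt.Printf("  %q [%s] %s\n", p.Name, p.UID, p.Status.Phase) // e.g. Running
	        }
	        return nil
	    }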
	I0916 10:56:44.948716  157008 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:56:45.143167  157008 request.go:632] Waited for 194.333179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:56:45.143221  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:56:45.143226  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:45.143233  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:45.143236  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:45.145785  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:45.145807  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:45.145814  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:45.145817  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:45.145822  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:45.145826  157008 round_trippers.go:580]     Content-Length: 261
	I0916 10:56:45.145829  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:45 GMT
	I0916 10:56:45.145832  157008 round_trippers.go:580]     Audit-Id: 3e574ad9-7a25-4f42-b316-2d50be01118c
	I0916 10:56:45.145834  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:45.145858  157008 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4622bf83-82d0-4a2c-a46c-d6dbfa5ce9ea","resourceVersion":"300","creationTimestamp":"2024-09-16T10:56:30Z"}}]}
	I0916 10:56:45.146022  157008 default_sa.go:45] found service account: "default"
	I0916 10:56:45.146038  157008 default_sa.go:55] duration metric: took 197.316045ms for default service account to be created ...
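	[annotation] default_sa.go is simply polling until the "default" service account exists in the "default" namespace. A minimal sketch using apimachinery's wait helpers (interval and timeout are illustrative):

	    package main

	    import (
	        "context"
	        "time"

	        apierrors "k8s.io/apimachinery/pkg/api/errors"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitForDefaultSA polls until the "default" service account exists,
	    // the same condition default_sa.go is waiting on above.
	    func waitForDefaultSA(ctx context.Context, cs *kubernetes.Clientset) error {
	        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
	            func(ctx context.Context) (bool, error) {
	                _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
	                if apierrors.IsNotFound(err) {
	                    return false, nil // not created yet; keep polling
	                }
	                return err == nil, err
	            })
	    }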
	I0916 10:56:45.146047  157008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:56:45.343489  157008 request.go:632] Waited for 197.37506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:45.343554  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:45.343571  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:45.343581  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:45.343594  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:45.346644  157008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:56:45.346666  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:45.346674  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:45.346677  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:45.346680  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:45.346683  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:45.346685  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:45 GMT
	I0916 10:56:45.346688  157008 round_trippers.go:580]     Audit-Id: ce162911-dd20-4acf-b944-e5a3e23b5b5b
	I0916 10:56:45.347074  157008 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"411","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58808 chars]
	I0916 10:56:45.348816  157008 system_pods.go:86] 8 kube-system pods found
	I0916 10:56:45.348837  157008 system_pods.go:89] "coredns-7c65d6cfc9-ft9gh" [8052b6a1-7257-44d4-a318-740afd039d2c] Running
	I0916 10:56:45.348843  157008 system_pods.go:89] "etcd-multinode-079070" [fdcbee48-0d3b-47d7-acd2-298979345c7a] Running
	I0916 10:56:45.348847  157008 system_pods.go:89] "kindnet-flmdv" [91449e63-0ca3-4dc6-92ef-e3c5ab102dae] Running
	I0916 10:56:45.348851  157008 system_pods.go:89] "kube-apiserver-multinode-079070" [72784d35-4a94-476c-a4e9-dc3c6c3a8c46] Running
	I0916 10:56:45.348858  157008 system_pods.go:89] "kube-controller-manager-multinode-079070" [44a2f17c-654a-4d64-b474-70ce475c3afe] Running
	I0916 10:56:45.348864  157008 system_pods.go:89] "kube-proxy-2vhmt" [6f3faf85-04e9-4840-855d-dd1ef9d4e463] Running
	I0916 10:56:45.348870  157008 system_pods.go:89] "kube-scheduler-multinode-079070" [cc5dd2a3-a136-42b4-a6f9-11c733cb38c4] Running
	I0916 10:56:45.348874  157008 system_pods.go:89] "storage-provisioner" [43862f2e-c773-468d-ab03-8b0bc0633ad4] Running
	I0916 10:56:45.348881  157008 system_pods.go:126] duration metric: took 202.828654ms to wait for k8s-apps to be running ...
	I0916 10:56:45.348888  157008 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:56:45.348935  157008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:56:45.359950  157008 system_svc.go:56] duration metric: took 11.051162ms WaitForService to wait for kubelet
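	[annotation] The kubelet check above boils down to running systemctl is-active --quiet over SSH and looking only at the exit code (0 means the unit is active). A local stand-in with os/exec, leaving the SSH transport out:

	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        // `systemctl is-active --quiet <unit>` exits 0 iff the unit is
	        // active, which is all system_svc.go needs to know about kubelet.
	        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	        fmt.Println("kubelet active:", err == nil)
	    }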
	I0916 10:56:45.359981  157008 kubeadm.go:582] duration metric: took 14.438390222s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:56:45.359997  157008 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:56:45.543416  157008 request.go:632] Waited for 183.343539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0916 10:56:45.543515  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:56:45.543526  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:45.543537  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:45.543543  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:45.545923  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:45.545955  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:45.545965  157008 round_trippers.go:580]     Audit-Id: f26878ca-580e-458e-ba1e-a48fce241806
	I0916 10:56:45.545971  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:45.545976  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:45.545981  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:45.545987  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:45.545997  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:45 GMT
	I0916 10:56:45.546113  157008 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manag
edFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 5074 chars]
	I0916 10:56:45.546471  157008 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:56:45.546492  157008 node_conditions.go:123] node cpu capacity is 8
	I0916 10:56:45.546508  157008 node_conditions.go:105] duration metric: took 186.505921ms to run NodePressure ...
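	[annotation] The NodePressure step reads the capacity fields of each node's status, which is where the "304681132Ki" and "cpu capacity is 8" figures above come from. A client-go sketch of the same read:

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // printNodeCapacity reports the two capacities logged above:
	    // ephemeral storage (e.g. 304681132Ki) and CPU count (e.g. 8).
	    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
	        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	        if err != nil {
	            return err
	        }
	        for _, n := range nodes.Items {
	            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	            cpu := n.Status.Capacity[corev1.ResourceCPU]
	            fmt.Printf("%s: ephemeral=%s cpu=%d\n", n.Name, storage.String(), cpu.Value())
	        }
	        return nil
	    }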
	I0916 10:56:45.546521  157008 start.go:241] waiting for startup goroutines ...
	I0916 10:56:45.546532  157008 start.go:246] waiting for cluster config update ...
	I0916 10:56:45.546548  157008 start.go:255] writing updated cluster config ...
	I0916 10:56:45.548730  157008 out.go:201] 
	I0916 10:56:45.550196  157008 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:56:45.550264  157008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 10:56:45.551810  157008 out.go:177] * Starting "multinode-079070-m02" worker node in "multinode-079070" cluster
	I0916 10:56:45.553282  157008 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:56:45.554392  157008 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:56:45.555340  157008 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:56:45.555357  157008 cache.go:56] Caching tarball of preloaded images
	I0916 10:56:45.555369  157008 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:56:45.555461  157008 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:56:45.555475  157008 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:56:45.555573  157008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	W0916 10:56:45.574716  157008 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:56:45.574739  157008 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:56:45.574828  157008 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:56:45.574844  157008 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:56:45.574848  157008 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:56:45.574857  157008 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:56:45.574864  157008 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:56:45.576034  157008 image.go:273] response: 
	I0916 10:56:45.626398  157008 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:56:45.626436  157008 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:56:45.626480  157008 start.go:360] acquireMachinesLock for multinode-079070-m02: {Name:mk1713c8fba020df744918162d1a483c7b41a015 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:56:45.626594  157008 start.go:364] duration metric: took 93.573µs to acquireMachinesLock for "multinode-079070-m02"
	I0916 10:56:45.626629  157008 start.go:93] Provisioning new machine with config: &{Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0916 10:56:45.626715  157008 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 10:56:45.628577  157008 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:56:45.628686  157008 start.go:159] libmachine.API.Create for "multinode-079070" (driver="docker")
	I0916 10:56:45.628719  157008 client.go:168] LocalClient.Create starting
	I0916 10:56:45.628809  157008 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:56:45.628844  157008 main.go:141] libmachine: Decoding PEM data...
	I0916 10:56:45.628859  157008 main.go:141] libmachine: Parsing certificate...
	I0916 10:56:45.628910  157008 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:56:45.628929  157008 main.go:141] libmachine: Decoding PEM data...
	I0916 10:56:45.628936  157008 main.go:141] libmachine: Parsing certificate...
	I0916 10:56:45.629156  157008 cli_runner.go:164] Run: docker network inspect multinode-079070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:56:45.645613  157008 network_create.go:77] Found existing network {name:multinode-079070 subnet:0xc0014afb30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0916 10:56:45.645649  157008 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-079070-m02" container
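	[annotation] kic.go derives a static address for the new container from the existing network: with gateway 192.168.67.1 and this being the second machine, it lands on 192.168.67.3. A sketch of that kind of derivation with net/netip (the exact indexing scheme is my assumption, not lifted from minikube):

	    package main

	    import (
	        "fmt"
	        "net/netip"
	    )

	    // nthIP returns the nth address after the gateway, one way a per-node
	    // static IP like 192.168.67.3 can be derived from gateway 192.168.67.1
	    // (node index 2 here; the scheme is an assumption).
	    func nthIP(gateway netip.Addr, n int) netip.Addr {
	        ip := gateway
	        for i := 0; i < n; i++ {
	            ip = ip.Next()
	        }
	        return ip
	    }

	    func main() {
	        gw := netip.MustParseAddr("192.168.67.1")
	        fmt.Println(nthIP(gw, 2)) // 192.168.67.3
	    }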
	I0916 10:56:45.645705  157008 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:56:45.662816  157008 cli_runner.go:164] Run: docker volume create multinode-079070-m02 --label name.minikube.sigs.k8s.io=multinode-079070-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:56:45.681356  157008 oci.go:103] Successfully created a docker volume multinode-079070-m02
	I0916 10:56:45.681428  157008 cli_runner.go:164] Run: docker run --rm --name multinode-079070-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-079070-m02 --entrypoint /usr/bin/test -v multinode-079070-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:56:46.179369  157008 oci.go:107] Successfully prepared a docker volume multinode-079070-m02
	I0916 10:56:46.179409  157008 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:56:46.179433  157008 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:56:46.179500  157008 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-079070-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:56:50.531696  157008 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-079070-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.352149496s)
	I0916 10:56:50.531729  157008 kic.go:203] duration metric: took 4.352293012s to extract preloaded images to volume ...
	W0916 10:56:50.531893  157008 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:56:50.532011  157008 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:56:50.581633  157008 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-079070-m02 --name multinode-079070-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-079070-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-079070-m02 --network multinode-079070 --ip 192.168.67.3 --volume multinode-079070-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:56:50.886437  157008 cli_runner.go:164] Run: docker container inspect multinode-079070-m02 --format={{.State.Running}}
	I0916 10:56:50.906368  157008 cli_runner.go:164] Run: docker container inspect multinode-079070-m02 --format={{.State.Status}}
	I0916 10:56:50.924478  157008 cli_runner.go:164] Run: docker exec multinode-079070-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:56:50.968626  157008 oci.go:144] the created container "multinode-079070-m02" has a running status.
	I0916 10:56:50.968664  157008 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa...
	I0916 10:56:51.042731  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:56:51.042776  157008 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:56:51.063220  157008 cli_runner.go:164] Run: docker container inspect multinode-079070-m02 --format={{.State.Status}}
	I0916 10:56:51.080379  157008 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:56:51.080406  157008 kic_runner.go:114] Args: [docker exec --privileged multinode-079070-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:56:51.123842  157008 cli_runner.go:164] Run: docker container inspect multinode-079070-m02 --format={{.State.Status}}
	I0916 10:56:51.144968  157008 machine.go:93] provisionDockerMachine start ...
	I0916 10:56:51.145060  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:51.163108  157008 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:51.163413  157008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I0916 10:56:51.163431  157008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:56:51.164211  157008 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55248->127.0.0.1:32913: read: connection reset by peer
	I0916 10:56:54.295104  157008 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070-m02
	
	I0916 10:56:54.295133  157008 ubuntu.go:169] provisioning hostname "multinode-079070-m02"
	I0916 10:56:54.295195  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:54.311975  157008 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:54.312178  157008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I0916 10:56:54.312197  157008 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-079070-m02 && echo "multinode-079070-m02" | sudo tee /etc/hostname
	I0916 10:56:54.454703  157008 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070-m02
	
	I0916 10:56:54.454767  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:54.471812  157008 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:54.472033  157008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I0916 10:56:54.472054  157008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-079070-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-079070-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-079070-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:56:54.607946  157008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:56:54.607977  157008 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:56:54.607999  157008 ubuntu.go:177] setting up certificates
	I0916 10:56:54.608012  157008 provision.go:84] configureAuth start
	I0916 10:56:54.608068  157008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m02
	I0916 10:56:54.624810  157008 provision.go:143] copyHostCerts
	I0916 10:56:54.624853  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:56:54.624889  157008 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:56:54.624898  157008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:56:54.624976  157008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:56:54.625066  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:56:54.625086  157008 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:56:54.625094  157008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:56:54.625135  157008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:56:54.625197  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:56:54.625221  157008 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:56:54.625230  157008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:56:54.625263  157008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:56:54.625338  157008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.multinode-079070-m02 san=[127.0.0.1 192.168.67.3 localhost minikube multinode-079070-m02]
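	[annotation] provision.go:117 generates a server certificate signed by the minikube CA with the SAN list shown (127.0.0.1, 192.168.67.3, localhost, minikube, multinode-079070-m02). The sketch below cuts a corner and self-signs instead of signing with ca-key.pem, but populates the same SANs:

	    package main

	    import (
	        "crypto/ecdsa"
	        "crypto/elliptic"
	        "crypto/rand"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        // Self-signed stand-in for the CA-signed server cert above;
	        // the SAN list matches the one logged.
	        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	        if err != nil {
	            panic(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-079070-m02"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
	            DNSNames:     []string{"localhost", "minikube", "multinode-079070-m02"},
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.3")},
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            panic(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }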
	I0916 10:56:54.842419  157008 provision.go:177] copyRemoteCerts
	I0916 10:56:54.842473  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:56:54.842510  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:54.859515  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:56:54.956648  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:56:54.956771  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:56:54.980228  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:56:54.980305  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0916 10:56:55.003269  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:56:55.003367  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:56:55.027071  157008 provision.go:87] duration metric: took 419.04362ms to configureAuth
	I0916 10:56:55.027105  157008 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:56:55.027266  157008 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:56:55.027277  157008 machine.go:96] duration metric: took 3.88228902s to provisionDockerMachine
	I0916 10:56:55.027285  157008 client.go:171] duration metric: took 9.398556633s to LocalClient.Create
	I0916 10:56:55.027302  157008 start.go:167] duration metric: took 9.398616763s to libmachine.API.Create "multinode-079070"
	I0916 10:56:55.027315  157008 start.go:293] postStartSetup for "multinode-079070-m02" (driver="docker")
	I0916 10:56:55.027326  157008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:56:55.027376  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:56:55.027423  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:55.045390  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:56:55.140601  157008 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:56:55.143611  157008 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:56:55.143627  157008 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:56:55.143633  157008 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:56:55.143639  157008 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:56:55.143646  157008 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:56:55.143653  157008 command_runner.go:130] > ID=ubuntu
	I0916 10:56:55.143662  157008 command_runner.go:130] > ID_LIKE=debian
	I0916 10:56:55.143669  157008 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:56:55.143678  157008 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:56:55.143686  157008 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:56:55.143695  157008 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:56:55.143702  157008 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:56:55.143794  157008 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:56:55.143819  157008 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:56:55.143832  157008 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:56:55.143843  157008 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:56:55.143862  157008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:56:55.143922  157008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:56:55.144015  157008 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:56:55.144026  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:56:55.144137  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:56:55.152200  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:56:55.175663  157008 start.go:296] duration metric: took 148.332655ms for postStartSetup
	I0916 10:56:55.176051  157008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m02
	I0916 10:56:55.193621  157008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 10:56:55.193888  157008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:56:55.193928  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:55.211158  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:56:55.300676  157008 command_runner.go:130] > 31%
	I0916 10:56:55.300766  157008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:56:55.305047  157008 command_runner.go:130] > 202G
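
The two probes above size up the node's disk: df -h /var | awk 'NR==2{print $5}' prints the Use% column of the data row (31% here), and df -BG ... {print $4} prints the available space in GiB (202G); awk's NR==2 selects the line below the header. A sketch of the same probe from Go, assuming sh and df on the host, with error handling trimmed:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // diskUsage runs the same df/awk pipeline seen in the log: NR==2 picks
    // the data row under the header, $5 is the Use% column for the mount.
    func diskUsage(path string) (string, error) {
        cmd := exec.Command("sh", "-c",
            fmt.Sprintf("df -h %s | awk 'NR==2{print $5}'", path))
        out, err := cmd.Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        if pct, err := diskUsage("/var"); err == nil {
            fmt.Println("/var usage:", pct) // e.g. "31%"
        }
    }
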
	I0916 10:56:55.305233  157008 start.go:128] duration metric: took 9.678504602s to createHost
	I0916 10:56:55.305256  157008 start.go:83] releasing machines lock for "multinode-079070-m02", held for 9.67864523s
	I0916 10:56:55.305332  157008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m02
	I0916 10:56:55.326806  157008 out.go:177] * Found network options:
	I0916 10:56:55.328472  157008 out.go:177]   - NO_PROXY=192.168.67.2
	W0916 10:56:55.329913  157008 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:56:55.329993  157008 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:56:55.330067  157008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:56:55.330102  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:55.330128  157008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:56:55.330185  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:55.348534  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:56:55.348869  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:56:55.517890  157008 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:56:55.517961  157008 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0916 10:56:55.517972  157008 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:56:55.517983  157008 command_runner.go:130] > Device: efh/239d	Inode: 534561      Links: 1
	I0916 10:56:55.517997  157008 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:56:55.518007  157008 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:56:55.518018  157008 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:56:55.518029  157008 command_runner.go:130] > Change: 2024-09-16 10:23:17.243457347 +0000
	I0916 10:56:55.518035  157008 command_runner.go:130] >  Birth: 2024-09-16 10:23:17.243457347 +0000
	I0916 10:56:55.518112  157008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:56:55.542986  157008 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:56:55.543060  157008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:56:55.570356  157008 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0916 10:56:55.570428  157008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
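
Two pieces of CNI housekeeping happen above: the stock loopback config is patched in place (a "name": "loopback" field is inserted if missing and cniVersion is pinned to 1.0.0), and any bridge/podman configs are renamed with a .mk_disabled suffix so the runtime stops loading them. A sketch of the disable step under those assumptions, with the glob patterns taken from the find command above (needs root on a real node):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // disableBridgeConfs mirrors the disable step in the log: any bridge or
    // podman CNI config in dir is renamed with a .mk_disabled suffix so only
    // the CNI minikube manages stays active.
    func disableBridgeConfs(dir string) ([]string, error) {
        var disabled []string
        for _, pat := range []string{"*bridge*", "*podman*"} {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return nil, err
            }
            for _, m := range matches {
                if filepath.Ext(m) == ".mk_disabled" {
                    continue // already disabled on a previous pass
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, m)
            }
        }
        return disabled, nil
    }

    func main() {
        d, err := disableBridgeConfs("/etc/cni/net.d")
        fmt.Println(d, err)
    }
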
	I0916 10:56:55.570442  157008 start.go:495] detecting cgroup driver to use...
	I0916 10:56:55.570475  157008 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:56:55.570521  157008 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:56:55.582185  157008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:56:55.592876  157008 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:56:55.592926  157008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:56:55.605320  157008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:56:55.618782  157008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:56:55.693454  157008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:56:55.777136  157008 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0916 10:56:55.777171  157008 docker.go:233] disabling docker service ...
	I0916 10:56:55.777222  157008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:56:55.795058  157008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:56:55.806099  157008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:56:55.817155  157008 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0916 10:56:55.881737  157008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:56:55.959443  157008 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0916 10:56:55.959510  157008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:56:55.970376  157008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:56:55.985389  157008 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0916 10:56:55.985465  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:56:55.995204  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:56:56.004736  157008 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:56:56.004802  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:56:56.014108  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:56:56.022860  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:56:56.032372  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:56:56.041762  157008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:56:56.050177  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:56:56.059397  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:56:56.068681  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
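
The run of sed edits above rewrites /etc/containerd/config.toml in place: pin the sandbox (pause) image, disable restrict_oom_score_adj, force SystemdCgroup = false to match the detected cgroupfs driver, migrate v1 and runc-v1 runtime references to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-add enable_unprivileged_ports. A sketch of one such rewrite with the same regex idea as the sed expression (illustrative only, not minikube's code):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setSystemdCgroup mirrors the sed expression in the log:
    //   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
    // i.e. rewrite the flag wherever it appears, preserving indentation.
    func setSystemdCgroup(config string, enabled bool) string {
        re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
        return re.ReplaceAllString(config, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
    }

    func main() {
        in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
        fmt.Print(setSystemdCgroup(in, false))
    }
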
	I0916 10:56:56.078048  157008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:56:56.085949  157008 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:56:56.086011  157008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:56:56.094045  157008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:56.175772  157008 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:56:56.273047  157008 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:56:56.273118  157008 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:56:56.276536  157008 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0916 10:56:56.276576  157008 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:56:56.276586  157008 command_runner.go:130] > Device: f8h/248d	Inode: 175         Links: 1
	I0916 10:56:56.276596  157008 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:56:56.276605  157008 command_runner.go:130] > Access: 2024-09-16 10:56:56.237445775 +0000
	I0916 10:56:56.276615  157008 command_runner.go:130] > Modify: 2024-09-16 10:56:56.237445775 +0000
	I0916 10:56:56.276625  157008 command_runner.go:130] > Change: 2024-09-16 10:56:56.237445775 +0000
	I0916 10:56:56.276635  157008 command_runner.go:130] >  Birth: -
	I0916 10:56:56.276664  157008 start.go:563] Will wait 60s for crictl version
	I0916 10:56:56.276715  157008 ssh_runner.go:195] Run: which crictl
	I0916 10:56:56.279690  157008 command_runner.go:130] > /usr/bin/crictl
	I0916 10:56:56.279801  157008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:56:56.309682  157008 command_runner.go:130] > Version:  0.1.0
	I0916 10:56:56.309708  157008 command_runner.go:130] > RuntimeName:  containerd
	I0916 10:56:56.309717  157008 command_runner.go:130] > RuntimeVersion:  1.7.22
	I0916 10:56:56.309723  157008 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:56:56.311581  157008 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
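
"Will wait 60s for socket path" and "Will wait 60s for crictl version" above are plain polls with a deadline: stat the socket (or run crictl version) until it succeeds or the time runs out. A minimal sketch of the socket wait under that reading:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the deadline passes, the same
    // shape as the 60s wait for /run/containerd/containerd.sock in the log.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
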
	I0916 10:56:56.311630  157008 ssh_runner.go:195] Run: containerd --version
	I0916 10:56:56.332608  157008 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:56:56.334053  157008 ssh_runner.go:195] Run: containerd --version
	I0916 10:56:56.356704  157008 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:56:56.360520  157008 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:56:56.363834  157008 out.go:177]   - env NO_PROXY=192.168.67.2
	I0916 10:56:56.366031  157008 cli_runner.go:164] Run: docker network inspect multinode-079070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:56:56.384346  157008 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:56:56.388431  157008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
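
The hosts update above is deliberately idempotent: grep -v strips any existing line ending in <tab>host.minikube.internal, the fresh mapping is appended, and the result is written to a temp file and copied over /etc/hosts. The same rewrite sketched in Go (no sudo handling; writing the result back is left out):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost rewrites hosts-file content so exactly one line maps name to
    // ip, mirroring the grep -v / echo / cp pipeline in the log.
    func upsertHost(content, ip, name string) string {
        lines := strings.Split(strings.TrimRight(content, "\n"), "\n")
        var keep []string
        for _, line := range lines {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping for this name
            }
            keep = append(keep, line)
        }
        keep = append(keep, ip+"\t"+name)
        return strings.Join(keep, "\n") + "\n"
    }

    func main() {
        data, _ := os.ReadFile("/etc/hosts")
        fmt.Print(upsertHost(string(data), "192.168.67.1", "host.minikube.internal"))
    }
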
	I0916 10:56:56.400156  157008 mustload.go:65] Loading cluster: multinode-079070
	I0916 10:56:56.400377  157008 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:56:56.400592  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:56:56.419058  157008 host.go:66] Checking if "multinode-079070" exists ...
	I0916 10:56:56.419372  157008 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070 for IP: 192.168.67.3
	I0916 10:56:56.419386  157008 certs.go:194] generating shared ca certs ...
	I0916 10:56:56.419404  157008 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:56.419550  157008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:56:56.419602  157008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:56:56.419616  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:56:56.419634  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:56:56.419657  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:56:56.419670  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:56:56.419766  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:56:56.419813  157008 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:56:56.419825  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:56:56.419859  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:56:56.419894  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:56:56.419921  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:56:56.419977  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:56:56.420019  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:56:56.420050  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:56:56.420068  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:56.420093  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:56:56.445256  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:56:56.469387  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:56:56.493622  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:56:56.517743  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:56:56.540533  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:56:56.564648  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:56:56.589157  157008 ssh_runner.go:195] Run: openssl version
	I0916 10:56:56.594895  157008 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:56:56.594997  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:56:56.604974  157008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:56:56.608597  157008 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:56:56.608638  157008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:56:56.608695  157008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:56:56.615370  157008 command_runner.go:130] > 3ec20f2e
	I0916 10:56:56.615451  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:56:56.625401  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:56:56.635119  157008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:56.639267  157008 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:56.639332  157008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:56.639382  157008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:56.646551  157008 command_runner.go:130] > b5213941
	I0916 10:56:56.646739  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:56:56.656780  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:56:56.667334  157008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:56:56.671420  157008 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:56:56.671465  157008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:56:56.671518  157008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:56:56.678595  157008 command_runner.go:130] > 51391683
	I0916 10:56:56.678679  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
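
Each CA above is installed the way OpenSSL expects: openssl x509 -hash -noout prints the certificate's subject hash (3ec20f2e, b5213941 and 51391683 in this run), and a <hash>.0 symlink in /etc/ssl/certs points at the PEM so trust lookups by hash resolve to it. A sketch pairing those two steps, assuming openssl on PATH and write access to the certs directory:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // linkCert computes the OpenSSL subject hash for pem and creates the
    // <hash>.0 symlink in certsDir, the same pairing of commands as the log.
    func linkCert(pem, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
        link := fmt.Sprintf("%s/%s.0", certsDir, hash)
        os.Remove(link) // replace any stale link
        return os.Symlink(pem, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println(err)
        }
    }
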
	I0916 10:56:56.688744  157008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:56:56.692399  157008 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:56:56.692449  157008 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:56:56.692492  157008 kubeadm.go:934] updating node {m02 192.168.67.3 8443 v1.31.1 containerd false true} ...
	I0916 10:56:56.692596  157008 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-079070-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:56:56.692664  157008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:56:56.701739  157008 command_runner.go:130] > kubeadm
	I0916 10:56:56.701763  157008 command_runner.go:130] > kubectl
	I0916 10:56:56.701768  157008 command_runner.go:130] > kubelet
	I0916 10:56:56.701786  157008 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:56:56.701838  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 10:56:56.710811  157008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0916 10:56:56.728467  157008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:56:56.746427  157008 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:56:56.750239  157008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:56:56.761646  157008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:56.839074  157008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:56:56.853245  157008 host.go:66] Checking if "multinode-079070" exists ...
	I0916 10:56:56.853545  157008 start.go:317] joinCluster: &{Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:56:56.853658  157008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:56:56.853716  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:56.873855  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:56:57.025381  157008 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token g05kzm.0mbgqu1p8k523k5h --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 10:56:57.025446  157008 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0916 10:56:57.025486  157008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g05kzm.0mbgqu1p8k523k5h --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=multinode-079070-m02"
	I0916 10:56:57.061284  157008 command_runner.go:130] > [preflight] Running pre-flight checks
	I0916 10:56:57.070829  157008 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:56:57.070862  157008 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:56:57.070870  157008 command_runner.go:130] > OS: Linux
	I0916 10:56:57.070879  157008 command_runner.go:130] > CGROUPS_CPU: enabled
	I0916 10:56:57.070888  157008 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0916 10:56:57.070895  157008 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0916 10:56:57.070903  157008 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0916 10:56:57.070909  157008 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0916 10:56:57.070929  157008 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0916 10:56:57.070940  157008 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0916 10:56:57.070948  157008 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0916 10:56:57.070955  157008 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0916 10:56:57.140017  157008 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0916 10:56:57.140049  157008 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0916 10:56:57.171565  157008 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:56:57.171657  157008 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:56:57.171675  157008 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0916 10:56:57.263400  157008 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:56:57.764408  157008 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.030642ms
	I0916 10:56:57.764444  157008 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0916 10:56:58.275591  157008 command_runner.go:130] > This node has joined the cluster:
	I0916 10:56:58.275619  157008 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0916 10:56:58.275629  157008 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0916 10:56:58.275639  157008 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0916 10:56:58.278537  157008 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:56:58.278582  157008 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:56:58.278611  157008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g05kzm.0mbgqu1p8k523k5h --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=multinode-079070-m02": (1.253111083s)
	I0916 10:56:58.278638  157008 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:56:58.370323  157008 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0916 10:56:58.443859  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-079070-m02 minikube.k8s.io/updated_at=2024_09_16T10_56_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=multinode-079070 minikube.k8s.io/primary=false
	I0916 10:56:58.517626  157008 command_runner.go:130] > node/multinode-079070-m02 labeled
	I0916 10:56:58.517669  157008 start.go:319] duration metric: took 1.664126156s to joinCluster
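
The join itself is two commands: kubeadm token create --print-join-command --ttl=0 on the control plane emits a ready-made join line (endpoint, token, CA cert hash), and that line is replayed on the new node with --ignore-preflight-errors, --cri-socket and --node-name appended. A small sketch of composing the final command from the printed one (string handling only; token and hash below are placeholders, not values from this run):

    package main

    import (
        "fmt"
        "strings"
    )

    // buildJoinCmd appends the node-specific flags minikube adds to the join
    // command printed by "kubeadm token create --print-join-command", as seen
    // in the log above.
    func buildJoinCmd(printed, criSocket, nodeName string) string {
        extra := []string{
            "--ignore-preflight-errors=all",
            "--cri-socket " + criSocket,
            "--node-name=" + nodeName,
        }
        return strings.TrimSpace(printed) + " " + strings.Join(extra, " ")
    }

    func main() {
        printed := "kubeadm join control-plane.minikube.internal:8443 --token <redacted> --discovery-token-ca-cert-hash sha256:<redacted>"
        fmt.Println(buildJoinCmd(printed, "unix:///run/containerd/containerd.sock", "multinode-079070-m02"))
    }
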
	I0916 10:56:58.517728  157008 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0916 10:56:58.518033  157008 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:56:58.519730  157008 out.go:177] * Verifying Kubernetes components...
	I0916 10:56:58.521371  157008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:58.606241  157008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:56:58.619445  157008 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:56:58.619685  157008 kapi.go:59] client config for multinode-079070: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:56:58.619965  157008 node_ready.go:35] waiting up to 6m0s for node "multinode-079070-m02" to be "Ready" ...
	I0916 10:56:58.620039  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:56:58.620044  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:58.620051  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:58.620057  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:58.622365  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:58.622383  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:58.622389  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:58.622393  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:58.622397  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:58.622400  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:58 GMT
	I0916 10:56:58.622406  157008 round_trippers.go:580]     Audit-Id: 9936bf44-7b7c-4713-9369-85e89a62f5b9
	I0916 10:56:58.622411  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:58.622582  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"457","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fiel
dsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4404 chars]
	I0916 10:56:59.120197  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:56:59.120225  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.120236  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.120241  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.122563  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.122586  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.122594  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.122601  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.122607  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.122611  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.122620  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.122625  157008 round_trippers.go:580]     Audit-Id: fc633423-9e11-4284-8f24-560f03599694
	I0916 10:56:59.122736  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"461","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fiel
dsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4809 chars]
	I0916 10:56:59.123038  157008 node_ready.go:49] node "multinode-079070-m02" has status "Ready":"True"
	I0916 10:56:59.123055  157008 node_ready.go:38] duration metric: took 503.07284ms for node "multinode-079070-m02" to be "Ready" ...
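
The round_trippers lines above are client-go's verbose request logging; the node_ready wait is just repeated GETs of the Node object until its Ready condition reports True (satisfied here after one re-poll). A sketch of that loop with client-go, assuming k8s.io/client-go and its API packages are on the module path; the kubeconfig path is illustrative:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the NodeReady condition is True, the check
    // behind the node_ready wait in the log above.
    func nodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Illustrative kubeconfig path; the log loads its own profile config.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait
        for time.Now().Before(deadline) {
            n, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-079070-m02", metav1.GetOptions{})
            if err == nil && nodeReady(n) {
                fmt.Println("node is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for node")
    }
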
	I0916 10:56:59.123065  157008 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:56:59.123124  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:59.123131  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.123138  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.123142  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.126057  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.126083  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.126093  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.126099  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.126103  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.126108  157008 round_trippers.go:580]     Audit-Id: 26235fe9-2148-430e-8713-fcf75bf03afd
	I0916 10:56:59.126112  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.126117  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.126602  157008 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"462"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"411","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 69157 chars]
	I0916 10:56:59.128684  157008 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.128788  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:59.128799  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.128809  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.128815  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.130878  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.130900  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.130909  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.130913  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.130917  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.130922  157008 round_trippers.go:580]     Audit-Id: af410ee3-7bb1-465b-b0bf-bfd5b2616fdb
	I0916 10:56:59.130925  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.130931  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.131161  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"411","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6480 chars]
	I0916 10:56:59.131625  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:59.131640  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.131650  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.131655  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.133536  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.133557  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.133566  157008 round_trippers.go:580]     Audit-Id: 4d67c7a5-338a-494a-8b30-6bfc1a89dd7b
	I0916 10:56:59.133570  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.133573  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.133575  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.133580  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.133582  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.133681  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:59.134042  157008 pod_ready.go:93] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:59.134061  157008 pod_ready.go:82] duration metric: took 5.351776ms for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
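
The pod_ready checks apply the same idea per pod, and repeat below for etcd, kube-apiserver and kube-controller-manager: a pod counts as Ready when its PodReady condition is True, and the extra node GET after each pod confirms where it is scheduled. A minimal, self-contained sketch of the condition check, assuming the k8s.io/api package is available:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // podReady mirrors the pod_ready checks in the log: a pod counts as
    // Ready when its PodReady condition reports True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        p := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionTrue},
        }}}
        fmt.Println(podReady(p)) // true
    }
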
	I0916 10:56:59.134077  157008 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.134147  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-079070
	I0916 10:56:59.134157  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.134168  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.134176  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.136173  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.136192  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.136199  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.136204  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.136208  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.136210  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.136214  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.136220  157008 round_trippers.go:580]     Audit-Id: 33b898f1-b11b-4b70-b3a8-017c9822941a
	I0916 10:56:59.136382  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-079070","namespace":"kube-system","uid":"fdcbee48-0d3b-47d7-acd2-298979345c7a","resourceVersion":"400","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.mirror":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.seen":"2024-09-16T10:56:25.844758522Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6440 chars]
	I0916 10:56:59.136786  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:59.136798  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.136805  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.136809  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.138469  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.138491  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.138501  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.138509  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.138513  157008 round_trippers.go:580]     Audit-Id: 9a0d2a9b-4553-496b-9005-3de9392d37a2
	I0916 10:56:59.138516  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.138520  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.138525  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.138669  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:59.138974  157008 pod_ready.go:93] pod "etcd-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:59.138989  157008 pod_ready.go:82] duration metric: took 4.902844ms for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.139010  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.139068  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:56:59.139076  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.139082  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.139089  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.140826  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.140841  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.140847  157008 round_trippers.go:580]     Audit-Id: a429f0bd-d016-4e91-895b-1ccf679fc242
	I0916 10:56:59.140850  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.140853  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.140858  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.140862  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.140865  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.140980  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"397","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8518 chars]
	I0916 10:56:59.141381  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:59.141393  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.141400  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.141405  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.142988  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.143008  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.143016  157008 round_trippers.go:580]     Audit-Id: f571d816-0825-4252-bccb-c6dd29f4e1b4
	I0916 10:56:59.143023  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.143028  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.143032  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.143036  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.143040  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.143179  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:59.143495  157008 pod_ready.go:93] pod "kube-apiserver-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:59.143511  157008 pod_ready.go:82] duration metric: took 4.489464ms for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.143522  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.143582  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-079070
	I0916 10:56:59.143597  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.143604  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.143613  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.145448  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.145469  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.145479  157008 round_trippers.go:580]     Audit-Id: 46aa8449-a877-4116-a4bc-1cfb4dd84f9f
	I0916 10:56:59.145485  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.145490  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.145495  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.145503  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.145516  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.145644  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-079070","namespace":"kube-system","uid":"44a2f17c-654a-4d64-b474-70ce475c3afe","resourceVersion":"403","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.mirror":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.seen":"2024-09-16T10:56:25.844752735Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8093 chars]
	I0916 10:56:59.146176  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:59.146192  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.146202  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.146214  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.147926  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.147946  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.147956  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.147962  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.147968  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.147973  157008 round_trippers.go:580]     Audit-Id: 5ce70ec4-f46e-4d0b-8ddf-ce58e6b8aa93
	I0916 10:56:59.147977  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.147981  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.148121  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:59.148417  157008 pod_ready.go:93] pod "kube-controller-manager-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:59.148434  157008 pod_ready.go:82] duration metric: took 4.901859ms for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.148443  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.320902  157008 request.go:632] Waited for 172.379789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vhmt
	I0916 10:56:59.320967  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vhmt
	I0916 10:56:59.320976  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.320987  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.320998  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.323200  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.323226  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.323235  157008 round_trippers.go:580]     Audit-Id: c849a5df-3fd2-4e46-aa74-f27996fd7032
	I0916 10:56:59.323238  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.323242  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.323246  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.323249  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.323253  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.323418  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2vhmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"6f3faf85-04e9-4840-855d-dd1ef9d4e463","resourceVersion":"383","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6175 chars]
	I0916 10:56:59.521320  157008 request.go:632] Waited for 197.428621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:59.521424  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:59.521434  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.521446  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.521453  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.523960  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.523980  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.523987  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.523990  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.523994  157008 round_trippers.go:580]     Audit-Id: 597df916-e2b4-4d08-86b0-bc689c536613
	I0916 10:56:59.523998  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.524004  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.524007  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.524096  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:59.524385  157008 pod_ready.go:93] pod "kube-proxy-2vhmt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:59.524399  157008 pod_ready.go:82] duration metric: took 375.950399ms for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.524409  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xkr65" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.720561  157008 request.go:632] Waited for 196.084388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:56:59.720653  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:56:59.720664  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.720676  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.720684  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.722787  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.722812  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.722822  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.722827  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.722832  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.722836  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.722841  157008 round_trippers.go:580]     Audit-Id: f8279b24-8ea5-41bc-8e46-1de936f01c7a
	I0916 10:56:59.722845  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.722975  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"463","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6209 chars]
	I0916 10:56:59.920863  157008 request.go:632] Waited for 197.385392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:56:59.920936  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:56:59.920948  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.920959  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.920968  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.923208  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.923246  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.923255  157008 round_trippers.go:580]     Audit-Id: 6021aae4-70fd-42ee-ac23-12cd73e86e3f
	I0916 10:56:59.923261  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.923266  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.923278  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.923282  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.923289  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.923382  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"461","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fiel
dsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4809 chars]
	I0916 10:57:00.120950  157008 request.go:632] Waited for 95.315232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:57:00.121021  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:57:00.121027  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:00.121036  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:00.121041  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:00.124280  157008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:57:00.124308  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:00.124318  157008 round_trippers.go:580]     Audit-Id: 6db4b81e-d235-49df-9b53-1f099e6c503a
	I0916 10:57:00.124326  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:00.124331  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:00.124337  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:00.124342  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:00.124348  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:00 GMT
	I0916 10:57:00.124480  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"463","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6209 chars]
	I0916 10:57:00.320297  157008 request.go:632] Waited for 195.238107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:57:00.320370  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:57:00.320377  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:00.320387  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:00.320395  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:00.322800  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:00.322822  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:00.322831  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:00 GMT
	I0916 10:57:00.322837  157008 round_trippers.go:580]     Audit-Id: d54911a8-c4b8-4dd5-9147-832b8562e5c2
	I0916 10:57:00.322843  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:00.322848  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:00.322854  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:00.322860  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:00.322979  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"461","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fiel
dsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4809 chars]
	I0916 10:57:00.525428  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:57:00.525455  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:00.525463  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:00.525469  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:00.527839  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:00.527866  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:00.527877  157008 round_trippers.go:580]     Audit-Id: ba062c1f-1886-4ab9-a7f1-035b943a8e99
	I0916 10:57:00.527882  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:00.527887  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:00.527891  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:00.527896  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:00.527924  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:00 GMT
	I0916 10:57:00.528161  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"463","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6209 chars]
	I0916 10:57:00.720949  157008 request.go:632] Waited for 192.313677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:57:00.721102  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:57:00.721121  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:00.721148  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:00.721161  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:00.725156  157008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:57:00.725186  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:00.725195  157008 round_trippers.go:580]     Audit-Id: 3477703f-a059-4cd8-b356-8854f325621a
	I0916 10:57:00.725200  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:00.725206  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:00.725210  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:00.725215  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:00.725219  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:00 GMT
	I0916 10:57:00.725363  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"461","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fiel
dsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4809 chars]
	I0916 10:57:01.025509  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:57:01.025532  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:01.025549  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:01.025554  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:01.027576  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:01.027599  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:01.027608  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:01.027617  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:01 GMT
	I0916 10:57:01.027623  157008 round_trippers.go:580]     Audit-Id: 3f612fa7-8160-4161-96bd-d1e225d5bec1
	I0916 10:57:01.027628  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:01.027634  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:01.027638  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:01.027792  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"473","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6183 chars]
	I0916 10:57:01.120579  157008 request.go:632] Waited for 92.258964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:57:01.120660  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:57:01.120668  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:01.120680  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:01.120689  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:01.123169  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:01.123198  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:01.123211  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:01.123216  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:01.123221  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:01.123227  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:01.123233  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:01 GMT
	I0916 10:57:01.123238  157008 round_trippers.go:580]     Audit-Id: 1de0db49-a15c-4496-b38b-80dc984ea638
	I0916 10:57:01.123339  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"461","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fiel
dsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4809 chars]
	I0916 10:57:01.123709  157008 pod_ready.go:93] pod "kube-proxy-xkr65" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:01.123728  157008 pod_ready.go:82] duration metric: took 1.599312782s for pod "kube-proxy-xkr65" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:01.123772  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:01.321200  157008 request.go:632] Waited for 197.347273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 10:57:01.321278  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 10:57:01.321287  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:01.321295  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:01.321301  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:01.323595  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:01.323619  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:01.323627  157008 round_trippers.go:580]     Audit-Id: fefee779-b734-45a3-8a31-a4ca8cf296c3
	I0916 10:57:01.323632  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:01.323639  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:01.323643  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:01.323648  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:01.323651  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:01 GMT
	I0916 10:57:01.323808  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"395","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4975 chars]
	I0916 10:57:01.520613  157008 request.go:632] Waited for 196.362213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:57:01.520671  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:57:01.520676  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:01.520683  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:01.520687  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:01.523097  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:01.523125  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:01.523132  157008 round_trippers.go:580]     Audit-Id: af1059ea-ae61-4fec-bce6-3ee0b7aa31fb
	I0916 10:57:01.523140  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:01.523143  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:01.523146  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:01.523149  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:01.523153  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:01 GMT
	I0916 10:57:01.523389  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:57:01.523698  157008 pod_ready.go:93] pod "kube-scheduler-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:01.523717  157008 pod_ready.go:82] duration metric: took 399.936355ms for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:01.523731  157008 pod_ready.go:39] duration metric: took 2.400654396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:57:01.523781  157008 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:57:01.523832  157008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:57:01.535208  157008 system_svc.go:56] duration metric: took 11.422237ms WaitForService to wait for kubelet
	I0916 10:57:01.535243  157008 kubeadm.go:582] duration metric: took 3.017488281s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:57:01.535262  157008 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:57:01.720683  157008 request.go:632] Waited for 185.351495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0916 10:57:01.720756  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:57:01.720763  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:01.720773  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:01.720779  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:01.723345  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:01.723368  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:01.723378  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:01 GMT
	I0916 10:57:01.723384  157008 round_trippers.go:580]     Audit-Id: b53cf9d9-acc1-4546-8df1-8d8ea64f26a7
	I0916 10:57:01.723388  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:01.723392  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:01.723396  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:01.723399  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:01.723642  157008 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"477"},"items":[{"metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manag
edFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 10875 chars]
	I0916 10:57:01.724142  157008 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:57:01.724161  157008 node_conditions.go:123] node cpu capacity is 8
	I0916 10:57:01.724173  157008 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:57:01.724181  157008 node_conditions.go:123] node cpu capacity is 8
	I0916 10:57:01.724188  157008 node_conditions.go:105] duration metric: took 188.919976ms to run NodePressure ...
	I0916 10:57:01.724204  157008 start.go:241] waiting for startup goroutines ...
	I0916 10:57:01.724240  157008 start.go:255] writing updated cluster config ...
	I0916 10:57:01.724528  157008 ssh_runner.go:195] Run: rm -f paused
	I0916 10:57:01.731531  157008 out.go:177] * Done! kubectl is now configured to use "multinode-079070" cluster and "default" namespace by default
	E0916 10:57:01.732850  157008 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
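
The loop above is minikube's extra readiness wait: each system pod (kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) and its node are polled via GET until the pod's Ready condition reports "True", and the interleaved "Waited for ... due to client-side throttling, not priority and fairness" lines are consistent with client-go's default client-side rate limit (5 QPS, burst 10). A minimal Go sketch of that pattern follows; it is not the actual minikube code, and the pod name, QPS values, and timeout are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// client-go defaults to 5 QPS / 10 burst; raising them avoids the
	// "Waited for ... due to client-side throttling" delays seen above.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms, up to the same 6m0s budget the log reports.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-xkr65", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}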
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8414e0e62b35b       8c811b4aec35f       29 seconds ago       Running             busybox                   0                   10183dc0f9d0a       busybox-7dff88458-pjlvx
	8954864d99d22       c69fa2e9cbf5f       50 seconds ago       Running             coredns                   0                   fa69986f2f5d5       coredns-7c65d6cfc9-ft9gh
	269042fd7e065       6e38f40d628db       About a minute ago   Running             storage-provisioner       0                   097580079dfa7       storage-provisioner
	de61885ae0251       12968670680f4       About a minute ago   Running             kindnet-cni               0                   a9b3bc3ef2872       kindnet-flmdv
	809210a041e03       60c005f310ff3       About a minute ago   Running             kube-proxy                0                   d6e6b6a3008e8       kube-proxy-2vhmt
	941f1dc8e3837       175ffd71cce3d       About a minute ago   Running             kube-controller-manager   0                   84635e5713cec       kube-controller-manager-multinode-079070
	0bc7fe20ff6ae       2e96e5913fc06       About a minute ago   Running             etcd                      0                   a53811583dd27       etcd-multinode-079070
	5d29b7e4482f8       9aa1fad941575       About a minute ago   Running             kube-scheduler            0                   b33679bbe5cbf       kube-scheduler-multinode-079070
	411c657184dfd       6bab7719df100       About a minute ago   Running             kube-apiserver            0                   c43b3a5fe0f9f       kube-apiserver-multinode-079070
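
The table lists every CRI container on the node, with container, image, and pod-sandbox IDs truncated to 13 characters. Since the runtime here is containerd, roughly the same view can be pulled programmatically; a small sketch with the containerd Go client, assuming the CRI plugin's "k8s.io" namespace and the default socket path:

package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	// Kubernetes-managed containers live in the CRI plugin's "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		img, err := c.Image(ctx)
		if err != nil {
			fmt.Println(c.ID(), "(image unknown)")
			continue
		}
		fmt.Println(c.ID(), img.Name())
	}
}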
	
	
	==> containerd <==
	Sep 16 10:56:42 multinode-079070 containerd[863]: time="2024-09-16T10:56:42.893918371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:56:42 multinode-079070 containerd[863]: time="2024-09-16T10:56:42.893933858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:56:42 multinode-079070 containerd[863]: time="2024-09-16T10:56:42.894036742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:56:42 multinode-079070 containerd[863]: time="2024-09-16T10:56:42.943242149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ft9gh,Uid:8052b6a1-7257-44d4-a318-740afd039d2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa69986f2f5d5faeb3b57e3dd348714100794668735a682dfbb154a829d8612d\""
	Sep 16 10:56:42 multinode-079070 containerd[863]: time="2024-09-16T10:56:42.946054211Z" level=info msg="CreateContainer within sandbox \"fa69986f2f5d5faeb3b57e3dd348714100794668735a682dfbb154a829d8612d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 16 10:56:42 multinode-079070 containerd[863]: time="2024-09-16T10:56:42.958624686Z" level=info msg="CreateContainer within sandbox \"fa69986f2f5d5faeb3b57e3dd348714100794668735a682dfbb154a829d8612d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8954864d99d2239b93c763ac312c7353b37bfba4eb693480619025ea3402616f\""
	Sep 16 10:56:42 multinode-079070 containerd[863]: time="2024-09-16T10:56:42.959194391Z" level=info msg="StartContainer for \"8954864d99d2239b93c763ac312c7353b37bfba4eb693480619025ea3402616f\""
	Sep 16 10:56:43 multinode-079070 containerd[863]: time="2024-09-16T10:56:43.003984911Z" level=info msg="StartContainer for \"8954864d99d2239b93c763ac312c7353b37bfba4eb693480619025ea3402616f\" returns successfully"
	Sep 16 10:57:02 multinode-079070 containerd[863]: time="2024-09-16T10:57:02.668464026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7dff88458-pjlvx,Uid:e697a697-12c1-405c-bc2e-fa881b5fd008,Namespace:default,Attempt:0,}"
	Sep 16 10:57:02 multinode-079070 containerd[863]: time="2024-09-16T10:57:02.705293950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:57:02 multinode-079070 containerd[863]: time="2024-09-16T10:57:02.705365581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:57:02 multinode-079070 containerd[863]: time="2024-09-16T10:57:02.705377176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:57:02 multinode-079070 containerd[863]: time="2024-09-16T10:57:02.705466070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:57:02 multinode-079070 containerd[863]: time="2024-09-16T10:57:02.751280360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7dff88458-pjlvx,Uid:e697a697-12c1-405c-bc2e-fa881b5fd008,Namespace:default,Attempt:0,} returns sandbox id \"10183dc0f9d0a512adcc7b4ca83b964d4c75224cc9c608e780553e39c4cb8d21\""
	Sep 16 10:57:02 multinode-079070 containerd[863]: time="2024-09-16T10:57:02.753499034Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.714040465Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.714991927Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.716562157Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.718928282Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.719456183Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 1.965911634s"
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.719505047Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.721597208Z" level=info msg="CreateContainer within sandbox \"10183dc0f9d0a512adcc7b4ca83b964d4c75224cc9c608e780553e39c4cb8d21\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.733291033Z" level=info msg="CreateContainer within sandbox \"10183dc0f9d0a512adcc7b4ca83b964d4c75224cc9c608e780553e39c4cb8d21\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"8414e0e62b35baa7bf8703924991d3cd9f3e9132c0609f0ef74a8091678aefea\""
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.733905454Z" level=info msg="StartContainer for \"8414e0e62b35baa7bf8703924991d3cd9f3e9132c0609f0ef74a8091678aefea\""
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.805432362Z" level=info msg="StartContainer for \"8414e0e62b35baa7bf8703924991d3cd9f3e9132c0609f0ef74a8091678aefea\" returns successfully"
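
The containerd log traces the CRI pod-startup lifecycle for the busybox test pod: RunPodSandbox returns a sandbox ID, PullImage fetches gcr.io/k8s-minikube/busybox:1.28 (the ImageCreate events; the pull took ~1.97s), then CreateContainer and StartContainer run the workload inside that sandbox. The pull step can be reproduced with the containerd Go client; a minimal sketch, again assuming the "k8s.io" namespace and default socket:

package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// Pull and unpack in one step, mirroring the PullImage/ImageCreate
	// events logged above for gcr.io/k8s-minikube/busybox:1.28.
	img, err := client.Pull(ctx, "gcr.io/k8s-minikube/busybox:1.28", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name())
}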
	
	
	==> coredns [8954864d99d2239b93c763ac312c7353b37bfba4eb693480619025ea3402616f] <==
	[INFO] 10.244.0.3:51056 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102475s
	[INFO] 10.244.1.2:41548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178899s
	[INFO] 10.244.1.2:39453 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001782363s
	[INFO] 10.244.1.2:56115 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130511s
	[INFO] 10.244.1.2:37210 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101251s
	[INFO] 10.244.1.2:55581 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001236938s
	[INFO] 10.244.1.2:35975 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081083s
	[INFO] 10.244.1.2:42877 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073809s
	[INFO] 10.244.1.2:41783 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084902s
	[INFO] 10.244.0.3:55155 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116031s
	[INFO] 10.244.0.3:59444 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115061s
	[INFO] 10.244.0.3:34308 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088507s
	[INFO] 10.244.0.3:40765 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088438s
	[INFO] 10.244.1.2:59446 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204406s
	[INFO] 10.244.1.2:52620 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138315s
	[INFO] 10.244.1.2:51972 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105158s
	[INFO] 10.244.1.2:47877 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087457s
	[INFO] 10.244.0.3:45741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142885s
	[INFO] 10.244.0.3:32935 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169213s
	[INFO] 10.244.0.3:49721 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000165206s
	[INFO] 10.244.0.3:45554 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109895s
	[INFO] 10.244.1.2:44123 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168559s
	[INFO] 10.244.1.2:55322 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107325s
	[INFO] 10.244.1.2:36098 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102498s
	[INFO] 10.244.1.2:57704 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095141s
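
Each coredns line follows the log plugin's common format: client ip:port, query id, then in quotes the query type, class, name, protocol, request size, DNSSEC DO bit, and advertised UDP buffer size, followed by the response code, response header flags (qr = response, aa = authoritative answer, rd = recursion desired, ra = recursion available), response size in bytes, and the lookup duration. A throwaway Go sketch parsing that layout; the field order is assumed from the samples above:

package main

import (
	"fmt"
	"regexp"
)

// One capture group per field of the assumed log layout.
var line = regexp.MustCompile(
	`\[INFO\] ([\d.]+:\d+) - (\d+) "(\S+) (\S+) (\S+) (\S+) (\d+) (\S+) (\d+)" (\S+) (\S+) (\d+) (\S+)`)

func main() {
	sample := `[INFO] 10.244.1.2:39453 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001782363s`
	m := line.FindStringSubmatch(sample)
	if m == nil {
		panic("no match")
	}
	fmt.Printf("client=%s id=%s qtype=%s name=%s proto=%s rcode=%s flags=%s took=%s\n",
		m[1], m[2], m[3], m[5], m[6], m[10], m[11], m[13])
}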
	
	
	==> describe nodes <==
	Name:               multinode-079070
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-079070
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-079070
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_56_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:56:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-079070
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:57:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:57:27 +0000   Mon, 16 Sep 2024 10:56:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:57:27 +0000   Mon, 16 Sep 2024 10:56:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:57:27 +0000   Mon, 16 Sep 2024 10:56:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:57:27 +0000   Mon, 16 Sep 2024 10:56:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-079070
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 36f88572435b40548db739493820dc2c
	  System UUID:                aacf5fc8-9d89-4df8-b6e3-7265bb86b554
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pjlvx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 coredns-7c65d6cfc9-ft9gh                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     63s
	  kube-system                 etcd-multinode-079070                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         68s
	  kube-system                 kindnet-flmdv                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      63s
	  kube-system                 kube-apiserver-multinode-079070             250m (3%)     0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-controller-manager-multinode-079070    200m (2%)     0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-proxy-2vhmt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-scheduler-multinode-079070             100m (1%)     0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 61s   kube-proxy       
	  Normal   Starting                 69s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  68s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  68s   kubelet          Node multinode-079070 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    68s   kubelet          Node multinode-079070 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     68s   kubelet          Node multinode-079070 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           64s   node-controller  Node multinode-079070 event: Registered Node multinode-079070 in Controller
	
	
	Name:               multinode-079070-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-079070-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-079070
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_56_58_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:56:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-079070-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:57:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:57:28 +0000   Mon, 16 Sep 2024 10:56:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:57:28 +0000   Mon, 16 Sep 2024 10:56:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:57:28 +0000   Mon, 16 Sep 2024 10:56:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:57:28 +0000   Mon, 16 Sep 2024 10:56:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.3
	  Hostname:    multinode-079070-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 e82148e92f1f47e3b3415e006f73af99
	  System UUID:                230f6bd5-a1b9-46e1-be41-9ec64c608739
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x6h7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kindnet-fs5x4              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      37s
	  kube-system                 kube-proxy-xkr65           0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 33s                kube-proxy       
	  Warning  CgroupV1                 37s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  37s (x2 over 37s)  kubelet          Node multinode-079070-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    37s (x2 over 37s)  kubelet          Node multinode-079070-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     37s (x2 over 37s)  kubelet          Node multinode-079070-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  37s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                36s                kubelet          Node multinode-079070-m02 status is now: NodeReady
	  Normal   RegisteredNode           34s                node-controller  Node multinode-079070-m02 event: Registered Node multinode-079070-m02 in Controller
	
	
	Name:               multinode-079070-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-079070-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-079070
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_57_29_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:57:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-079070-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:57:30 +0000   Mon, 16 Sep 2024 10:57:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:57:30 +0000   Mon, 16 Sep 2024 10:57:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:57:30 +0000   Mon, 16 Sep 2024 10:57:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:57:30 +0000   Mon, 16 Sep 2024 10:57:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.4
	  Hostname:    multinode-079070-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 6c5be3213cc44d348e6d04cea0a83d80
	  System UUID:                63b9436a-6158-4245-a785-e7aa6a2fcca8
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kxnzq       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5s
	  kube-system                 kube-proxy-9z4qh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 3s               kube-proxy       
	  Normal  NodeHasSufficientMemory  5s (x2 over 5s)  kubelet          Node multinode-079070-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5s (x2 over 5s)  kubelet          Node multinode-079070-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5s (x2 over 5s)  kubelet          Node multinode-079070-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5s               kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s               node-controller  Node multinode-079070-m03 event: Registered Node multinode-079070-m03 in Controller
	  Normal  NodeReady                4s               kubelet          Node multinode-079070-m03 status is now: NodeReady
	
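The three node dumps above are in kubectl describe node format; the failing MultiNodeLabels assertion targets the minikube.k8s.io/* entries in each Labels block. Assuming a working kubectl against this cluster, the same labels can be listed in one shot:

  # One line per node, with its full label set appended.
  kubectl get nodes --show-labels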
	
	==> dmesg <==
	[  +0.095971] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +5.951420] net_ratelimit: 6 callbacks suppressed
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.256004] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000002] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +7.935271] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.003990] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000004] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.255992] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	
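The "martian source" entries above are the kernel flagging packets whose source address (here 10.96.0.1, the kubernetes service IP) is not valid on the receiving docker bridge; in a nested-container CI setup like this one they are routing noise rather than a fault. Their logging is controlled by a sysctl, so a quick check (set it to 0 to silence) is:

  # Current martian-packet logging setting for all interfaces.
  sysctl net.ipv4.conf.all.log_martians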
	
	==> etcd [0bc7fe20ff6ae92cd3f996cddadca6ddb2788e2f661cd3c4b2f9fb33045bed71] <==
	{"level":"info","ts":"2024-09-16T10:56:21.548252Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:56:21.548288Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:56:21.548321Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:56:21.548342Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:56:22.035573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T10:56:22.035616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T10:56:22.035652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2024-09-16T10:56:22.035677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:56:22.035691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-09-16T10:56:22.035701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:56:22.035712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-09-16T10:56:22.036773Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-079070 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:56:22.036773Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:56:22.036802Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:56:22.036801Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:56:22.037065Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:56:22.037130Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:56:22.037464Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:56:22.037668Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:56:22.037772Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:56:22.037989Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:56:22.037989Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:56:22.038884Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-09-16T10:56:22.038985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:56:48.788053Z","caller":"traceutil/trace.go:171","msg":"trace[1037408987] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"200.765868ms","start":"2024-09-16T10:56:48.587270Z","end":"2024-09-16T10:56:48.788036Z","steps":["trace[1037408987] 'process raft request'  (duration: 200.648474ms)"],"step_count":1}
	
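The etcd log above is a clean single-member startup: the node pre-votes, votes for itself, and becomes leader at term 2, after which client traffic is served on 2379. The closing trace entry appears because that transaction took ~200ms, past etcd's default 100ms trace threshold. A health probe sketch (certificate paths assume minikube's layout and may differ):

  # Query member health over the advertised client URL.
  etcdctl --endpoints=https://192.168.67.2:2379 \
    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
    --cert=/var/lib/minikube/certs/etcd/server.crt \
    --key=/var/lib/minikube/certs/etcd/server.key \
    endpoint health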
	
	==> kernel <==
	 10:57:34 up 39 min,  0 users,  load average: 0.98, 1.33, 1.10
	Linux multinode-079070 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [de61885ae02518041c7aa7ce71f66fe6f83e66c09666b89a7765dd6c5955ef2e] <==
	I0916 10:56:33.120747       1 controller.go:374] Syncing nftables rules
	I0916 10:56:42.821443       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:56:42.821514       1 main.go:299] handling current node
	I0916 10:56:52.821087       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:56:52.821131       1 main.go:299] handling current node
	I0916 10:57:02.820282       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:57:02.820323       1 main.go:299] handling current node
	I0916 10:57:02.820338       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:57:02.820343       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:57:02.820543       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.67.3 Flags: [] Table: 0} 
	I0916 10:57:12.827892       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:57:12.827929       1 main.go:299] handling current node
	I0916 10:57:12.827945       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:57:12.827950       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:57:22.822406       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:57:22.822444       1 main.go:299] handling current node
	I0916 10:57:22.822468       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:57:22.822491       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:57:32.820303       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:57:32.820363       1 main.go:299] handling current node
	I0916 10:57:32.820385       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:57:32.820394       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:57:32.820565       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:57:32.820582       1 main.go:322] Node multinode-079070-m03 has CIDR [10.244.2.0/24] 
	I0916 10:57:32.820644       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.67.4 Flags: [] Table: 0} 
	
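kindnet's role is visible directly in its log: it reconciles the node list every ten seconds, and as m02 and m03 join it installs a route to each node's pod CIDR via that node's internal IP. Assuming shell access to the primary node through minikube, the resulting routes can be confirmed:

  # Expect 10.244.1.0/24 via 192.168.67.3 and 10.244.2.0/24 via 192.168.67.4.
  minikube -p multinode-079070 ssh "ip route show"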
	
	==> kube-apiserver [411c657184dfd15c5a637bda842998291203948392b41c07d2e8b35719214e87] <==
	I0916 10:56:24.478924       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 10:56:24.483098       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 10:56:24.483123       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:56:24.887180       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:56:24.923351       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:56:25.030521       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 10:56:25.037379       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0916 10:56:25.038608       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:56:25.042579       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:56:25.548706       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:56:25.953503       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:56:25.964413       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:56:25.974975       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:56:31.130667       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:56:31.150004       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0916 10:57:18.122976       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35682: use of closed network connection
	E0916 10:57:18.268644       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35704: use of closed network connection
	E0916 10:57:18.422165       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35718: use of closed network connection
	E0916 10:57:18.568802       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35742: use of closed network connection
	E0916 10:57:18.713040       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35752: use of closed network connection
	E0916 10:57:18.854979       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35772: use of closed network connection
	E0916 10:57:19.111050       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35800: use of closed network connection
	E0916 10:57:19.253105       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35822: use of closed network connection
	E0916 10:57:19.403005       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35840: use of closed network connection
	E0916 10:57:19.547708       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35870: use of closed network connection
	
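The only errors in the apiserver log are "use of closed network connection" reads on sockets from 192.168.67.1 (the host side of the docker network), which is what abruptly disconnecting clients look like from the server; they do not by themselves indicate an unhealthy apiserver. Assuming kubectl works against this cluster, that can be confirmed directly:

  # /readyz aggregates the apiserver's own health checks.
  kubectl get --raw '/readyz?verbose'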
	
	==> kube-controller-manager [941f1dc8e383770d56fc04131cd6e118a0b22f2035d16d7cd123273e0f80863c] <==
	I0916 10:56:58.744283       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m02"
	I0916 10:57:00.349486       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-079070-m02"
	I0916 10:57:02.363803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.564249ms"
	I0916 10:57:02.368707       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.838562ms"
	I0916 10:57:02.368809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.079µs"
	I0916 10:57:02.373194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.291µs"
	I0916 10:57:02.377414       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.53µs"
	I0916 10:57:05.024697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.41303ms"
	I0916 10:57:05.024803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.231µs"
	I0916 10:57:17.736156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.686451ms"
	I0916 10:57:17.736242       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.644µs"
	I0916 10:57:27.562728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070"
	I0916 10:57:28.354952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m02"
	I0916 10:57:29.133862       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-079070-m03\" does not exist"
	I0916 10:57:29.133865       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-079070-m02"
	I0916 10:57:29.139577       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-079070-m03" podCIDRs=["10.244.2.0/24"]
	I0916 10:57:29.139620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:29.139698       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:29.145420       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:29.203923       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:29.443782       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:30.057860       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:30.057909       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-079070-m02"
	I0916 10:57:30.065546       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:30.353600       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-079070-m03"
	
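This controller-manager log captures the node-ipam controller handing out pod CIDRs as each node registers (10.244.2.0/24 for m03 at 10:57:29), matching the PodCIDR fields in the node dumps above. The assignments can be read back from the API:

  # One row per node: its name and the CIDR the ipam controller assigned.
  kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR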
	
	==> kube-proxy [809210a041e030e61062aa021eb36041df90e322c3257f94c546c420614699bc] <==
	I0916 10:56:32.029982       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:56:32.179672       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	E0916 10:56:32.179750       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:56:32.234955       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:56:32.235009       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:56:32.237569       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:56:32.237995       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:56:32.238032       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:56:32.239678       1 config.go:199] "Starting service config controller"
	I0916 10:56:32.239727       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:56:32.239777       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:56:32.239783       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:56:32.240007       1 config.go:328] "Starting node config controller"
	I0916 10:56:32.240016       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:56:32.340062       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:56:32.340082       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:56:32.340144       1 shared_informer.go:320] Caches are synced for node config
	
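kube-proxy comes up cleanly in iptables mode; its one complaint is advisory: with nodePortAddresses unset, NodePort services accept connections on every local IP. The remedy the message itself names is a kube-proxy flag (wiring it through minikube's configuration is not shown here):

  # Restrict NodePort listeners to the node's primary IP family addresses.
  kube-proxy --nodeport-addresses primary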
	
	==> kube-scheduler [5d29b7e4482f874fecde10cfcd42e99ca36d060f25d2e8e7a8110ea495ea8583] <==
	W0916 10:56:23.626494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:56:23.626538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:23.626572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:56:23.626619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:23.626626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:56:23.626673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:23.626691       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:56:23.626732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:23.626711       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:56:23.626782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.460004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:56:24.460050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.468721       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:56:24.468769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.515374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:56:24.515416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.539117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:56:24.539157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.708195       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:56:24.708249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.711434       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:56:24.711474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.728071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:56:24.728136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:56:25.122409       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
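The run of "forbidden" warnings above is the scheduler's informers listing resources before the system:kube-scheduler RBAC bindings had been reconciled; the closing "Caches are synced" line shows the lists succeeded once the grants landed, so this is startup noise. Had it persisted, the permissions could be probed per resource:

  # SubjectAccessReview: may the scheduler identity list this resource?
  kubectl auth can-i list persistentvolumeclaims --as=system:kube-scheduler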
	
	==> kubelet <==
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: I0916 10:56:31.524491    1627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8fvl\" (UniqueName: \"kubernetes.io/projected/3bfb600a-3b88-4834-beac-acc911b78ef1-kube-api-access-j8fvl\") pod \"coredns-7c65d6cfc9-ql4g8\" (UID: \"3bfb600a-3b88-4834-beac-acc911b78ef1\") " pod="kube-system/coredns-7c65d6cfc9-ql4g8"
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: I0916 10:56:31.524520    1627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnfv2\" (UniqueName: \"kubernetes.io/projected/8052b6a1-7257-44d4-a318-740afd039d2c-kube-api-access-nnfv2\") pod \"coredns-7c65d6cfc9-ft9gh\" (UID: \"8052b6a1-7257-44d4-a318-740afd039d2c\") " pod="kube-system/coredns-7c65d6cfc9-ft9gh"
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: E0916 10:56:31.828082    1627 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"601406ecb567f37a2cd3807f254a99f738c45d74d3eb998856b79fc12b5a5c0e\": failed to find network info for sandbox \"601406ecb567f37a2cd3807f254a99f738c45d74d3eb998856b79fc12b5a5c0e\""
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: E0916 10:56:31.828160    1627 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"601406ecb567f37a2cd3807f254a99f738c45d74d3eb998856b79fc12b5a5c0e\": failed to find network info for sandbox \"601406ecb567f37a2cd3807f254a99f738c45d74d3eb998856b79fc12b5a5c0e\"" pod="kube-system/coredns-7c65d6cfc9-ql4g8"
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: E0916 10:56:31.849132    1627 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\": failed to find network info for sandbox \"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\""
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: E0916 10:56:31.849223    1627 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\": failed to find network info for sandbox \"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\"" pod="kube-system/coredns-7c65d6cfc9-ft9gh"
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: E0916 10:56:31.849254    1627 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\": failed to find network info for sandbox \"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\"" pod="kube-system/coredns-7c65d6cfc9-ft9gh"
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: E0916 10:56:31.849316    1627 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-ft9gh_kube-system(8052b6a1-7257-44d4-a318-740afd039d2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-ft9gh_kube-system(8052b6a1-7257-44d4-a318-740afd039d2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\\\": failed to find network info for sandbox \\\"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\\\"\"" pod="kube-system/coredns-7c65d6cfc9-ft9gh" podUID="8052b6a1-7257-44d4-a318-740afd039d2c"
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.134826    1627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8fvl\" (UniqueName: \"kubernetes.io/projected/3bfb600a-3b88-4834-beac-acc911b78ef1-kube-api-access-j8fvl\") pod \"3bfb600a-3b88-4834-beac-acc911b78ef1\" (UID: \"3bfb600a-3b88-4834-beac-acc911b78ef1\") "
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.134913    1627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bfb600a-3b88-4834-beac-acc911b78ef1-config-volume\") pod \"3bfb600a-3b88-4834-beac-acc911b78ef1\" (UID: \"3bfb600a-3b88-4834-beac-acc911b78ef1\") "
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.134985    1627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vbbr\" (UniqueName: \"kubernetes.io/projected/43862f2e-c773-468d-ab03-8b0bc0633ad4-kube-api-access-8vbbr\") pod \"storage-provisioner\" (UID: \"43862f2e-c773-468d-ab03-8b0bc0633ad4\") " pod="kube-system/storage-provisioner"
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.135018    1627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/43862f2e-c773-468d-ab03-8b0bc0633ad4-tmp\") pod \"storage-provisioner\" (UID: \"43862f2e-c773-468d-ab03-8b0bc0633ad4\") " pod="kube-system/storage-provisioner"
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.135359    1627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bfb600a-3b88-4834-beac-acc911b78ef1-config-volume" (OuterVolumeSpecName: "config-volume") pod "3bfb600a-3b88-4834-beac-acc911b78ef1" (UID: "3bfb600a-3b88-4834-beac-acc911b78ef1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.137072    1627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bfb600a-3b88-4834-beac-acc911b78ef1-kube-api-access-j8fvl" (OuterVolumeSpecName: "kube-api-access-j8fvl") pod "3bfb600a-3b88-4834-beac-acc911b78ef1" (UID: "3bfb600a-3b88-4834-beac-acc911b78ef1"). InnerVolumeSpecName "kube-api-access-j8fvl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.235816    1627 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bfb600a-3b88-4834-beac-acc911b78ef1-config-volume\") on node \"multinode-079070\" DevicePath \"\""
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.235869    1627 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j8fvl\" (UniqueName: \"kubernetes.io/projected/3bfb600a-3b88-4834-beac-acc911b78ef1-kube-api-access-j8fvl\") on node \"multinode-079070\" DevicePath \"\""
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.949310    1627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=0.949290109 podStartE2EDuration="949.290109ms" podCreationTimestamp="2024-09-16 10:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:56:32.949138814 +0000 UTC m=+7.174161919" watchObservedRunningTime="2024-09-16 10:56:32.949290109 +0000 UTC m=+7.174313215"
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.960140    1627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-flmdv" podStartSLOduration=1.960118484 podStartE2EDuration="1.960118484s" podCreationTimestamp="2024-09-16 10:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:56:32.959956196 +0000 UTC m=+7.184979302" watchObservedRunningTime="2024-09-16 10:56:32.960118484 +0000 UTC m=+7.185141588"
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.970526    1627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2vhmt" podStartSLOduration=1.970499873 podStartE2EDuration="1.970499873s" podCreationTimestamp="2024-09-16 10:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:56:32.970065634 +0000 UTC m=+7.195088740" watchObservedRunningTime="2024-09-16 10:56:32.970499873 +0000 UTC m=+7.195522979"
	Sep 16 10:56:33 multinode-079070 kubelet[1627]: I0916 10:56:33.861371    1627 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bfb600a-3b88-4834-beac-acc911b78ef1" path="/var/lib/kubelet/pods/3bfb600a-3b88-4834-beac-acc911b78ef1/volumes"
	Sep 16 10:56:36 multinode-079070 kubelet[1627]: I0916 10:56:36.446666    1627 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:56:36 multinode-079070 kubelet[1627]: I0916 10:56:36.447540    1627 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:56:43 multinode-079070 kubelet[1627]: I0916 10:56:43.977616    1627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ft9gh" podStartSLOduration=12.977590048 podStartE2EDuration="12.977590048s" podCreationTimestamp="2024-09-16 10:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:56:43.977398293 +0000 UTC m=+18.202421390" watchObservedRunningTime="2024-09-16 10:56:43.977590048 +0000 UTC m=+18.202613185"
	Sep 16 10:57:02 multinode-079070 kubelet[1627]: I0916 10:57:02.501783    1627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2hmg\" (UniqueName: \"kubernetes.io/projected/e697a697-12c1-405c-bc2e-fa881b5fd008-kube-api-access-q2hmg\") pod \"busybox-7dff88458-pjlvx\" (UID: \"e697a697-12c1-405c-bc2e-fa881b5fd008\") " pod="default/busybox-7dff88458-pjlvx"
	Sep 16 10:57:05 multinode-079070 kubelet[1627]: I0916 10:57:05.019767    1627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-pjlvx" podStartSLOduration=1.052026777 podStartE2EDuration="3.019715123s" podCreationTimestamp="2024-09-16 10:57:02 +0000 UTC" firstStartedPulling="2024-09-16 10:57:02.752765721 +0000 UTC m=+36.977788820" lastFinishedPulling="2024-09-16 10:57:04.720454068 +0000 UTC m=+38.945477166" observedRunningTime="2024-09-16 10:57:05.019580623 +0000 UTC m=+39.244603730" watchObservedRunningTime="2024-09-16 10:57:05.019715123 +0000 UTC m=+39.244738229"
	
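The kubelet's early "failed to find network info for sandbox" errors cover the window before kindnet had written a CNI config and the node had a pod CIDR; once the "Updating Pod CIDR" lines record 10.244.0.0/24, sandbox creation succeeds and coredns and the busybox pod report started. If that window never closed, the first check would be the CNI config directory on the node:

  # /etc/cni/net.d is the runtime's default CNI config location.
  minikube -p multinode-079070 ssh "ls /etc/cni/net.d"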

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-079070 -n multinode-079070
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-079070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context multinode-079070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (529.721µs)
helpers_test.go:263: kubectl --context multinode-079070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiNode/serial/MultiNodeLabels (2.00s)
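
Every kubectl step in this group fails instantly with "fork/exec /usr/local/bin/kubectl: exec format error": the kernel refused to execute the binary, which almost always means a kubectl built for a different architecture was installed on this runner. A minimal check, assuming shell access to the host:

  # file reports the binary's target architecture; compare against the host's.
  file /usr/local/bin/kubectl
  uname -m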

TestMultiNode/serial/StartAfterStop (10.56s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-079070 node start m03 -v=7 --alsologtostderr: (7.858487738s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
multinode_test.go:306: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (518.744µs)
multinode_test.go:308: failed to kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-079070
helpers_test.go:235: (dbg) docker inspect multinode-079070:

-- stdout --
	[
	    {
	        "Id": "1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2",
	        "Created": "2024-09-16T10:56:12.200290899Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 157680,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:56:12.309897613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/hosts",
	        "LogPath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2-json.log",
	        "Name": "/multinode-079070",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-079070:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-079070",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-079070",
	                "Source": "/var/lib/docker/volumes/multinode-079070/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-079070",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-079070",
	                "name.minikube.sigs.k8s.io": "multinode-079070",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a560079fd7a6ca102362f0cdf2062b82a677a42b7b5efbb4988b26509a1f350a",
	            "SandboxKey": "/var/run/docker/netns/a560079fd7a6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32908"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32909"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32912"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32910"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32911"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-079070": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null,
	                    "NetworkID": "49585fce923a48b44636990469ad4decadcc5b1b88fcdd63ced7ebb1e3971b52",
	                    "EndpointID": "01c8b09cda6dc7f6b7f0ccee5666ccccb7fa2d2fc265a3505bf1c12e7ef0dc1b",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "multinode-079070",
	                        "1f3af6522540"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
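
The inspect dump above ends here. For anyone reproducing this check outside the test harness, here is a minimal Go sketch that reads the same fields through the Docker Engine SDK (github.com/docker/docker/client); the container name, memory limit, and port bindings correspond to the dump above, and error handling is trimmed to essentials:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/docker/docker/client"
    )

    func main() {
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer cli.Close()

    	// Same data as `docker container inspect multinode-079070` above.
    	insp, err := cli.ContainerInspect(context.Background(), "multinode-079070")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("memory limit:", insp.HostConfig.Memory) // 2306867200 in the dump above
    	for name, ep := range insp.NetworkSettings.Networks {
    		fmt.Printf("network %s: ip=%s gateway=%s\n", name, ep.IPAddress, ep.Gateway)
    	}
    	for port, bindings := range insp.NetworkSettings.Ports {
    		for _, b := range bindings {
    			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
    		}
    	}
    }

Run against the same daemon, this prints the 2306867200-byte memory limit and the 127.0.0.1 port bindings (32908-32912) shown in the dump.
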
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-079070 -n multinode-079070
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-079070 logs -n 25: (1.230245698s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-079070 cp multinode-079070:/home/docker/cp-test.txt                           | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03:/home/docker/cp-test_multinode-079070_multinode-079070-m03.txt     |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070 sudo cat                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n multinode-079070-m03 sudo cat                                   | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /home/docker/cp-test_multinode-079070_multinode-079070-m03.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp testdata/cp-test.txt                                                | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m02:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp multinode-079070-m02:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile475858721/001/cp-test_multinode-079070-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp multinode-079070-m02:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070:/home/docker/cp-test_multinode-079070-m02_multinode-079070.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n multinode-079070 sudo cat                                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /home/docker/cp-test_multinode-079070-m02_multinode-079070.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp multinode-079070-m02:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03:/home/docker/cp-test_multinode-079070-m02_multinode-079070-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n multinode-079070-m03 sudo cat                                   | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /home/docker/cp-test_multinode-079070-m02_multinode-079070-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp testdata/cp-test.txt                                                | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp multinode-079070-m03:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile475858721/001/cp-test_multinode-079070-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp multinode-079070-m03:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070:/home/docker/cp-test_multinode-079070-m03_multinode-079070.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n multinode-079070 sudo cat                                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /home/docker/cp-test_multinode-079070-m03_multinode-079070.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp multinode-079070-m03:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m02:/home/docker/cp-test_multinode-079070-m03_multinode-079070-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n multinode-079070-m02 sudo cat                                   | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /home/docker/cp-test_multinode-079070-m03_multinode-079070-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-079070 node stop m03                                                          | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	| node    | multinode-079070 node start                                                             | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:56:06
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:56:06.855156  157008 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:56:06.855263  157008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:56:06.855270  157008 out.go:358] Setting ErrFile to fd 2...
	I0916 10:56:06.855274  157008 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:56:06.855452  157008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:56:06.856103  157008 out.go:352] Setting JSON to false
	I0916 10:56:06.857043  157008 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2311,"bootTime":1726481856,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:56:06.857147  157008 start.go:139] virtualization: kvm guest
	I0916 10:56:06.859338  157008 out.go:177] * [multinode-079070] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:56:06.861126  157008 notify.go:220] Checking for updates...
	I0916 10:56:06.861141  157008 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:56:06.862675  157008 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:56:06.864295  157008 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:56:06.865662  157008 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:56:06.866835  157008 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:56:06.868151  157008 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:56:06.869617  157008 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:56:06.892121  157008 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:56:06.892220  157008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:56:06.943619  157008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:56:06.934377277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:56:06.943724  157008 docker.go:318] overlay module found
	I0916 10:56:06.945405  157008 out.go:177] * Using the docker driver based on user configuration
	I0916 10:56:06.946509  157008 start.go:297] selected driver: docker
	I0916 10:56:06.946521  157008 start.go:901] validating driver "docker" against <nil>
	I0916 10:56:06.946533  157008 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:56:06.947259  157008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:56:06.995087  157008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:56:06.986178566 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:56:06.995247  157008 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:56:06.995479  157008 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:56:06.997194  157008 out.go:177] * Using Docker driver with root privileges
	I0916 10:56:06.998684  157008 cni.go:84] Creating CNI manager for ""
	I0916 10:56:06.998744  157008 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0916 10:56:06.998754  157008 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:56:06.998838  157008 start.go:340] cluster config:
	{Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:56:07.000232  157008 out.go:177] * Starting "multinode-079070" primary control-plane node in "multinode-079070" cluster
	I0916 10:56:07.001648  157008 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:56:07.002874  157008 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:56:07.004023  157008 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:56:07.004052  157008 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:56:07.004064  157008 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:56:07.004088  157008 cache.go:56] Caching tarball of preloaded images
	I0916 10:56:07.004166  157008 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:56:07.004180  157008 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:56:07.004506  157008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 10:56:07.004528  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json: {Name:mk1da92c3cc279d70ea91ed70bd44957fd57d510 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 10:56:07.023941  157008 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:56:07.023962  157008 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:56:07.024032  157008 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:56:07.024049  157008 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:56:07.024053  157008 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:56:07.024059  157008 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:56:07.024066  157008 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:56:07.025089  157008 image.go:273] response: 
	I0916 10:56:07.076745  157008 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:56:07.076789  157008 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:56:07.076819  157008 start.go:360] acquireMachinesLock for multinode-079070: {Name:mka8d048a8e19e1d22189c5e81470c7f2336c084 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:56:07.076924  157008 start.go:364] duration metric: took 86.301µs to acquireMachinesLock for "multinode-079070"
	I0916 10:56:07.076948  157008 start.go:93] Provisioning new machine with config: &{Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:56:07.077038  157008 start.go:125] createHost starting for "" (driver="docker")
	I0916 10:56:07.078848  157008 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:56:07.079090  157008 start.go:159] libmachine.API.Create for "multinode-079070" (driver="docker")
	I0916 10:56:07.079122  157008 client.go:168] LocalClient.Create starting
	I0916 10:56:07.079181  157008 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:56:07.079213  157008 main.go:141] libmachine: Decoding PEM data...
	I0916 10:56:07.079230  157008 main.go:141] libmachine: Parsing certificate...
	I0916 10:56:07.079285  157008 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:56:07.079306  157008 main.go:141] libmachine: Decoding PEM data...
	I0916 10:56:07.079316  157008 main.go:141] libmachine: Parsing certificate...
	I0916 10:56:07.079616  157008 cli_runner.go:164] Run: docker network inspect multinode-079070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 10:56:07.096186  157008 cli_runner.go:211] docker network inspect multinode-079070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 10:56:07.096287  157008 network_create.go:284] running [docker network inspect multinode-079070] to gather additional debugging logs...
	I0916 10:56:07.096307  157008 cli_runner.go:164] Run: docker network inspect multinode-079070
	W0916 10:56:07.112374  157008 cli_runner.go:211] docker network inspect multinode-079070 returned with exit code 1
	I0916 10:56:07.112412  157008 network_create.go:287] error running [docker network inspect multinode-079070]: docker network inspect multinode-079070: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-079070 not found
	I0916 10:56:07.112424  157008 network_create.go:289] output of [docker network inspect multinode-079070]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-079070 not found
	
	** /stderr **
	I0916 10:56:07.112556  157008 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:56:07.129968  157008 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c95c64bb41bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bd:76:ab:c4} reservation:<nil>}
	I0916 10:56:07.130501  157008 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fad43aa9929b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:3f:61:fd} reservation:<nil>}
	I0916 10:56:07.131035  157008 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a9df90}
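
The three network.go lines above are minikube's free-subnet scan: 192.168.49.0/24 and 192.168.58.0/24 are taken by existing bridges, so it settles on 192.168.67.0/24. A rough Go sketch of that scan follows; the fixed +9 step in the third octet is inferred from the skips in the log, not taken from minikube's source:

    package main

    import "fmt"

    // pickFreeSubnet walks candidate /24 ranges starting at 192.168.49.0,
    // stepping the third octet by 9 (49, 58, 67, ...) as the log's skips
    // suggest, and returns the first CIDR not already used by a bridge.
    func pickFreeSubnet(taken map[string]bool) (string, error) {
    	for octet := 49; octet <= 254; octet += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[cidr] {
    			return cidr, nil
    		}
    	}
    	return "", fmt.Errorf("no free 192.168.x.0/24 subnet")
    }

    func main() {
    	taken := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
    	subnet, _ := pickFreeSubnet(taken)
    	fmt.Println(subnet) // 192.168.67.0/24, matching the log line above
    }
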
	I0916 10:56:07.131069  157008 network_create.go:124] attempt to create docker network multinode-079070 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0916 10:56:07.131117  157008 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-079070 multinode-079070
	I0916 10:56:07.190981  157008 network_create.go:108] docker network multinode-079070 192.168.67.0/24 created
	I0916 10:56:07.191010  157008 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-079070" container
	I0916 10:56:07.191075  157008 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:56:07.207629  157008 cli_runner.go:164] Run: docker volume create multinode-079070 --label name.minikube.sigs.k8s.io=multinode-079070 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:56:07.224927  157008 oci.go:103] Successfully created a docker volume multinode-079070
	I0916 10:56:07.225051  157008 cli_runner.go:164] Run: docker run --rm --name multinode-079070-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-079070 --entrypoint /usr/bin/test -v multinode-079070:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:56:07.759087  157008 oci.go:107] Successfully prepared a docker volume multinode-079070
	I0916 10:56:07.759157  157008 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:56:07.759182  157008 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:56:07.759253  157008 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-079070:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:56:12.136896  157008 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-079070:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.377562571s)
	I0916 10:56:12.136945  157008 kic.go:203] duration metric: took 4.377757648s to extract preloaded images to volume ...
	W0916 10:56:12.137124  157008 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:56:12.137277  157008 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:56:12.185030  157008 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-079070 --name multinode-079070 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-079070 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-079070 --network multinode-079070 --ip 192.168.67.2 --volume multinode-079070:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 10:56:12.503339  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Running}}
	I0916 10:56:12.521228  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:56:12.539581  157008 cli_runner.go:164] Run: docker exec multinode-079070 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:56:12.584105  157008 oci.go:144] the created container "multinode-079070" has a running status.
	I0916 10:56:12.584140  157008 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa...
	I0916 10:56:12.775237  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:56:12.775302  157008 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:56:12.799502  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:56:12.818341  157008 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:56:12.818363  157008 kic_runner.go:114] Args: [docker exec --privileged multinode-079070 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 10:56:12.930383  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:56:12.950563  157008 machine.go:93] provisionDockerMachine start ...
	I0916 10:56:12.950646  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:12.973362  157008 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:12.973701  157008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I0916 10:56:12.973720  157008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:56:13.143400  157008 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070
	
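
The native SSH client referenced above dials the forwarded host port (127.0.0.1:32908 maps to the container's 22/tcp) with the profile's id_rsa key as user docker. A self-contained sketch of the same round trip using golang.org/x/crypto/ssh; the key path assumes the default profile layout, and the host-key check is skipped only because the target is a local throwaway container:

    package main

    import (
    	"fmt"
    	"log"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path and forwarded port come from the log above; adjust
    	// the path to your own minikube home directory.
    	keyBytes, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/multinode-079070/id_rsa"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local throwaway VM only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:32908", cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	out, err := sess.Output("hostname")
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Printf("%s", out) // multinode-079070, as in the log output above
    }
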
	I0916 10:56:13.143431  157008 ubuntu.go:169] provisioning hostname "multinode-079070"
	I0916 10:56:13.143493  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:13.167054  157008 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:13.167313  157008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I0916 10:56:13.167337  157008 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-079070 && echo "multinode-079070" | sudo tee /etc/hostname
	I0916 10:56:13.318706  157008 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070
	
	I0916 10:56:13.318789  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:13.335511  157008 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:13.335747  157008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32908 <nil> <nil>}
	I0916 10:56:13.335776  157008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-079070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-079070/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-079070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:56:13.468115  157008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:56:13.468148  157008 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:56:13.468177  157008 ubuntu.go:177] setting up certificates
	I0916 10:56:13.468190  157008 provision.go:84] configureAuth start
	I0916 10:56:13.468242  157008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070
	I0916 10:56:13.485691  157008 provision.go:143] copyHostCerts
	I0916 10:56:13.485731  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:56:13.485767  157008 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:56:13.485777  157008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:56:13.485837  157008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:56:13.485915  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:56:13.485934  157008 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:56:13.485941  157008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:56:13.485967  157008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:56:13.486014  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:56:13.486033  157008 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:56:13.486039  157008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:56:13.486060  157008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:56:13.486112  157008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.multinode-079070 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-079070]
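
The provision.go line above issues a server certificate signed by the profile CA with the SANs listed in the log. A compact sketch of that step with Go's standard crypto/x509; caCert, caKey, and serverKey are assumed to be loaded already, and this mirrors the logged SAN list and CertExpiration rather than minikube's exact implementation:

    package provision

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"time"
    )

    // issueServerCert builds a server certificate with the SANs from the
    // log line above and signs it with the profile CA.
    func issueServerCert(caCert *x509.Certificate, caKey, serverKey *rsa.PrivateKey) ([]byte, error) {
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-079070"}},
    		DNSNames:     []string{"localhost", "minikube", "multinode-079070"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.2")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	if err != nil {
    		return nil, err
    	}
    	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
    }
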
	I0916 10:56:13.600716  157008 provision.go:177] copyRemoteCerts
	I0916 10:56:13.600789  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:56:13.600824  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:13.617706  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:56:13.712339  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:56:13.712404  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:56:13.734571  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:56:13.734631  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 10:56:13.756544  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:56:13.756620  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:56:13.778902  157008 provision.go:87] duration metric: took 310.700375ms to configureAuth
	I0916 10:56:13.778931  157008 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:56:13.779104  157008 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:56:13.779116  157008 machine.go:96] duration metric: took 828.530064ms to provisionDockerMachine
	I0916 10:56:13.779125  157008 client.go:171] duration metric: took 6.699995187s to LocalClient.Create
	I0916 10:56:13.779164  157008 start.go:167] duration metric: took 6.700059073s to libmachine.API.Create "multinode-079070"
	I0916 10:56:13.779180  157008 start.go:293] postStartSetup for "multinode-079070" (driver="docker")
	I0916 10:56:13.779193  157008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:56:13.779247  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:56:13.779295  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:13.796444  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:56:13.892329  157008 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:56:13.895193  157008 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:56:13.895212  157008 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:56:13.895218  157008 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:56:13.895223  157008 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:56:13.895228  157008 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:56:13.895232  157008 command_runner.go:130] > ID=ubuntu
	I0916 10:56:13.895246  157008 command_runner.go:130] > ID_LIKE=debian
	I0916 10:56:13.895252  157008 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:56:13.895257  157008 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:56:13.895262  157008 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:56:13.895271  157008 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:56:13.895277  157008 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:56:13.895332  157008 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:56:13.895355  157008 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:56:13.895362  157008 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:56:13.895368  157008 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:56:13.895379  157008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:56:13.895426  157008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:56:13.895517  157008 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:56:13.895529  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:56:13.895631  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:56:13.903481  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:56:13.925525  157008 start.go:296] duration metric: took 146.331045ms for postStartSetup
	I0916 10:56:13.925862  157008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070
	I0916 10:56:13.942731  157008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 10:56:13.943004  157008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:56:13.943050  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:13.959728  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:56:14.048589  157008 command_runner.go:130] > 31%
	I0916 10:56:14.048685  157008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:56:14.052463  157008 command_runner.go:130] > 203G
	I0916 10:56:14.052661  157008 start.go:128] duration metric: took 6.975610001s to createHost
	I0916 10:56:14.052678  157008 start.go:83] releasing machines lock for "multinode-079070", held for 6.975744478s
	I0916 10:56:14.052730  157008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070
	I0916 10:56:14.069077  157008 ssh_runner.go:195] Run: cat /version.json
	I0916 10:56:14.069154  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:14.069094  157008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:56:14.069266  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:14.086861  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:56:14.087891  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:56:14.175251  157008 command_runner.go:130] > {"iso_version": "v1.34.0-1726281733-19643", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "f890713149c79cf50e25c13e6a5c0470aa0f0450"}
	I0916 10:56:14.175374  157008 ssh_runner.go:195] Run: systemctl --version
	I0916 10:56:14.250692  157008 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:56:14.250757  157008 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0916 10:56:14.250786  157008 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0916 10:56:14.250864  157008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:56:14.255286  157008 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0916 10:56:14.255315  157008 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:56:14.255324  157008 command_runner.go:130] > Device: 35h/53d	Inode: 534561      Links: 1
	I0916 10:56:14.255332  157008 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:56:14.255341  157008 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:56:14.255348  157008 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:56:14.255356  157008 command_runner.go:130] > Change: 2024-09-16 10:23:17.243457347 +0000
	I0916 10:56:14.255364  157008 command_runner.go:130] >  Birth: 2024-09-16 10:23:17.243457347 +0000
	I0916 10:56:14.255583  157008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:56:14.278852  157008 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:56:14.278929  157008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:56:14.304421  157008 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0916 10:56:14.304476  157008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
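For context, the two commands above first normalize the loopback CNI config in place, then disable any bridge/podman CNI configs by renaming them. A minimal sketch of the loopback rewrite, assuming the stock /etc/cni/net.d/200-loopback.conf shipped in the kicbase image (the "before" content is an assumption; the "after" matches the "loopback" network reported by "sudo crictl info" further down in this log):

	# Sketch only, not part of the test run; same sed edits as the find -exec above,
	# applied to the one file it matches here.
	# Assumed stock content (54 bytes, per the stat output above):
	#   { "cniVersion": "0.3.1", "type": "loopback" }
	sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' /etc/cni/net.d/200-loopback.conf
	sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' /etc/cni/net.d/200-loopback.conf
	# Result, as echoed later by crictl info:
	#   { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }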
	I0916 10:56:14.304486  157008 start.go:495] detecting cgroup driver to use...
	I0916 10:56:14.304515  157008 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:56:14.304550  157008 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:56:14.315391  157008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:56:14.325823  157008 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:56:14.325875  157008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:56:14.337764  157008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:56:14.349981  157008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:56:14.424880  157008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:56:14.437969  157008 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0916 10:56:14.505325  157008 docker.go:233] disabling docker service ...
	I0916 10:56:14.505381  157008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:56:14.522467  157008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:56:14.532669  157008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:56:14.612746  157008 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0916 10:56:14.612821  157008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:56:14.693144  157008 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0916 10:56:14.693227  157008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:56:14.703590  157008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:56:14.716972  157008 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0916 10:56:14.717841  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:56:14.726833  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:56:14.735526  157008 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:56:14.735593  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:56:14.744272  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:56:14.753048  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:56:14.762010  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:56:14.771227  157008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:56:14.780036  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:56:14.789074  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:56:14.797916  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:56:14.807028  157008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:56:14.813737  157008 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:56:14.814419  157008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:56:14.821900  157008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:14.900926  157008 ssh_runner.go:195] Run: sudo systemctl restart containerd
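Taken together, the sed edits above pin the sandbox image to registry.k8s.io/pause:3.10, force SystemdCgroup = false (matching the detected cgroupfs driver), disable restrict_oom_score_adj, re-enable unprivileged ports, and point conf_dir at /etc/cni/net.d before containerd is restarted. A hypothetical spot-check of the rewritten config, with expected values taken from the crictl info dump later in this log:

	# Sketch only, not executed by the test: confirm the sed edits landed.
	grep -E 'sandbox_image|SystemdCgroup|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	# Expected matches, per the crictl info output below:
	#   sandbox_image = "registry.k8s.io/pause:3.10"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true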
	I0916 10:56:14.994947  157008 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:56:14.995012  157008 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:56:14.998453  157008 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0916 10:56:14.998478  157008 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:56:14.998488  157008 command_runner.go:130] > Device: 40h/64d	Inode: 175         Links: 1
	I0916 10:56:14.998507  157008 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:56:14.998519  157008 command_runner.go:130] > Access: 2024-09-16 10:56:14.961807921 +0000
	I0916 10:56:14.998527  157008 command_runner.go:130] > Modify: 2024-09-16 10:56:14.961807921 +0000
	I0916 10:56:14.998531  157008 command_runner.go:130] > Change: 2024-09-16 10:56:14.961807921 +0000
	I0916 10:56:14.998535  157008 command_runner.go:130] >  Birth: -
	I0916 10:56:14.998552  157008 start.go:563] Will wait 60s for crictl version
	I0916 10:56:14.998604  157008 ssh_runner.go:195] Run: which crictl
	I0916 10:56:15.001804  157008 command_runner.go:130] > /usr/bin/crictl
	I0916 10:56:15.001870  157008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:56:15.031721  157008 command_runner.go:130] > Version:  0.1.0
	I0916 10:56:15.031759  157008 command_runner.go:130] > RuntimeName:  containerd
	I0916 10:56:15.031768  157008 command_runner.go:130] > RuntimeVersion:  1.7.22
	I0916 10:56:15.031775  157008 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:56:15.033635  157008 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:56:15.033711  157008 ssh_runner.go:195] Run: containerd --version
	I0916 10:56:15.054975  157008 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:56:15.055045  157008 ssh_runner.go:195] Run: containerd --version
	I0916 10:56:15.076262  157008 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:56:15.078620  157008 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:56:15.080057  157008 cli_runner.go:164] Run: docker network inspect multinode-079070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:56:15.096411  157008 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:56:15.099798  157008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:56:15.109798  157008 kubeadm.go:883] updating cluster {Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:56:15.109910  157008 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:56:15.109953  157008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:56:15.138875  157008 command_runner.go:130] > {
	I0916 10:56:15.138898  157008 command_runner.go:130] >   "images": [
	I0916 10:56:15.138905  157008 command_runner.go:130] >     {
	I0916 10:56:15.138917  157008 command_runner.go:130] >       "id": "sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:56:15.138931  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.138940  157008 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:56:15.138946  157008 command_runner.go:130] >       ],
	I0916 10:56:15.138953  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.138963  157008 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:56:15.138970  157008 command_runner.go:130] >       ],
	I0916 10:56:15.138976  157008 command_runner.go:130] >       "size": "36793393",
	I0916 10:56:15.138985  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.138992  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.139002  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139008  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139017  157008 command_runner.go:130] >     },
	I0916 10:56:15.139022  157008 command_runner.go:130] >     {
	I0916 10:56:15.139037  157008 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:56:15.139047  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139054  157008 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:56:15.139060  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139066  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139082  157008 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0916 10:56:15.139090  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139096  157008 command_runner.go:130] >       "size": "9058936",
	I0916 10:56:15.139106  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.139112  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.139121  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139128  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139135  157008 command_runner.go:130] >     },
	I0916 10:56:15.139139  157008 command_runner.go:130] >     {
	I0916 10:56:15.139148  157008 command_runner.go:130] >       "id": "sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:56:15.139157  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139171  157008 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:56:15.139180  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139186  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139200  157008 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"
	I0916 10:56:15.139214  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139223  157008 command_runner.go:130] >       "size": "18562039",
	I0916 10:56:15.139227  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.139231  157008 command_runner.go:130] >       "username": "nonroot",
	I0916 10:56:15.139241  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139248  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139257  157008 command_runner.go:130] >     },
	I0916 10:56:15.139262  157008 command_runner.go:130] >     {
	I0916 10:56:15.139273  157008 command_runner.go:130] >       "id": "sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:56:15.139282  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139291  157008 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:56:15.139299  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139305  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139321  157008 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:56:15.139330  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139338  157008 command_runner.go:130] >       "size": "56909194",
	I0916 10:56:15.139346  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.139353  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.139379  157008 command_runner.go:130] >       },
	I0916 10:56:15.139392  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.139399  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139404  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139412  157008 command_runner.go:130] >     },
	I0916 10:56:15.139418  157008 command_runner.go:130] >     {
	I0916 10:56:15.139428  157008 command_runner.go:130] >       "id": "sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:56:15.139438  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139446  157008 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:56:15.139454  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139461  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139481  157008 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:56:15.139488  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139493  157008 command_runner.go:130] >       "size": "28047142",
	I0916 10:56:15.139502  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.139510  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.139521  157008 command_runner.go:130] >       },
	I0916 10:56:15.139528  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.139534  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139543  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139549  157008 command_runner.go:130] >     },
	I0916 10:56:15.139557  157008 command_runner.go:130] >     {
	I0916 10:56:15.139568  157008 command_runner.go:130] >       "id": "sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:56:15.139575  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139584  157008 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:56:15.139593  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139601  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139615  157008 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"
	I0916 10:56:15.139623  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139631  157008 command_runner.go:130] >       "size": "26221554",
	I0916 10:56:15.139639  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.139646  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.139653  157008 command_runner.go:130] >       },
	I0916 10:56:15.139657  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.139665  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139671  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139679  157008 command_runner.go:130] >     },
	I0916 10:56:15.139685  157008 command_runner.go:130] >     {
	I0916 10:56:15.139695  157008 command_runner.go:130] >       "id": "sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:56:15.139704  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139711  157008 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:56:15.139720  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139726  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139756  157008 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"
	I0916 10:56:15.139765  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139773  157008 command_runner.go:130] >       "size": "30211884",
	I0916 10:56:15.139782  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.139789  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.139799  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139806  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139813  157008 command_runner.go:130] >     },
	I0916 10:56:15.139818  157008 command_runner.go:130] >     {
	I0916 10:56:15.139825  157008 command_runner.go:130] >       "id": "sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:56:15.139831  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139842  157008 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:56:15.139849  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139858  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139870  157008 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"
	I0916 10:56:15.139878  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139887  157008 command_runner.go:130] >       "size": "20177215",
	I0916 10:56:15.139896  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.139902  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.139908  157008 command_runner.go:130] >       },
	I0916 10:56:15.139912  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.139918  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.139927  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.139933  157008 command_runner.go:130] >     },
	I0916 10:56:15.139941  157008 command_runner.go:130] >     {
	I0916 10:56:15.139951  157008 command_runner.go:130] >       "id": "sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:56:15.139960  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.139967  157008 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:56:15.139975  157008 command_runner.go:130] >       ],
	I0916 10:56:15.139982  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.139992  157008 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:56:15.139999  157008 command_runner.go:130] >       ],
	I0916 10:56:15.140006  157008 command_runner.go:130] >       "size": "320368",
	I0916 10:56:15.140015  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.140022  157008 command_runner.go:130] >         "value": "65535"
	I0916 10:56:15.140030  157008 command_runner.go:130] >       },
	I0916 10:56:15.140036  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.140046  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.140054  157008 command_runner.go:130] >       "pinned": true
	I0916 10:56:15.140062  157008 command_runner.go:130] >     }
	I0916 10:56:15.140068  157008 command_runner.go:130] >   ]
	I0916 10:56:15.140075  157008 command_runner.go:130] > }
	I0916 10:56:15.141136  157008 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:56:15.141152  157008 containerd.go:534] Images already preloaded, skipping extraction
	I0916 10:56:15.141194  157008 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:56:15.173374  157008 command_runner.go:130] > {
	I0916 10:56:15.173399  157008 command_runner.go:130] >   "images": [
	I0916 10:56:15.173404  157008 command_runner.go:130] >     {
	I0916 10:56:15.173412  157008 command_runner.go:130] >       "id": "sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:56:15.173417  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.173422  157008 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:56:15.173425  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173430  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.173442  157008 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:56:15.173447  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173454  157008 command_runner.go:130] >       "size": "36793393",
	I0916 10:56:15.173459  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.173465  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.173473  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.173479  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.173485  157008 command_runner.go:130] >     },
	I0916 10:56:15.173490  157008 command_runner.go:130] >     {
	I0916 10:56:15.173501  157008 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:56:15.173507  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.173514  157008 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:56:15.173517  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173522  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.173529  157008 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0916 10:56:15.173533  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173539  157008 command_runner.go:130] >       "size": "9058936",
	I0916 10:56:15.173545  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.173556  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.173563  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.173573  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.173578  157008 command_runner.go:130] >     },
	I0916 10:56:15.173602  157008 command_runner.go:130] >     {
	I0916 10:56:15.173617  157008 command_runner.go:130] >       "id": "sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:56:15.173623  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.173631  157008 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:56:15.173639  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173649  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.173663  157008 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"
	I0916 10:56:15.173672  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173682  157008 command_runner.go:130] >       "size": "18562039",
	I0916 10:56:15.173692  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.173701  157008 command_runner.go:130] >       "username": "nonroot",
	I0916 10:56:15.173710  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.173716  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.173720  157008 command_runner.go:130] >     },
	I0916 10:56:15.173729  157008 command_runner.go:130] >     {
	I0916 10:56:15.173742  157008 command_runner.go:130] >       "id": "sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:56:15.173752  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.173763  157008 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:56:15.173771  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173779  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.173793  157008 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:56:15.173801  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173808  157008 command_runner.go:130] >       "size": "56909194",
	I0916 10:56:15.173812  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.173821  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.173829  157008 command_runner.go:130] >       },
	I0916 10:56:15.173839  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.173848  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.173857  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.173866  157008 command_runner.go:130] >     },
	I0916 10:56:15.173874  157008 command_runner.go:130] >     {
	I0916 10:56:15.173889  157008 command_runner.go:130] >       "id": "sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:56:15.173896  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.173904  157008 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:56:15.173913  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173923  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.173941  157008 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:56:15.173950  157008 command_runner.go:130] >       ],
	I0916 10:56:15.173960  157008 command_runner.go:130] >       "size": "28047142",
	I0916 10:56:15.173969  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.173978  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.173986  157008 command_runner.go:130] >       },
	I0916 10:56:15.173994  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.173998  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.174007  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.174015  157008 command_runner.go:130] >     },
	I0916 10:56:15.174021  157008 command_runner.go:130] >     {
	I0916 10:56:15.174034  157008 command_runner.go:130] >       "id": "sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:56:15.174043  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.174054  157008 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:56:15.174062  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174072  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.174085  157008 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"
	I0916 10:56:15.174091  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174097  157008 command_runner.go:130] >       "size": "26221554",
	I0916 10:56:15.174106  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.174115  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.174121  157008 command_runner.go:130] >       },
	I0916 10:56:15.174131  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.174140  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.174149  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.174157  157008 command_runner.go:130] >     },
	I0916 10:56:15.174165  157008 command_runner.go:130] >     {
	I0916 10:56:15.174176  157008 command_runner.go:130] >       "id": "sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:56:15.174184  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.174189  157008 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:56:15.174200  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174210  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.174224  157008 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"
	I0916 10:56:15.174233  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174242  157008 command_runner.go:130] >       "size": "30211884",
	I0916 10:56:15.174251  157008 command_runner.go:130] >       "uid": null,
	I0916 10:56:15.174261  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.174269  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.174274  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.174278  157008 command_runner.go:130] >     },
	I0916 10:56:15.174286  157008 command_runner.go:130] >     {
	I0916 10:56:15.174299  157008 command_runner.go:130] >       "id": "sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:56:15.174306  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.174317  157008 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:56:15.174325  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174335  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.174353  157008 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"
	I0916 10:56:15.174361  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174368  157008 command_runner.go:130] >       "size": "20177215",
	I0916 10:56:15.174372  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.174381  157008 command_runner.go:130] >         "value": "0"
	I0916 10:56:15.174389  157008 command_runner.go:130] >       },
	I0916 10:56:15.174399  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.174408  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.174417  157008 command_runner.go:130] >       "pinned": false
	I0916 10:56:15.174426  157008 command_runner.go:130] >     },
	I0916 10:56:15.174434  157008 command_runner.go:130] >     {
	I0916 10:56:15.174447  157008 command_runner.go:130] >       "id": "sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:56:15.174454  157008 command_runner.go:130] >       "repoTags": [
	I0916 10:56:15.174459  157008 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:56:15.174466  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174476  157008 command_runner.go:130] >       "repoDigests": [
	I0916 10:56:15.174490  157008 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:56:15.174500  157008 command_runner.go:130] >       ],
	I0916 10:56:15.174509  157008 command_runner.go:130] >       "size": "320368",
	I0916 10:56:15.174518  157008 command_runner.go:130] >       "uid": {
	I0916 10:56:15.174527  157008 command_runner.go:130] >         "value": "65535"
	I0916 10:56:15.174535  157008 command_runner.go:130] >       },
	I0916 10:56:15.174542  157008 command_runner.go:130] >       "username": "",
	I0916 10:56:15.174547  157008 command_runner.go:130] >       "spec": null,
	I0916 10:56:15.174555  157008 command_runner.go:130] >       "pinned": true
	I0916 10:56:15.174563  157008 command_runner.go:130] >     }
	I0916 10:56:15.174569  157008 command_runner.go:130] >   ]
	I0916 10:56:15.174578  157008 command_runner.go:130] > }
	I0916 10:56:15.174716  157008 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:56:15.174727  157008 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:56:15.174735  157008 kubeadm.go:934] updating node { 192.168.67.2 8443 v1.31.1 containerd true true} ...
	I0916 10:56:15.174844  157008 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-079070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
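The unit override above is installed as a systemd drop-in (the 10-kubeadm.conf written via scp a few lines below). Purely as a sketch, the effective unit on the node could be inspected with:

	# Sketch only: print kubelet.service together with its drop-ins.
	systemctl cat kubelet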
	I0916 10:56:15.174914  157008 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:56:15.205300  157008 command_runner.go:130] > {
	I0916 10:56:15.205321  157008 command_runner.go:130] >   "status": {
	I0916 10:56:15.205327  157008 command_runner.go:130] >     "conditions": [
	I0916 10:56:15.205330  157008 command_runner.go:130] >       {
	I0916 10:56:15.205336  157008 command_runner.go:130] >         "type": "RuntimeReady",
	I0916 10:56:15.205346  157008 command_runner.go:130] >         "status": true,
	I0916 10:56:15.205350  157008 command_runner.go:130] >         "reason": "",
	I0916 10:56:15.205356  157008 command_runner.go:130] >         "message": ""
	I0916 10:56:15.205365  157008 command_runner.go:130] >       },
	I0916 10:56:15.205371  157008 command_runner.go:130] >       {
	I0916 10:56:15.205377  157008 command_runner.go:130] >         "type": "NetworkReady",
	I0916 10:56:15.205385  157008 command_runner.go:130] >         "status": true,
	I0916 10:56:15.205394  157008 command_runner.go:130] >         "reason": "",
	I0916 10:56:15.205403  157008 command_runner.go:130] >         "message": ""
	I0916 10:56:15.205407  157008 command_runner.go:130] >       },
	I0916 10:56:15.205410  157008 command_runner.go:130] >       {
	I0916 10:56:15.205418  157008 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings",
	I0916 10:56:15.205422  157008 command_runner.go:130] >         "status": true,
	I0916 10:56:15.205428  157008 command_runner.go:130] >         "reason": "",
	I0916 10:56:15.205432  157008 command_runner.go:130] >         "message": ""
	I0916 10:56:15.205438  157008 command_runner.go:130] >       }
	I0916 10:56:15.205441  157008 command_runner.go:130] >     ]
	I0916 10:56:15.205445  157008 command_runner.go:130] >   },
	I0916 10:56:15.205451  157008 command_runner.go:130] >   "cniconfig": {
	I0916 10:56:15.205460  157008 command_runner.go:130] >     "PluginDirs": [
	I0916 10:56:15.205473  157008 command_runner.go:130] >       "/opt/cni/bin"
	I0916 10:56:15.205489  157008 command_runner.go:130] >     ],
	I0916 10:56:15.205502  157008 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I0916 10:56:15.205510  157008 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I0916 10:56:15.205514  157008 command_runner.go:130] >     "Prefix": "eth",
	I0916 10:56:15.205521  157008 command_runner.go:130] >     "Networks": [
	I0916 10:56:15.205524  157008 command_runner.go:130] >       {
	I0916 10:56:15.205529  157008 command_runner.go:130] >         "Config": {
	I0916 10:56:15.205533  157008 command_runner.go:130] >           "Name": "cni-loopback",
	I0916 10:56:15.205540  157008 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0916 10:56:15.205544  157008 command_runner.go:130] >           "Plugins": [
	I0916 10:56:15.205554  157008 command_runner.go:130] >             {
	I0916 10:56:15.205565  157008 command_runner.go:130] >               "Network": {
	I0916 10:56:15.205572  157008 command_runner.go:130] >                 "type": "loopback",
	I0916 10:56:15.205583  157008 command_runner.go:130] >                 "ipam": {},
	I0916 10:56:15.205593  157008 command_runner.go:130] >                 "dns": {}
	I0916 10:56:15.205602  157008 command_runner.go:130] >               },
	I0916 10:56:15.205613  157008 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I0916 10:56:15.205621  157008 command_runner.go:130] >             }
	I0916 10:56:15.205628  157008 command_runner.go:130] >           ],
	I0916 10:56:15.205640  157008 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0916 10:56:15.205649  157008 command_runner.go:130] >         },
	I0916 10:56:15.205659  157008 command_runner.go:130] >         "IFName": "lo"
	I0916 10:56:15.205667  157008 command_runner.go:130] >       },
	I0916 10:56:15.205676  157008 command_runner.go:130] >       {
	I0916 10:56:15.205682  157008 command_runner.go:130] >         "Config": {
	I0916 10:56:15.205691  157008 command_runner.go:130] >           "Name": "loopback",
	I0916 10:56:15.205702  157008 command_runner.go:130] >           "CNIVersion": "1.0.0",
	I0916 10:56:15.205710  157008 command_runner.go:130] >           "Plugins": [
	I0916 10:56:15.205718  157008 command_runner.go:130] >             {
	I0916 10:56:15.205723  157008 command_runner.go:130] >               "Network": {
	I0916 10:56:15.205731  157008 command_runner.go:130] >                 "cniVersion": "1.0.0",
	I0916 10:56:15.205740  157008 command_runner.go:130] >                 "name": "loopback",
	I0916 10:56:15.205751  157008 command_runner.go:130] >                 "type": "loopback",
	I0916 10:56:15.205768  157008 command_runner.go:130] >                 "ipam": {},
	I0916 10:56:15.205783  157008 command_runner.go:130] >                 "dns": {}
	I0916 10:56:15.205793  157008 command_runner.go:130] >               },
	I0916 10:56:15.205807  157008 command_runner.go:130] >               "Source": "{\"cniVersion\":\"1.0.0\",\"name\":\"loopback\",\"type\":\"loopback\"}"
	I0916 10:56:15.205815  157008 command_runner.go:130] >             }
	I0916 10:56:15.205821  157008 command_runner.go:130] >           ],
	I0916 10:56:15.205841  157008 command_runner.go:130] >           "Source": "{\"cniVersion\":\"1.0.0\",\"name\":\"loopback\",\"plugins\":[{\"cniVersion\":\"1.0.0\",\"name\":\"loopback\",\"type\":\"loopback\"}]}"
	I0916 10:56:15.205851  157008 command_runner.go:130] >         },
	I0916 10:56:15.205858  157008 command_runner.go:130] >         "IFName": "eth0"
	I0916 10:56:15.205866  157008 command_runner.go:130] >       }
	I0916 10:56:15.205872  157008 command_runner.go:130] >     ]
	I0916 10:56:15.205879  157008 command_runner.go:130] >   },
	I0916 10:56:15.205888  157008 command_runner.go:130] >   "config": {
	I0916 10:56:15.205897  157008 command_runner.go:130] >     "containerd": {
	I0916 10:56:15.205907  157008 command_runner.go:130] >       "snapshotter": "overlayfs",
	I0916 10:56:15.205917  157008 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I0916 10:56:15.205924  157008 command_runner.go:130] >       "defaultRuntime": {
	I0916 10:56:15.205929  157008 command_runner.go:130] >         "runtimeType": "",
	I0916 10:56:15.205940  157008 command_runner.go:130] >         "runtimePath": "",
	I0916 10:56:15.205950  157008 command_runner.go:130] >         "runtimeEngine": "",
	I0916 10:56:15.205958  157008 command_runner.go:130] >         "PodAnnotations": null,
	I0916 10:56:15.205976  157008 command_runner.go:130] >         "ContainerAnnotations": null,
	I0916 10:56:15.205986  157008 command_runner.go:130] >         "runtimeRoot": "",
	I0916 10:56:15.205995  157008 command_runner.go:130] >         "options": null,
	I0916 10:56:15.206007  157008 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0916 10:56:15.206018  157008 command_runner.go:130] >         "privileged_without_host_devices_all_devices_allowed": false,
	I0916 10:56:15.206025  157008 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0916 10:56:15.206031  157008 command_runner.go:130] >         "cniConfDir": "",
	I0916 10:56:15.206041  157008 command_runner.go:130] >         "cniMaxConfNum": 0,
	I0916 10:56:15.206050  157008 command_runner.go:130] >         "snapshotter": "",
	I0916 10:56:15.206058  157008 command_runner.go:130] >         "sandboxMode": ""
	I0916 10:56:15.206066  157008 command_runner.go:130] >       },
	I0916 10:56:15.206076  157008 command_runner.go:130] >       "untrustedWorkloadRuntime": {
	I0916 10:56:15.206101  157008 command_runner.go:130] >         "runtimeType": "",
	I0916 10:56:15.206110  157008 command_runner.go:130] >         "runtimePath": "",
	I0916 10:56:15.206118  157008 command_runner.go:130] >         "runtimeEngine": "",
	I0916 10:56:15.206123  157008 command_runner.go:130] >         "PodAnnotations": null,
	I0916 10:56:15.206132  157008 command_runner.go:130] >         "ContainerAnnotations": null,
	I0916 10:56:15.206141  157008 command_runner.go:130] >         "runtimeRoot": "",
	I0916 10:56:15.206151  157008 command_runner.go:130] >         "options": null,
	I0916 10:56:15.206164  157008 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0916 10:56:15.206176  157008 command_runner.go:130] >         "privileged_without_host_devices_all_devices_allowed": false,
	I0916 10:56:15.206185  157008 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0916 10:56:15.206195  157008 command_runner.go:130] >         "cniConfDir": "",
	I0916 10:56:15.206203  157008 command_runner.go:130] >         "cniMaxConfNum": 0,
	I0916 10:56:15.206209  157008 command_runner.go:130] >         "snapshotter": "",
	I0916 10:56:15.206213  157008 command_runner.go:130] >         "sandboxMode": ""
	I0916 10:56:15.206221  157008 command_runner.go:130] >       },
	I0916 10:56:15.206235  157008 command_runner.go:130] >       "runtimes": {
	I0916 10:56:15.206245  157008 command_runner.go:130] >         "runc": {
	I0916 10:56:15.206256  157008 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0916 10:56:15.206265  157008 command_runner.go:130] >           "runtimePath": "",
	I0916 10:56:15.206275  157008 command_runner.go:130] >           "runtimeEngine": "",
	I0916 10:56:15.206284  157008 command_runner.go:130] >           "PodAnnotations": null,
	I0916 10:56:15.206291  157008 command_runner.go:130] >           "ContainerAnnotations": null,
	I0916 10:56:15.206299  157008 command_runner.go:130] >           "runtimeRoot": "",
	I0916 10:56:15.206302  157008 command_runner.go:130] >           "options": {
	I0916 10:56:15.206310  157008 command_runner.go:130] >             "SystemdCgroup": false
	I0916 10:56:15.206319  157008 command_runner.go:130] >           },
	I0916 10:56:15.206327  157008 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0916 10:56:15.206373  157008 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I0916 10:56:15.206388  157008 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0916 10:56:15.206394  157008 command_runner.go:130] >           "cniConfDir": "",
	I0916 10:56:15.206401  157008 command_runner.go:130] >           "cniMaxConfNum": 0,
	I0916 10:56:15.206408  157008 command_runner.go:130] >           "snapshotter": "",
	I0916 10:56:15.206415  157008 command_runner.go:130] >           "sandboxMode": "podsandbox"
	I0916 10:56:15.206424  157008 command_runner.go:130] >         }
	I0916 10:56:15.206429  157008 command_runner.go:130] >       },
	I0916 10:56:15.206436  157008 command_runner.go:130] >       "noPivot": false,
	I0916 10:56:15.206447  157008 command_runner.go:130] >       "disableSnapshotAnnotations": true,
	I0916 10:56:15.206454  157008 command_runner.go:130] >       "discardUnpackedLayers": true,
	I0916 10:56:15.206465  157008 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I0916 10:56:15.206473  157008 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false
	I0916 10:56:15.206481  157008 command_runner.go:130] >     },
	I0916 10:56:15.206485  157008 command_runner.go:130] >     "cni": {
	I0916 10:56:15.206494  157008 command_runner.go:130] >       "binDir": "/opt/cni/bin",
	I0916 10:56:15.206505  157008 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I0916 10:56:15.206515  157008 command_runner.go:130] >       "maxConfNum": 1,
	I0916 10:56:15.206522  157008 command_runner.go:130] >       "setupSerially": false,
	I0916 10:56:15.206531  157008 command_runner.go:130] >       "confTemplate": "",
	I0916 10:56:15.206540  157008 command_runner.go:130] >       "ipPref": ""
	I0916 10:56:15.206549  157008 command_runner.go:130] >     },
	I0916 10:56:15.206559  157008 command_runner.go:130] >     "registry": {
	I0916 10:56:15.206571  157008 command_runner.go:130] >       "configPath": "/etc/containerd/certs.d",
	I0916 10:56:15.206580  157008 command_runner.go:130] >       "mirrors": null,
	I0916 10:56:15.206587  157008 command_runner.go:130] >       "configs": null,
	I0916 10:56:15.206594  157008 command_runner.go:130] >       "auths": null,
	I0916 10:56:15.206602  157008 command_runner.go:130] >       "headers": null
	I0916 10:56:15.206612  157008 command_runner.go:130] >     },
	I0916 10:56:15.206618  157008 command_runner.go:130] >     "imageDecryption": {
	I0916 10:56:15.206637  157008 command_runner.go:130] >       "keyModel": "node"
	I0916 10:56:15.206645  157008 command_runner.go:130] >     },
	I0916 10:56:15.206653  157008 command_runner.go:130] >     "disableTCPService": true,
	I0916 10:56:15.206663  157008 command_runner.go:130] >     "streamServerAddress": "",
	I0916 10:56:15.206672  157008 command_runner.go:130] >     "streamServerPort": "10010",
	I0916 10:56:15.206680  157008 command_runner.go:130] >     "streamIdleTimeout": "4h0m0s",
	I0916 10:56:15.206687  157008 command_runner.go:130] >     "enableSelinux": false,
	I0916 10:56:15.206694  157008 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I0916 10:56:15.206710  157008 command_runner.go:130] >     "sandboxImage": "registry.k8s.io/pause:3.10",
	I0916 10:56:15.206722  157008 command_runner.go:130] >     "statsCollectPeriod": 10,
	I0916 10:56:15.206732  157008 command_runner.go:130] >     "systemdCgroup": false,
	I0916 10:56:15.206742  157008 command_runner.go:130] >     "enableTLSStreaming": false,
	I0916 10:56:15.206751  157008 command_runner.go:130] >     "x509KeyPairStreaming": {
	I0916 10:56:15.206761  157008 command_runner.go:130] >       "tlsCertFile": "",
	I0916 10:56:15.206768  157008 command_runner.go:130] >       "tlsKeyFile": ""
	I0916 10:56:15.206771  157008 command_runner.go:130] >     },
	I0916 10:56:15.206777  157008 command_runner.go:130] >     "maxContainerLogSize": 16384,
	I0916 10:56:15.206787  157008 command_runner.go:130] >     "disableCgroup": false,
	I0916 10:56:15.206797  157008 command_runner.go:130] >     "disableApparmor": false,
	I0916 10:56:15.206804  157008 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I0916 10:56:15.206814  157008 command_runner.go:130] >     "maxConcurrentDownloads": 3,
	I0916 10:56:15.206824  157008 command_runner.go:130] >     "disableProcMount": false,
	I0916 10:56:15.206834  157008 command_runner.go:130] >     "unsetSeccompProfile": "",
	I0916 10:56:15.206844  157008 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I0916 10:56:15.206854  157008 command_runner.go:130] >     "disableHugetlbController": true,
	I0916 10:56:15.206864  157008 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I0916 10:56:15.206871  157008 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I0916 10:56:15.206877  157008 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I0916 10:56:15.206888  157008 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I0916 10:56:15.206898  157008 command_runner.go:130] >     "enableUnprivilegedICMP": false,
	I0916 10:56:15.206905  157008 command_runner.go:130] >     "enableCDI": false,
	I0916 10:56:15.206915  157008 command_runner.go:130] >     "cdiSpecDirs": [
	I0916 10:56:15.206924  157008 command_runner.go:130] >       "/etc/cdi",
	I0916 10:56:15.206933  157008 command_runner.go:130] >       "/var/run/cdi"
	I0916 10:56:15.206941  157008 command_runner.go:130] >     ],
	I0916 10:56:15.206952  157008 command_runner.go:130] >     "imagePullProgressTimeout": "5m0s",
	I0916 10:56:15.206961  157008 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I0916 10:56:15.206969  157008 command_runner.go:130] >     "imagePullWithSyncFs": false,
	I0916 10:56:15.206975  157008 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I0916 10:56:15.206985  157008 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I0916 10:56:15.206997  157008 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0916 10:56:15.207007  157008 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0916 10:56:15.207019  157008 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0916 10:56:15.207027  157008 command_runner.go:130] >   },
	I0916 10:56:15.207034  157008 command_runner.go:130] >   "golang": "go1.22.7",
	I0916 10:56:15.207044  157008 command_runner.go:130] >   "lastCNILoadStatus": "OK",
	I0916 10:56:15.207055  157008 command_runner.go:130] >   "lastCNILoadStatus.default": "OK"
	I0916 10:56:15.207062  157008 command_runner.go:130] > }
	I0916 10:56:15.207858  157008 cni.go:84] Creating CNI manager for ""
	I0916 10:56:15.207876  157008 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:56:15.207885  157008 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 10:56:15.207904  157008 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-079070 NodeName:multinode-079070 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:56:15.208019  157008 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "multinode-079070"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
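
For context: minikube produces this kubeadm.yaml by rendering Go templates and then copying the result to the node (the rendered output is what kubeadm.go:187 prints above). A minimal, self-contained sketch of that render pattern, using a hypothetical, heavily trimmed template and only the values visible in this run:

    // Sketch only: minikube's real templates cover many more fields.
    package main

    import (
    	"os"
    	"text/template"
    )

    type params struct {
    	AdvertiseAddress, NodeName, PodSubnet, ServiceSubnet, K8sVersion string
    	BindPort                                                         int
    }

    // Built from quoted strings so the YAML indentation is explicit.
    const kubeadmTmpl = "apiVersion: kubeadm.k8s.io/v1beta3\n" +
    	"kind: InitConfiguration\n" +
    	"localAPIEndpoint:\n" +
    	"  advertiseAddress: {{.AdvertiseAddress}}\n" +
    	"  bindPort: {{.BindPort}}\n" +
    	"nodeRegistration:\n" +
    	"  name: \"{{.NodeName}}\"\n" +
    	"---\n" +
    	"apiVersion: kubeadm.k8s.io/v1beta3\n" +
    	"kind: ClusterConfiguration\n" +
    	"kubernetesVersion: {{.K8sVersion}}\n" +
    	"networking:\n" +
    	"  podSubnet: \"{{.PodSubnet}}\"\n" +
    	"  serviceSubnet: {{.ServiceSubnet}}\n"

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	// Values taken from the log above.
    	_ = t.Execute(os.Stdout, params{
    		AdvertiseAddress: "192.168.67.2", BindPort: 8443,
    		NodeName:   "multinode-079070",
    		PodSubnet:  "10.244.0.0/16", ServiceSubnet: "10.96.0.0/12",
    		K8sVersion: "v1.31.1",
    	})
    }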
	
	I0916 10:56:15.208068  157008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:56:15.215694  157008 command_runner.go:130] > kubeadm
	I0916 10:56:15.215718  157008 command_runner.go:130] > kubectl
	I0916 10:56:15.215725  157008 command_runner.go:130] > kubelet
	I0916 10:56:15.216368  157008 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:56:15.216431  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:56:15.224456  157008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0916 10:56:15.241056  157008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:56:15.257712  157008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2170 bytes)
	I0916 10:56:15.275465  157008 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:56:15.279078  157008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
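
The bash one-liner above is an idempotent /etc/hosts pin: strip any existing line ending in the tab-separated host name, then append the desired IP mapping. A rough Go equivalent (path and names are this run's; writing the real /etc/hosts requires root):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pinHost removes stale entries for name and appends "ip\tname".
    func pinHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // drop any stale entry for this name
    		}
    		kept = append(kept, line)
    	}
    	for len(kept) > 0 && kept[len(kept)-1] == "" {
    		kept = kept[:len(kept)-1] // trim trailing blank lines
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := pinHost("/etc/hosts", "192.168.67.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }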
	I0916 10:56:15.290481  157008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:15.369272  157008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:56:15.381993  157008 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070 for IP: 192.168.67.2
	I0916 10:56:15.382015  157008 certs.go:194] generating shared ca certs ...
	I0916 10:56:15.382033  157008 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:15.382191  157008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:56:15.382253  157008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:56:15.382266  157008 certs.go:256] generating profile certs ...
	I0916 10:56:15.382344  157008 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key
	I0916 10:56:15.382363  157008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt with IP's: []
	I0916 10:56:15.890361  157008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt ...
	I0916 10:56:15.890397  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt: {Name:mke77f19dd9f1aa14d60b0b2a0a9ccea8a327db7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:15.890605  157008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key ...
	I0916 10:56:15.890622  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key: {Name:mkb98bd48c6b5f4f7b008ccbf89314aa876a0d64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:15.890727  157008 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key.5aac267e
	I0916 10:56:15.890743  157008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt.5aac267e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.67.2]
	I0916 10:56:16.123421  157008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt.5aac267e ...
	I0916 10:56:16.123454  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt.5aac267e: {Name:mk080ec82addec1a87e312f5523e395a1817fa15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:16.123654  157008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key.5aac267e ...
	I0916 10:56:16.123672  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key.5aac267e: {Name:mkc108633e0515ffb371d90ff0bbaa0a5c33d482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:16.123793  157008 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt.5aac267e -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt
	I0916 10:56:16.123877  157008 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key.5aac267e -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key
	I0916 10:56:16.123982  157008 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key
	I0916 10:56:16.124001  157008 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.crt with IP's: []
	I0916 10:56:16.327344  157008 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.crt ...
	I0916 10:56:16.327374  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.crt: {Name:mkc1cfc8a9cd4f01cded61bcfa2e37fb4a0e6ae6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:16.327537  157008 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key ...
	I0916 10:56:16.327550  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key: {Name:mk0df2d3d9d6721ce4f6b0e843e07f616c6a4e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
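
The apiserver profile cert generated above carries IP SANs covering the service VIP (10.96.0.1), loopback, and the node IP. A self-contained sketch of issuing such a certificate with Go's crypto/x509; unlike the real flow, which signs with minikubeCA, this one is self-signed to stay short, and error handling is elided:

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{ // the IP SAN set from the log above
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.67.2"),
    		},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }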
	I0916 10:56:16.327620  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:56:16.327638  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:56:16.327649  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:56:16.327665  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:56:16.327678  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:56:16.327690  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:56:16.327704  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:56:16.327718  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:56:16.327793  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:56:16.327837  157008 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:56:16.327847  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:56:16.327868  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:56:16.327892  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:56:16.327912  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:56:16.327949  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:56:16.327974  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:56:16.328002  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:56:16.328019  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:16.328586  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:56:16.350654  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:56:16.372438  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:56:16.394606  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:56:16.417148  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:56:16.438859  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:56:16.460596  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:56:16.483188  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:56:16.505562  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:56:16.527152  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:56:16.549292  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:56:16.571712  157008 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:56:16.589014  157008 ssh_runner.go:195] Run: openssl version
	I0916 10:56:16.593976  157008 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:56:16.594042  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:56:16.602722  157008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:56:16.606139  157008 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:56:16.606168  157008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:56:16.606201  157008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:56:16.612378  157008 command_runner.go:130] > 3ec20f2e
	I0916 10:56:16.612615  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:56:16.621138  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:56:16.629549  157008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:16.632756  157008 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:16.632814  157008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:16.632860  157008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:16.638696  157008 command_runner.go:130] > b5213941
	I0916 10:56:16.638950  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:56:16.647440  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:56:16.655969  157008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:56:16.659028  157008 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:56:16.659046  157008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:56:16.659083  157008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:56:16.665289  157008 command_runner.go:130] > 51391683
	I0916 10:56:16.665377  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
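
The hash-and-symlink sequence above follows OpenSSL's c_rehash convention: each trusted PEM must be reachable as /etc/ssl/certs/<subject-hash>.0. The subject-hash algorithm is OpenSSL-internal, so this sketch shells out to the same `openssl x509 -hash` invocation the log shows (assumes openssl on PATH and write access to the certs directory):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // rehash links certsDir/<subject-hash>.0 at the given PEM file.
    func rehash(pemPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // replace any stale link, like `ln -fs`
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := rehash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }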
	I0916 10:56:16.674236  157008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:56:16.677262  157008 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:56:16.677331  157008 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:56:16.677369  157008 kubeadm.go:392] StartCluster: {Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:56:16.677439  157008 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:56:16.677479  157008 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:56:16.711086  157008 cri.go:89] found id: ""
	I0916 10:56:16.711145  157008 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:56:16.718546  157008 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0916 10:56:16.718576  157008 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0916 10:56:16.718586  157008 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0916 10:56:16.719225  157008 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 10:56:16.726999  157008 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 10:56:16.727062  157008 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 10:56:16.735027  157008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0916 10:56:16.735052  157008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0916 10:56:16.735059  157008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0916 10:56:16.735068  157008 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:56:16.735105  157008 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 10:56:16.735117  157008 kubeadm.go:157] found existing configuration files:
	
	I0916 10:56:16.735155  157008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 10:56:16.742888  157008 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:56:16.742944  157008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 10:56:16.742983  157008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 10:56:16.750593  157008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 10:56:16.758166  157008 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:56:16.758208  157008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 10:56:16.758254  157008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 10:56:16.765870  157008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 10:56:16.773621  157008 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:56:16.773665  157008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 10:56:16.773704  157008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 10:56:16.781238  157008 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 10:56:16.788983  157008 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:56:16.789036  157008 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 10:56:16.789085  157008 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
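
The four grep/rm cycles above implement a stale-config sweep: each kubeconfig under /etc/kubernetes survives only if it already points at the expected control-plane endpoint; otherwise it is deleted so kubeadm regenerates it. (On this first start none of the files exist, so all four greps fail with status 2.) A compact sketch of the same logic:

    package main

    import (
    	"bytes"
    	"os"
    	"path/filepath"
    )

    // sweep removes any kubeconfig that is missing the expected endpoint.
    func sweep(dir, endpoint string) {
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		p := filepath.Join(dir, f)
    		data, err := os.ReadFile(p)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			os.Remove(p) // missing or pointing elsewhere: let kubeadm rewrite it
    		}
    	}
    }

    func main() {
    	sweep("/etc/kubernetes", "https://control-plane.minikube.internal:8443")
    }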
	I0916 10:56:16.796590  157008 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 10:56:16.831101  157008 kubeadm.go:310] W0916 10:56:16.830474    1135 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:56:16.831136  157008 command_runner.go:130] ! W0916 10:56:16.830474    1135 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:56:16.831532  157008 kubeadm.go:310] W0916 10:56:16.831041    1135 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:56:16.831558  157008 command_runner.go:130] ! W0916 10:56:16.831041    1135 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 10:56:16.848622  157008 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:56:16.848665  157008 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:56:16.900277  157008 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:56:16.900308  157008 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:56:26.545599  157008 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 10:56:26.545632  157008 command_runner.go:130] > [init] Using Kubernetes version: v1.31.1
	I0916 10:56:26.545678  157008 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 10:56:26.545707  157008 command_runner.go:130] > [preflight] Running pre-flight checks
	I0916 10:56:26.545831  157008 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:56:26.545841  157008 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:56:26.545886  157008 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:56:26.545894  157008 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:56:26.545922  157008 kubeadm.go:310] OS: Linux
	I0916 10:56:26.545928  157008 command_runner.go:130] > OS: Linux
	I0916 10:56:26.545979  157008 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 10:56:26.545989  157008 command_runner.go:130] > CGROUPS_CPU: enabled
	I0916 10:56:26.546046  157008 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 10:56:26.546057  157008 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0916 10:56:26.546121  157008 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 10:56:26.546132  157008 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0916 10:56:26.546226  157008 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 10:56:26.546246  157008 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0916 10:56:26.546317  157008 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 10:56:26.546329  157008 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0916 10:56:26.546409  157008 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 10:56:26.546426  157008 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0916 10:56:26.546489  157008 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 10:56:26.546498  157008 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0916 10:56:26.546584  157008 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 10:56:26.546597  157008 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0916 10:56:26.546662  157008 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 10:56:26.546669  157008 command_runner.go:130] > CGROUPS_BLKIO: enabled
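
The CGROUPS_* table comes from kubeadm's SystemVerification preflight. On a cgroup-v1 host like this one, roughly the same view can be read from /proc/cgroups (columns: subsys_name, hierarchy, num_cgroups, enabled); a rough sketch:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/proc/cgroups")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := sc.Text()
    		if strings.HasPrefix(line, "#") {
    			continue // header row
    		}
    		fields := strings.Fields(line)
    		if len(fields) == 4 {
    			state := "disabled"
    			if fields[3] == "1" {
    				state = "enabled"
    			}
    			fmt.Printf("CGROUPS_%s: %s\n", strings.ToUpper(fields[0]), state)
    		}
    	}
    }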
	I0916 10:56:26.546734  157008 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:56:26.546741  157008 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 10:56:26.546818  157008 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:56:26.546825  157008 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 10:56:26.546941  157008 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:56:26.546954  157008 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 10:56:26.547015  157008 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:56:26.547099  157008 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 10:56:26.548928  157008 out.go:235]   - Generating certificates and keys ...
	I0916 10:56:26.549008  157008 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0916 10:56:26.549017  157008 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 10:56:26.549086  157008 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0916 10:56:26.549094  157008 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 10:56:26.549175  157008 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:56:26.549183  157008 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 10:56:26.549258  157008 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:56:26.549267  157008 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 10:56:26.549352  157008 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0916 10:56:26.549359  157008 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 10:56:26.549404  157008 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0916 10:56:26.549410  157008 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 10:56:26.549453  157008 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0916 10:56:26.549459  157008 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 10:56:26.549571  157008 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-079070] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:56:26.549581  157008 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-079070] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:56:26.549642  157008 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0916 10:56:26.549649  157008 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 10:56:26.549807  157008 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-079070] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:56:26.549826  157008 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-079070] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0916 10:56:26.549911  157008 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:56:26.549918  157008 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 10:56:26.549970  157008 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:56:26.549975  157008 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 10:56:26.550017  157008 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0916 10:56:26.550023  157008 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 10:56:26.550068  157008 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:56:26.550074  157008 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 10:56:26.550119  157008 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:56:26.550122  157008 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 10:56:26.550168  157008 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:56:26.550171  157008 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 10:56:26.550215  157008 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:56:26.550221  157008 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 10:56:26.550311  157008 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:56:26.550320  157008 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 10:56:26.550364  157008 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:56:26.550374  157008 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 10:56:26.550479  157008 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:56:26.550485  157008 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 10:56:26.550558  157008 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:56:26.550567  157008 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 10:56:26.552267  157008 out.go:235]   - Booting up control plane ...
	I0916 10:56:26.552374  157008 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:56:26.552390  157008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 10:56:26.552473  157008 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:56:26.552480  157008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 10:56:26.552534  157008 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:56:26.552541  157008 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 10:56:26.552641  157008 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:56:26.552658  157008 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:56:26.552750  157008 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:56:26.552766  157008 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:56:26.552809  157008 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0916 10:56:26.552816  157008 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 10:56:26.552963  157008 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:56:26.552970  157008 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 10:56:26.553096  157008 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:56:26.553104  157008 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:56:26.553153  157008 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.715009ms
	I0916 10:56:26.553159  157008 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.715009ms
	I0916 10:56:26.553219  157008 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:56:26.553225  157008 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 10:56:26.553271  157008 command_runner.go:130] > [api-check] The API server is healthy after 4.50164473s
	I0916 10:56:26.553277  157008 kubeadm.go:310] [api-check] The API server is healthy after 4.50164473s
	I0916 10:56:26.553371  157008 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:56:26.553377  157008 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 10:56:26.553499  157008 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:56:26.553506  157008 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 10:56:26.553569  157008 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:56:26.553578  157008 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 10:56:26.553761  157008 command_runner.go:130] > [mark-control-plane] Marking the node multinode-079070 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:56:26.553771  157008 kubeadm.go:310] [mark-control-plane] Marking the node multinode-079070 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 10:56:26.553845  157008 command_runner.go:130] > [bootstrap-token] Using token: rkgcgy.5qjb792nhey505s7
	I0916 10:56:26.553856  157008 kubeadm.go:310] [bootstrap-token] Using token: rkgcgy.5qjb792nhey505s7
	I0916 10:56:26.555547  157008 out.go:235]   - Configuring RBAC rules ...
	I0916 10:56:26.555682  157008 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:56:26.555693  157008 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 10:56:26.555826  157008 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:56:26.555837  157008 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 10:56:26.555970  157008 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:56:26.555980  157008 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 10:56:26.556143  157008 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:56:26.556155  157008 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 10:56:26.556249  157008 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:56:26.556256  157008 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 10:56:26.556349  157008 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:56:26.556364  157008 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 10:56:26.556457  157008 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:56:26.556463  157008 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 10:56:26.556500  157008 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0916 10:56:26.556505  157008 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 10:56:26.556543  157008 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0916 10:56:26.556549  157008 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 10:56:26.556553  157008 kubeadm.go:310] 
	I0916 10:56:26.556644  157008 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0916 10:56:26.556654  157008 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 10:56:26.556658  157008 kubeadm.go:310] 
	I0916 10:56:26.556727  157008 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0916 10:56:26.556732  157008 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 10:56:26.556740  157008 kubeadm.go:310] 
	I0916 10:56:26.556766  157008 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0916 10:56:26.556772  157008 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 10:56:26.556831  157008 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:56:26.556840  157008 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 10:56:26.556882  157008 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:56:26.556887  157008 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 10:56:26.556892  157008 kubeadm.go:310] 
	I0916 10:56:26.556938  157008 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0916 10:56:26.556947  157008 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 10:56:26.556951  157008 kubeadm.go:310] 
	I0916 10:56:26.557000  157008 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:56:26.557010  157008 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 10:56:26.557014  157008 kubeadm.go:310] 
	I0916 10:56:26.557073  157008 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0916 10:56:26.557080  157008 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 10:56:26.557141  157008 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:56:26.557149  157008 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 10:56:26.557255  157008 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:56:26.557267  157008 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 10:56:26.557273  157008 kubeadm.go:310] 
	I0916 10:56:26.557382  157008 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:56:26.557389  157008 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 10:56:26.557458  157008 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0916 10:56:26.557466  157008 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 10:56:26.557470  157008 kubeadm.go:310] 
	I0916 10:56:26.557541  157008 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token rkgcgy.5qjb792nhey505s7 \
	I0916 10:56:26.557547  157008 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rkgcgy.5qjb792nhey505s7 \
	I0916 10:56:26.557653  157008 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 10:56:26.557667  157008 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 10:56:26.557707  157008 command_runner.go:130] > 	--control-plane 
	I0916 10:56:26.557716  157008 kubeadm.go:310] 	--control-plane 
	I0916 10:56:26.557726  157008 kubeadm.go:310] 
	I0916 10:56:26.557846  157008 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:56:26.557855  157008 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 10:56:26.557862  157008 kubeadm.go:310] 
	I0916 10:56:26.557990  157008 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token rkgcgy.5qjb792nhey505s7 \
	I0916 10:56:26.557998  157008 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rkgcgy.5qjb792nhey505s7 \
	I0916 10:56:26.558130  157008 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 10:56:26.558159  157008 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
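
The --discovery-token-ca-cert-hash in both join commands is "sha256:" followed by the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A sketch that recomputes it from the CA file at this run's on-node path:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }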
	I0916 10:56:26.558167  157008 cni.go:84] Creating CNI manager for ""
	I0916 10:56:26.558177  157008 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0916 10:56:26.560007  157008 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 10:56:26.561514  157008 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 10:56:26.565197  157008 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0916 10:56:26.565217  157008 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0916 10:56:26.565223  157008 command_runner.go:130] > Device: 35h/53d	Inode: 538361      Links: 1
	I0916 10:56:26.565230  157008 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:56:26.565236  157008 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0916 10:56:26.565241  157008 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0916 10:56:26.565246  157008 command_runner.go:130] > Change: 2024-09-16 10:23:17.639492271 +0000
	I0916 10:56:26.565252  157008 command_runner.go:130] >  Birth: 2024-09-16 10:23:17.615490154 +0000
	I0916 10:56:26.565328  157008 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 10:56:26.565341  157008 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 10:56:26.581928  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 10:56:26.749921  157008 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0916 10:56:26.755312  157008 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0916 10:56:26.761273  157008 command_runner.go:130] > serviceaccount/kindnet created
	I0916 10:56:26.770548  157008 command_runner.go:130] > daemonset.apps/kindnet created
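
The CNI step is a plain `kubectl apply` of the kindnet manifest minikube copied to /var/tmp/minikube/cni.yaml, authenticated via the node-local kubeconfig. A sketch of the same invocation from Go (paths are this run's; it must run on the node):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("sudo",
    		"/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
    		"--kubeconfig=/var/lib/minikube/kubeconfig",
    		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }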
	I0916 10:56:26.773772  157008 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 10:56:26.773832  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:26.773844  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-079070 minikube.k8s.io/updated_at=2024_09_16T10_56_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=multinode-079070 minikube.k8s.io/primary=true
	I0916 10:56:26.781089  157008 command_runner.go:130] > -16
	I0916 10:56:26.781169  157008 ops.go:34] apiserver oom_adj: -16
	I0916 10:56:26.851211  157008 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
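
The oom_adj probe above confirms the kubelet set the apiserver's OOM adjustment to -16, meaning the kernel should strongly avoid OOM-killing it (/proc/<pid>/oom_adj is the legacy alias of oom_score_adj). A sketch of the same check, assuming pgrep on PATH and a single apiserver process:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	adj, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
    }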
	I0916 10:56:26.855705  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:26.863415  157008 command_runner.go:130] > node/multinode-079070 labeled
	I0916 10:56:26.932966  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:27.356650  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:27.418775  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:27.856414  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:27.921088  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:28.356742  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:28.421191  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:28.856593  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:28.920587  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:29.355888  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:29.418112  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:29.856494  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:29.921216  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:30.355824  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:30.420433  157008 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0916 10:56:30.855935  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 10:56:30.917980  157008 command_runner.go:130] > NAME      SECRETS   AGE
	I0916 10:56:30.918001  157008 command_runner.go:130] > default   0         0s
	I0916 10:56:30.920595  157008 kubeadm.go:1113] duration metric: took 4.146831321s to wait for elevateKubeSystemPrivileges
	I0916 10:56:30.920623  157008 kubeadm.go:394] duration metric: took 14.243257616s to StartCluster
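
The run of `serviceaccounts "default" not found` errors above is a poll loop: minikube retries `kubectl get sa default` about every 500ms until the token controller creates the account (here it took ~4.1s). A generic sketch of that wait, with a hypothetical one-minute timeout:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", kubectl, "get", "sa", "default",
    			"--kubeconfig="+kubeconfig).Run()
    		if err == nil {
    			return nil // service account exists
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
    	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.31.1/kubectl",
    		"/var/lib/minikube/kubeconfig", time.Minute)
    	if err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Println("default service account is ready")
    	}
    }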
	I0916 10:56:30.920648  157008 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:30.920708  157008 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:56:30.921341  157008 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:30.921560  157008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 10:56:30.921569  157008 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:56:30.921632  157008 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:56:30.921715  157008 addons.go:69] Setting storage-provisioner=true in profile "multinode-079070"
	I0916 10:56:30.921732  157008 addons.go:234] Setting addon storage-provisioner=true in "multinode-079070"
	I0916 10:56:30.921749  157008 addons.go:69] Setting default-storageclass=true in profile "multinode-079070"
	I0916 10:56:30.921768  157008 host.go:66] Checking if "multinode-079070" exists ...
	I0916 10:56:30.921781  157008 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-079070"
	I0916 10:56:30.921817  157008 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:56:30.922110  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:56:30.922249  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:56:30.923572  157008 out.go:177] * Verifying Kubernetes components...
	I0916 10:56:30.925109  157008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:30.944133  157008 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:56:30.944364  157008 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 10:56:30.944358  157008 kapi.go:59] client config for multinode-079070: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:56:30.944941  157008 addons.go:234] Setting addon default-storageclass=true in "multinode-079070"
	I0916 10:56:30.944971  157008 host.go:66] Checking if "multinode-079070" exists ...
	I0916 10:56:30.945310  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:56:30.945511  157008 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 10:56:30.946317  157008 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:56:30.946339  157008 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 10:56:30.946394  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:30.973774  157008 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 10:56:30.973800  157008 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 10:56:30.973860  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:30.980682  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:56:30.997301  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
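
The two "scp memory" lines above do not copy files from disk: minikube streams the manifest bytes it holds in memory straight to the node over the SSH tunnel on 127.0.0.1:32908, then applies them with the in-VM kubectl. A rough sketch of that pattern with golang.org/x/crypto/ssh (the tee-based write, the paths, and the connection details are assumptions for illustration, not minikube's sshutil implementation):

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile(os.Getenv("HOME") + "/.minikube/machines/multinode-079070/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32908", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // In-memory manifest bytes, standing in for minikube's embedded asset.
        manifest := []byte("apiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: storage-provisioner\n")
        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()
        session.Stdin = bytes.NewReader(manifest)
        // Write stdin to the target path on the node; sudo because /etc/kubernetes is root-owned.
        if err := session.Run(fmt.Sprintf("sudo tee %s >/dev/null", "/etc/kubernetes/addons/storage-provisioner.yaml")); err != nil {
            log.Fatal(err)
        }
    }
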
	I0916 10:56:31.040185  157008 command_runner.go:130] > apiVersion: v1
	I0916 10:56:31.040213  157008 command_runner.go:130] > data:
	I0916 10:56:31.040219  157008 command_runner.go:130] >   Corefile: |
	I0916 10:56:31.040224  157008 command_runner.go:130] >     .:53 {
	I0916 10:56:31.040230  157008 command_runner.go:130] >         errors
	I0916 10:56:31.040235  157008 command_runner.go:130] >         health {
	I0916 10:56:31.040242  157008 command_runner.go:130] >            lameduck 5s
	I0916 10:56:31.040249  157008 command_runner.go:130] >         }
	I0916 10:56:31.040254  157008 command_runner.go:130] >         ready
	I0916 10:56:31.040264  157008 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0916 10:56:31.040275  157008 command_runner.go:130] >            pods insecure
	I0916 10:56:31.040286  157008 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0916 10:56:31.040293  157008 command_runner.go:130] >            ttl 30
	I0916 10:56:31.040298  157008 command_runner.go:130] >         }
	I0916 10:56:31.040305  157008 command_runner.go:130] >         prometheus :9153
	I0916 10:56:31.040322  157008 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0916 10:56:31.040334  157008 command_runner.go:130] >            max_concurrent 1000
	I0916 10:56:31.040339  157008 command_runner.go:130] >         }
	I0916 10:56:31.040345  157008 command_runner.go:130] >         cache 30
	I0916 10:56:31.040351  157008 command_runner.go:130] >         loop
	I0916 10:56:31.040358  157008 command_runner.go:130] >         reload
	I0916 10:56:31.040367  157008 command_runner.go:130] >         loadbalance
	I0916 10:56:31.040375  157008 command_runner.go:130] >     }
	I0916 10:56:31.040383  157008 command_runner.go:130] > kind: ConfigMap
	I0916 10:56:31.040392  157008 command_runner.go:130] > metadata:
	I0916 10:56:31.040404  157008 command_runner.go:130] >   creationTimestamp: "2024-09-16T10:56:25Z"
	I0916 10:56:31.040414  157008 command_runner.go:130] >   name: coredns
	I0916 10:56:31.040422  157008 command_runner.go:130] >   namespace: kube-system
	I0916 10:56:31.040431  157008 command_runner.go:130] >   resourceVersion: "230"
	I0916 10:56:31.040442  157008 command_runner.go:130] >   uid: 61333853-db84-4ece-9b85-fe8b8c445fe7
	I0916 10:56:31.044393  157008 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 10:56:31.122783  157008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:56:31.321465  157008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 10:56:31.321529  157008 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 10:56:31.552916  157008 command_runner.go:130] > configmap/coredns replaced
	I0916 10:56:31.552954  157008 start.go:971] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
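
The sed pipeline at 10:56:31.044393 edits the Corefile fetched above in place: it inserts a hosts block that resolves host.minikube.internal to the network gateway (192.168.67.1) just before the forward plugin, adds a log directive before errors, and replaces the ConfigMap, which is what this "host record injected" line confirms. A minimal client-go sketch of the same edit (the kubeconfig path and the string surgery are assumptions that mirror the sed expressions, not minikube's exact code path):

    package main

    import (
        "context"
        "log"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx := context.Background()
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        hosts := "        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }\n"
        corefile := cm.Data["Corefile"]
        if !strings.Contains(corefile, "host.minikube.internal") {
            // Insert the hosts block immediately before the forward plugin,
            // mirroring the sed `/forward/i` expression in the log.
            cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
            if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
                log.Fatal(err)
            }
        }
    }
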
	I0916 10:56:31.553368  157008 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:56:31.553390  157008 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:56:31.553581  157008 kapi.go:59] client config for multinode-079070: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:56:31.553718  157008 kapi.go:59] client config for multinode-079070: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:56:31.553807  157008 node_ready.go:35] waiting up to 6m0s for node "multinode-079070" to be "Ready" ...
	I0916 10:56:31.553888  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:31.553896  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:31.553903  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:31.553907  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:31.554077  157008 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 10:56:31.554091  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:31.554102  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:31.554114  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:31.627866  157008 round_trippers.go:574] Response Status: 200 OK in 73 milliseconds
	I0916 10:56:31.627956  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:31.627982  157008 round_trippers.go:580]     Audit-Id: 51185b8b-8f43-4861-b97d-1b9d042a2f64
	I0916 10:56:31.627991  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:31.627997  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:31.628001  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:31.628006  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:31.628009  157008 round_trippers.go:580]     Content-Length: 291
	I0916 10:56:31.628042  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:31 GMT
	I0916 10:56:31.628069  157008 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5cb441ca-082b-4eb4-a4cd-53c25782759a","resourceVersion":"344","creationTimestamp":"2024-09-16T10:56:25Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 10:56:31.628270  157008 round_trippers.go:574] Response Status: 200 OK in 74 milliseconds
	I0916 10:56:31.628290  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:31.628300  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:31.628304  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:31.628311  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:31 GMT
	I0916 10:56:31.628315  157008 round_trippers.go:580]     Audit-Id: 0e11a036-6be7-479e-b7e9-2de2a6190cd8
	I0916 10:56:31.628319  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:31.628323  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:31.628511  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:31.628598  157008 request.go:1351] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5cb441ca-082b-4eb4-a4cd-53c25782759a","resourceVersion":"344","creationTimestamp":"2024-09-16T10:56:25Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 10:56:31.628662  157008 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 10:56:31.628670  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:31.628680  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:31.628686  157008 round_trippers.go:473]     Content-Type: application/json
	I0916 10:56:31.628692  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:31.629346  157008 node_ready.go:49] node "multinode-079070" has status "Ready":"True"
	I0916 10:56:31.629363  157008 node_ready.go:38] duration metric: took 75.539349ms for node "multinode-079070" to be "Ready" ...
	I0916 10:56:31.629373  157008 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:56:31.629420  157008 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:56:31.629433  157008 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:56:31.629491  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:31.629497  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:31.629508  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:31.629514  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:31.634812  157008 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0916 10:56:31.634836  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:31.634845  157008 round_trippers.go:580]     Audit-Id: 446936f7-c9fa-4538-a97e-d9b0b4dbffec
	I0916 10:56:31.634851  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:31.634856  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:31.634860  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:31.634865  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:31.634871  157008 round_trippers.go:580]     Content-Length: 291
	I0916 10:56:31.634877  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:31 GMT
	I0916 10:56:31.634904  157008 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5cb441ca-082b-4eb4-a4cd-53c25782759a","resourceVersion":"347","creationTimestamp":"2024-09-16T10:56:25Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0916 10:56:31.635415  157008 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:56:31.635433  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:31.635442  157008 round_trippers.go:580]     Audit-Id: 43853a5b-f579-401f-b5c9-2ee3f298e9cb
	I0916 10:56:31.635446  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:31.635451  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:31.635456  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:31.635460  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:31.635464  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:31 GMT
	I0916 10:56:31.636786  157008 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"346"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 61465 chars]
	I0916 10:56:31.641995  157008 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
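
From here the log settles into a poll loop: roughly every 500ms it GETs the coredns pod and then the node, checking each object's Ready condition until both report true or the 6m0s budget runs out. A condensed sketch of that readiness check with client-go (the helper and the kubeconfig path are assumptions; the interval and timeout are taken from the log):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        // Poll every 500ms for up to 6 minutes, matching the cadence in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-ft9gh", metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                return podReady(pod), nil
            })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pod is Ready")
    }
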
	I0916 10:56:31.642157  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:31.642183  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:31.642203  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:31.642217  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:31.644661  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:31.644680  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:31.644688  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:31.644693  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:31.644700  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:31 GMT
	I0916 10:56:31.644704  157008 round_trippers.go:580]     Audit-Id: 2623bbb1-2d9a-4028-aa38-e7debef1e200
	I0916 10:56:31.644708  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:31.644712  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:31.644851  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:31.645455  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:31.645476  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:31.645488  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:31.645496  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:31.648810  157008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:56:31.648826  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:31.648833  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:31.648837  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:31.648841  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:31 GMT
	I0916 10:56:31.648846  157008 round_trippers.go:580]     Audit-Id: 660d80fa-17b4-4ed1-8159-38e17c40fa38
	I0916 10:56:31.648867  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:31.648877  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:31.649248  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:31.972548  157008 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0916 10:56:32.022566  157008 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0916 10:56:32.030400  157008 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0916 10:56:32.039038  157008 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0916 10:56:32.047162  157008 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0916 10:56:32.054416  157008 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0916 10:56:32.054444  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:32.054456  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:32.054461  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:32.056621  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:32.056648  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:32.056658  157008 round_trippers.go:580]     Content-Length: 291
	I0916 10:56:32.056664  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:32 GMT
	I0916 10:56:32.056669  157008 round_trippers.go:580]     Audit-Id: 98922aa7-f49e-4e44-adf3-12632ccf9cf5
	I0916 10:56:32.056675  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:32.056680  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:32.056692  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:32.056704  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:32.056734  157008 request.go:1351] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"5cb441ca-082b-4eb4-a4cd-53c25782759a","resourceVersion":"357","creationTimestamp":"2024-09-16T10:56:25Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0916 10:56:32.056744  157008 command_runner.go:130] > pod/storage-provisioner created
	I0916 10:56:32.056849  157008 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-079070" context rescaled to 1 replicas
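
The GET/PUT pair on .../deployments/coredns/scale above is how minikube trims the default two CoreDNS replicas down to one: it reads the autoscaling/v1 Scale subresource, sets spec.replicas to 1, and writes it back. The same round trip in client-go (a minimal sketch, assuming a kubeconfig at the default path):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx := context.Background()
        // GET .../deployments/coredns/scale, as in the log.
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        if scale.Spec.Replicas != 1 {
            scale.Spec.Replicas = 1
            // PUT the modified Scale back, as in the log.
            if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
                log.Fatal(err)
            }
        }
        fmt.Println("coredns rescaled to 1 replica")
    }
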
	I0916 10:56:32.062302  157008 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0916 10:56:32.062453  157008 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0916 10:56:32.062470  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:32.062481  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:32.062488  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:32.064682  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:32.064708  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:32.064717  157008 round_trippers.go:580]     Audit-Id: b3d35512-1e1a-450f-8dbb-a6eea9961197
	I0916 10:56:32.064723  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:32.064729  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:32.064734  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:32.064738  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:32.064742  157008 round_trippers.go:580]     Content-Length: 1273
	I0916 10:56:32.064746  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:32 GMT
	I0916 10:56:32.064780  157008 request.go:1351] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"370"},"items":[{"metadata":{"name":"standard","uid":"5f2cf213-a251-482f-97a5-f1e644f2e8ce","resourceVersion":"349","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0916 10:56:32.065262  157008 request.go:1351] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5f2cf213-a251-482f-97a5-f1e644f2e8ce","resourceVersion":"349","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:56:32.065340  157008 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0916 10:56:32.065355  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:32.065365  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:32.065373  157008 round_trippers.go:473]     Content-Type: application/json
	I0916 10:56:32.065379  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:32.129019  157008 round_trippers.go:574] Response Status: 200 OK in 63 milliseconds
	I0916 10:56:32.129049  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:32.129059  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:32.129066  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:32.129070  157008 round_trippers.go:580]     Content-Length: 1220
	I0916 10:56:32.129074  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:32 GMT
	I0916 10:56:32.129079  157008 round_trippers.go:580]     Audit-Id: 1e443822-5287-4100-a3f6-8394ffb54563
	I0916 10:56:32.129083  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:32.129112  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:32.129393  157008 request.go:1351] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"5f2cf213-a251-482f-97a5-f1e644f2e8ce","resourceVersion":"349","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0916 10:56:32.131114  157008 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 10:56:32.132342  157008 addons.go:510] duration metric: took 1.210706788s for enable addons: enabled=[storage-provisioner default-storageclass]
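
The PUT against /apis/storage.k8s.io/v1/storageclasses/standard just before this marks the freshly created class as the cluster default by keeping the storageclass.kubernetes.io/is-default-class annotation set to "true"; that annotation is what lets PVCs with no explicit storageClassName bind to it. A minimal sketch of the same annotation update (kubeconfig path assumed):

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ctx := context.Background()
        sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        // This annotation is what `kubectl get sc` reports as "(default)".
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
            log.Fatal(err)
        }
    }
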
	I0916 10:56:32.142636  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:32.142659  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:32.142671  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:32.142677  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:32.145184  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:32.145212  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:32.145222  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:32.145227  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:32.145232  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:32.145237  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:32 GMT
	I0916 10:56:32.145241  157008 round_trippers.go:580]     Audit-Id: 69eae447-8caa-43f1-af36-1a3c5c6e846f
	I0916 10:56:32.145244  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:32.150096  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:32.151140  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:32.151161  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:32.151171  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:32.151177  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:32.154690  157008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:56:32.154705  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:32.154712  157008 round_trippers.go:580]     Audit-Id: 0047e009-782b-4643-9062-94b8d551a82e
	I0916 10:56:32.154715  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:32.154718  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:32.154721  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:32.154723  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:32.154726  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:32 GMT
	I0916 10:56:32.154898  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:32.642474  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:32.642496  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:32.642504  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:32.642508  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:32.644888  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:32.644909  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:32.644915  157008 round_trippers.go:580]     Audit-Id: 5b94c7c5-19d4-4ee6-89fe-bcfdd621bfec
	I0916 10:56:32.644919  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:32.644923  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:32.644926  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:32.644938  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:32.644942  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:32 GMT
	I0916 10:56:32.645124  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:32.645665  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:32.645682  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:32.645690  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:32.645694  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:32.647448  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:32.647467  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:32.647475  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:32.647480  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:32.647483  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:32.647487  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:32.647490  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:32 GMT
	I0916 10:56:32.647494  157008 round_trippers.go:580]     Audit-Id: 6acd9f16-52f1-46a1-b390-f1942b0abdac
	I0916 10:56:32.647603  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:33.142223  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:33.142245  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:33.142253  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:33.142257  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:33.144492  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:33.144513  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:33.144520  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:33 GMT
	I0916 10:56:33.144523  157008 round_trippers.go:580]     Audit-Id: 8054e685-51c7-4899-8b50-34e1ee7c903b
	I0916 10:56:33.144526  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:33.144528  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:33.144531  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:33.144533  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:33.144741  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:33.145213  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:33.145232  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:33.145241  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:33.145246  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:33.147109  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:33.147127  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:33.147136  157008 round_trippers.go:580]     Audit-Id: 75b4fee9-8b18-4680-9c74-1c82385fa12a
	I0916 10:56:33.147139  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:33.147142  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:33.147145  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:33.147147  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:33.147150  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:33 GMT
	I0916 10:56:33.147263  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:33.642964  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:33.642989  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:33.643001  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:33.643007  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:33.645190  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:33.645210  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:33.645219  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:33 GMT
	I0916 10:56:33.645225  157008 round_trippers.go:580]     Audit-Id: 8fa16d23-90bd-43b6-a601-65b154b1d4fc
	I0916 10:56:33.645229  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:33.645233  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:33.645237  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:33.645242  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:33.645418  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:33.645886  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:33.645903  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:33.645913  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:33.645919  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:33.647644  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:33.647660  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:33.647666  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:33.647670  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:33 GMT
	I0916 10:56:33.647674  157008 round_trippers.go:580]     Audit-Id: e6537a1e-d810-40d7-8dbd-b88d44a28624
	I0916 10:56:33.647676  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:33.647680  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:33.647683  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:33.647842  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:33.648146  157008 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
	I0916 10:56:34.142496  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:34.142518  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:34.142531  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:34.142539  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:34.144972  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:34.144993  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:34.145003  157008 round_trippers.go:580]     Audit-Id: e7381edd-9b91-4222-8325-456b90d96f77
	I0916 10:56:34.145011  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:34.145016  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:34.145019  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:34.145022  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:34.145027  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:34 GMT
	I0916 10:56:34.145215  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:34.145723  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:34.145738  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:34.145745  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:34.145751  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:34.147626  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:34.147645  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:34.147653  157008 round_trippers.go:580]     Audit-Id: 336760eb-2369-4d0d-9c21-43da9df1c17f
	I0916 10:56:34.147659  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:34.147664  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:34.147668  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:34.147695  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:34.147704  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:34 GMT
	I0916 10:56:34.147819  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:34.642360  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:34.642382  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:34.642390  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:34.642394  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:34.644646  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:34.644663  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:34.644672  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:34.644679  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:34.644685  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:34.644690  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:34 GMT
	I0916 10:56:34.644694  157008 round_trippers.go:580]     Audit-Id: 8f5d2094-4df7-464a-af25-356f4aa2d209
	I0916 10:56:34.644698  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:34.644864  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:34.645352  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:34.645365  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:34.645372  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:34.645376  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:34.647030  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:34.647045  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:34.647051  157008 round_trippers.go:580]     Audit-Id: 357886cd-0a6b-4abd-88ef-45638f6b15e1
	I0916 10:56:34.647054  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:34.647058  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:34.647064  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:34.647068  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:34.647071  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:34 GMT
	I0916 10:56:34.647247  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:35.142882  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:35.142907  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:35.142915  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:35.142920  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:35.145165  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:35.145185  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:35.145195  157008 round_trippers.go:580]     Audit-Id: 407c2dc4-abd0-4a01-ae00-17f731397130
	I0916 10:56:35.145201  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:35.145205  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:35.145211  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:35.145217  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:35.145221  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:35 GMT
	I0916 10:56:35.145369  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:35.145805  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:35.145816  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:35.145824  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:35.145827  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:35.147496  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:35.147512  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:35.147517  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:35 GMT
	I0916 10:56:35.147598  157008 round_trippers.go:580]     Audit-Id: 9283b9fa-9fd1-4121-9154-d2c15f23c59a
	I0916 10:56:35.147614  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:35.147621  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:35.147626  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:35.147631  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:35.147730  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:35.642344  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:35.642366  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:35.642373  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:35.642377  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:35.644664  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:35.644690  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:35.644702  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:35.644707  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:35.644711  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:35.644717  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:35.644720  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:35 GMT
	I0916 10:56:35.644723  157008 round_trippers.go:580]     Audit-Id: 99f7bdb0-c491-48ae-b869-9bd95ea9e71b
	I0916 10:56:35.644833  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:35.645260  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:35.645273  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:35.645280  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:35.645284  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:35.647086  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:35.647101  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:35.647108  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:35.647112  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:35 GMT
	I0916 10:56:35.647114  157008 round_trippers.go:580]     Audit-Id: c211c660-502c-42a9-b4c9-b408e079465b
	I0916 10:56:35.647117  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:35.647120  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:35.647123  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:35.647256  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:36.142911  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:36.142934  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:36.142942  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:36.142946  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:36.145213  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:36.145236  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:36.145245  157008 round_trippers.go:580]     Audit-Id: 78fe43db-8f88-44e7-bb2c-21fb2b6cb58b
	I0916 10:56:36.145250  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:36.145255  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:36.145259  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:36.145264  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:36.145267  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:36 GMT
	I0916 10:56:36.145412  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:36.145900  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:36.145914  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:36.145921  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:36.145926  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:36.147657  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:36.147671  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:36.147678  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:36 GMT
	I0916 10:56:36.147682  157008 round_trippers.go:580]     Audit-Id: 6b3f4435-56cc-48e7-b362-c549e5237d88
	I0916 10:56:36.147685  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:36.147689  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:36.147691  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:36.147696  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:36.147857  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"312","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:36.148125  157008 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
	I0916 10:56:36.642550  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:36.642574  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:36.642582  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:36.642586  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:36.645022  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:36.645051  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:36.645061  157008 round_trippers.go:580]     Audit-Id: 799f67ff-f54f-4a15-8940-318d582a7b9f
	I0916 10:56:36.645067  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:36.645073  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:36.645079  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:36.645086  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:36.645090  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:36 GMT
	I0916 10:56:36.645316  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:36.645795  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:36.645814  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:36.645823  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:36.645830  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:36.647699  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:36.647718  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:36.647728  157008 round_trippers.go:580]     Audit-Id: 7c0f1549-8fbe-4365-bc32-3097f6a77717
	I0916 10:56:36.647746  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:36.647751  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:36.647755  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:36.647759  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:36.647763  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:36 GMT
	I0916 10:56:36.647899  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:37.142452  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:37.142479  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:37.142489  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:37.142495  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:37.144769  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:37.144793  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:37.144800  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:37.144805  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:37.144809  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:37.144813  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:37.144816  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:37 GMT
	I0916 10:56:37.144820  157008 round_trippers.go:580]     Audit-Id: 2c9cc07a-3013-472a-a752-fed7ab9e817c
	I0916 10:56:37.145020  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:37.145532  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:37.145550  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:37.145557  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:37.145561  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:37.147371  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:37.147386  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:37.147392  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:37.147396  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:37.147399  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:37.147403  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:37.147406  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:37 GMT
	I0916 10:56:37.147409  157008 round_trippers.go:580]     Audit-Id: 22f7bc0d-20f5-45e1-91b5-3da35c74ac71
	I0916 10:56:37.147537  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:37.643232  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:37.643257  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:37.643268  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:37.643274  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:37.645538  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:37.645561  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:37.645569  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:37 GMT
	I0916 10:56:37.645574  157008 round_trippers.go:580]     Audit-Id: 76a91815-41bd-485b-8b59-4264bdbeefb6
	I0916 10:56:37.645581  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:37.645586  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:37.645593  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:37.645599  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:37.645771  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:37.646349  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:37.646367  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:37.646377  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:37.646383  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:37.648034  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:37.648055  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:37.648062  157008 round_trippers.go:580]     Audit-Id: c56787a8-cc48-48bc-9191-497663001f45
	I0916 10:56:37.648065  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:37.648069  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:37.648073  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:37.648076  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:37.648079  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:37 GMT
	I0916 10:56:37.648217  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:38.142890  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:38.142912  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:38.142920  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:38.142924  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:38.145351  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:38.145378  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:38.145387  157008 round_trippers.go:580]     Audit-Id: dab850a8-08d5-47f3-b3ab-7d1db3e8aa1c
	I0916 10:56:38.145392  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:38.145395  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:38.145398  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:38.145402  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:38.145408  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:38 GMT
	I0916 10:56:38.145600  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:38.146040  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:38.146055  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:38.146067  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:38.146074  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:38.147918  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:38.147938  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:38.147945  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:38.147949  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:38 GMT
	I0916 10:56:38.147954  157008 round_trippers.go:580]     Audit-Id: 74a6db3a-0e4a-4eb4-877b-85a0b5569740
	I0916 10:56:38.147958  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:38.147961  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:38.147964  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:38.148138  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:38.148437  157008 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
	I0916 10:56:38.642732  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:38.642752  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:38.642759  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:38.642762  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:38.644965  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:38.644986  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:38.644995  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:38.645000  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:38.645006  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:38.645021  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:38.645028  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:38 GMT
	I0916 10:56:38.645032  157008 round_trippers.go:580]     Audit-Id: e4d71c8a-1bc4-48b3-be36-8396d0758057
	I0916 10:56:38.645234  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:38.645757  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:38.645773  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:38.645779  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:38.645782  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:38.647400  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:38.647420  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:38.647429  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:38.647435  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:38.647442  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:38 GMT
	I0916 10:56:38.647447  157008 round_trippers.go:580]     Audit-Id: 09569ea1-6bdb-44d1-8497-4e5e61e7bb6d
	I0916 10:56:38.647452  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:38.647457  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:38.647564  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:39.143252  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:39.143280  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:39.143291  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:39.143297  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:39.145422  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:39.145443  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:39.145454  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:39.145461  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:39.145467  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:39.145472  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:39.145478  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:39 GMT
	I0916 10:56:39.145485  157008 round_trippers.go:580]     Audit-Id: 442474b2-b52a-4ca2-9bb0-7e1fc453e12f
	I0916 10:56:39.145698  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:39.146154  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:39.146172  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:39.146179  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:39.146182  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:39.147860  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:39.147875  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:39.147881  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:39.147884  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:39 GMT
	I0916 10:56:39.147887  157008 round_trippers.go:580]     Audit-Id: 1fe38ef4-1582-478d-bb85-7e66944d8580
	I0916 10:56:39.147890  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:39.147893  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:39.147895  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:39.148018  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:39.642669  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:39.642689  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:39.642697  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:39.642703  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:39.645064  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:39.645090  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:39.645100  157008 round_trippers.go:580]     Audit-Id: e9897d13-7c61-4ae0-80a2-5fab644839e5
	I0916 10:56:39.645106  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:39.645110  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:39.645114  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:39.645118  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:39.645123  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:39 GMT
	I0916 10:56:39.645283  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:39.645720  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:39.645733  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:39.645740  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:39.645743  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:39.649244  157008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:56:39.649262  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:39.649271  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:39 GMT
	I0916 10:56:39.649275  157008 round_trippers.go:580]     Audit-Id: c97461af-803f-475a-8436-3b1c370b135e
	I0916 10:56:39.649280  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:39.649283  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:39.649285  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:39.649289  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:39.649417  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:40.143022  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:40.143043  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:40.143052  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:40.143056  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:40.145373  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:40.145390  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:40.145397  157008 round_trippers.go:580]     Audit-Id: 8bc3c74e-e536-4d48-b687-9246f0f84bd7
	I0916 10:56:40.145402  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:40.145406  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:40.145408  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:40.145411  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:40.145413  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:40 GMT
	I0916 10:56:40.145598  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:40.146167  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:40.146181  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:40.146189  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:40.146196  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:40.148098  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:40.148117  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:40.148126  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:40.148131  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:40.148136  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:40.148140  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:40.148146  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:40 GMT
	I0916 10:56:40.148151  157008 round_trippers.go:580]     Audit-Id: 5dc070cf-aa48-44c7-a6f6-17ded03df785
	I0916 10:56:40.148326  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:40.148611  157008 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
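
The loop visible above is minikube's pod readiness wait: roughly every 500ms (per the timestamps) the client GETs the coredns pod and its node from the apiserver and re-checks the pod's Ready condition, with pod_ready.go logging the result. For orientation, here is a minimal client-go sketch of the same poll. It is illustrative only, not minikube's actual pod_ready implementation; the namespace, pod name, and ~500ms cadence come from the log, while the kubeconfig path and the 6-minute timeout are assumptions.

// Minimal readiness-poll sketch (illustrative; not minikube's pod_ready code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config); minikube
	// writes its cluster credentials there.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Assumed overall deadline for the wait.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		// GET the pod, exactly as the round_trippers lines above show.
		pod, err := clientset.CoreV1().Pods("kube-system").
			Get(ctx, "coredns-7c65d6cfc9-ft9gh", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// A pod counts as Ready when its PodReady condition is True.
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				fmt.Println("pod is Ready")
				return
			}
		}
		fmt.Printf("pod %q has status \"Ready\":\"False\"; retrying\n", pod.Name)
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to become Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}

A plain sleep loop is used instead of k8s.io/apimachinery's wait helpers to keep the sketch self-contained; the resulting API traffic is the same GET-per-interval pattern recorded by the round_trippers lines in this log.
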
	I0916 10:56:40.642964  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:40.642984  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:40.642992  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:40.642997  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:40.645424  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:40.645448  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:40.645460  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:40.645466  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:40 GMT
	I0916 10:56:40.645483  157008 round_trippers.go:580]     Audit-Id: 45d0a64d-0dc0-4a32-94de-df1e680cd584
	I0916 10:56:40.645492  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:40.645497  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:40.645506  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:40.645720  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:40.646160  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:40.646174  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:40.646181  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:40.646186  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:40.647827  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:40.647847  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:40.647855  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:40.647863  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:40.647870  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:40.647874  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:40 GMT
	I0916 10:56:40.647877  157008 round_trippers.go:580]     Audit-Id: 7e478325-d236-4bb4-ba42-24d90136f6da
	I0916 10:56:40.647881  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:40.648015  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:41.142934  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:41.142959  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:41.142971  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:41.142977  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:41.145157  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:41.145175  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:41.145182  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:41.145185  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:41.145187  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:41.145190  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:41.145192  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:41 GMT
	I0916 10:56:41.145196  157008 round_trippers.go:580]     Audit-Id: 66ed0539-cbb2-4f81-8a3f-2a9e17642fd8
	I0916 10:56:41.145470  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:41.146032  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:41.146049  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:41.146059  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:41.146064  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:41.147871  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:41.147896  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:41.147906  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:41.147912  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:41 GMT
	I0916 10:56:41.147916  157008 round_trippers.go:580]     Audit-Id: bf8ea6ac-a4ca-454b-8c94-f58b5d38d2f5
	I0916 10:56:41.147919  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:41.147922  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:41.147925  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:41.148050  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:41.642699  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:41.642720  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:41.642729  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:41.642733  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:41.645098  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:41.645122  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:41.645130  157008 round_trippers.go:580]     Audit-Id: 880c64a5-729d-4843-b5c4-e4a615acade3
	I0916 10:56:41.645135  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:41.645139  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:41.645143  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:41.645147  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:41.645150  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:41 GMT
	I0916 10:56:41.645337  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:41.645971  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:41.645991  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:41.646002  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:41.646007  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:41.647892  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:41.647912  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:41.647919  157008 round_trippers.go:580]     Audit-Id: cee4b896-dc55-4db2-8e82-79941468f1b1
	I0916 10:56:41.647922  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:41.647926  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:41.647928  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:41.647932  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:41.647935  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:41 GMT
	I0916 10:56:41.648107  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:42.142866  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:42.142888  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:42.142897  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:42.142900  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:42.145254  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:42.145276  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:42.145284  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:42.145290  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:42.145294  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:42.145298  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:42 GMT
	I0916 10:56:42.145302  157008 round_trippers.go:580]     Audit-Id: c8e2a663-8e72-4c44-98ba-1985318a55fb
	I0916 10:56:42.145305  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:42.145426  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:42.145891  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:42.145905  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:42.145912  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:42.145916  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:42.147587  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:42.147609  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:42.147616  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:42.147620  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:42.147624  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:42.147626  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:42 GMT
	I0916 10:56:42.147629  157008 round_trippers.go:580]     Audit-Id: b3d62189-418c-4ad6-a27b-858a4c72209a
	I0916 10:56:42.147634  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:42.147925  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:42.642575  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:42.642600  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:42.642613  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:42.642618  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:42.644849  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:42.644867  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:42.644873  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:42.644877  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:42.644881  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:42.644883  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:42.644886  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:42 GMT
	I0916 10:56:42.644889  157008 round_trippers.go:580]     Audit-Id: 190c79eb-c977-443a-aa61-ca45d56ca3ac
	I0916 10:56:42.645072  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:42.645534  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:42.645548  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:42.645556  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:42.645560  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:42.647272  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:42.647293  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:42.647302  157008 round_trippers.go:580]     Audit-Id: 2b929a02-e11b-421a-841f-968f9fe1a429
	I0916 10:56:42.647313  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:42.647325  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:42.647330  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:42.647337  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:42.647343  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:42 GMT
	I0916 10:56:42.647424  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:42.647812  157008 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
	I0916 10:56:43.143080  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:43.143101  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:43.143112  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:43.143118  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:43.145287  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:43.145319  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:43.145329  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:43 GMT
	I0916 10:56:43.145334  157008 round_trippers.go:580]     Audit-Id: 67756c19-6cb9-412d-adc5-03e47fff2c5a
	I0916 10:56:43.145339  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:43.145343  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:43.145349  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:43.145359  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:43.145569  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:43.146006  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:43.146017  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:43.146024  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:43.146028  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:43.147834  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:43.147857  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:43.147867  157008 round_trippers.go:580]     Audit-Id: 395753cd-e690-4930-96f5-2daf875c1fd9
	I0916 10:56:43.147872  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:43.147876  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:43.147881  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:43.147885  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:43.147889  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:43 GMT
	I0916 10:56:43.147978  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:43.642597  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:43.642622  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:43.642632  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:43.642640  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:43.645073  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:43.645094  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:43.645099  157008 round_trippers.go:580]     Audit-Id: d08fecb1-a4f4-48d2-8780-b07e986801cc
	I0916 10:56:43.645102  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:43.645109  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:43.645114  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:43.645118  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:43.645123  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:43 GMT
	I0916 10:56:43.645364  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"345","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6363 chars]
	I0916 10:56:43.645847  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:43.645861  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:43.645868  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:43.645871  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:43.647621  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:43.647644  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:43.647653  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:43 GMT
	I0916 10:56:43.647656  157008 round_trippers.go:580]     Audit-Id: 8e6f6780-f356-4f84-afc4-d4512f3a49d7
	I0916 10:56:43.647660  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:43.647664  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:43.647667  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:43.647671  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:43.647822  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:44.142464  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:44.142494  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.142502  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.142505  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.144755  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:44.144777  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.144783  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.144788  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.144792  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.144795  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.144799  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.144802  157008 round_trippers.go:580]     Audit-Id: f77168c7-e6df-4b08-989c-911b1e9cda12
	I0916 10:56:44.144905  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"411","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6480 chars]
	I0916 10:56:44.145462  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.145482  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.145493  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.145498  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.147326  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.147347  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.147356  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.147362  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.147365  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.147367  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.147371  157008 round_trippers.go:580]     Audit-Id: d51edc01-e188-4104-8c51-823ed1e940ef
	I0916 10:56:44.147373  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.147525  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:44.147882  157008 pod_ready.go:93] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:44.147900  157008 pod_ready.go:82] duration metric: took 12.505831153s for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.147910  157008 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ql4g8" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.147979  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ql4g8
	I0916 10:56:44.147988  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.147999  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.148007  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.149787  157008 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0916 10:56:44.149803  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.149809  157008 round_trippers.go:580]     Audit-Id: 60bef53e-3bf4-4412-a111-cdaea4798b44
	I0916 10:56:44.149813  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.149817  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.149821  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.149832  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.149836  157008 round_trippers.go:580]     Content-Length: 216
	I0916 10:56:44.149843  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.149870  157008 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-7c65d6cfc9-ql4g8\" not found","reason":"NotFound","details":{"name":"coredns-7c65d6cfc9-ql4g8","kind":"pods"},"code":404}
	I0916 10:56:44.150037  157008 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-ql4g8" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-ql4g8" not found
	I0916 10:56:44.150054  157008 pod_ready.go:82] duration metric: took 2.137296ms for pod "coredns-7c65d6cfc9-ql4g8" in "kube-system" namespace to be "Ready" ...
	E0916 10:56:44.150063  157008 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-ql4g8" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-ql4g8" not found
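[Annotation] Here the waiter hits a 404: a pod it had queued for checking no longer exists, likely because the extra CoreDNS replica was removed after startup. pod_ready.go logs the NotFound and skips the pod rather than failing the whole wait. Extending the sketch above, client-go callers typically make that distinction with apierrors.IsNotFound; checkPod is again an illustrative name, not minikube's:

package podwait

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkPod separates "pod was deleted, skip it" from real API failures,
// mirroring the pod_ready.go:98 "(skipping!)" branch logged above.
func checkPod(ctx context.Context, cs kubernetes.Interface, ns, name string) (skip bool, err error) {
	_, err = cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil // 404: pod is gone, nothing left to wait for
	}
	return false, err // nil on success, or a genuine error to surface
}
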
	I0916 10:56:44.150070  157008 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.150125  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-079070
	I0916 10:56:44.150136  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.150147  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.150159  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.152022  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.152043  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.152051  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.152057  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.152063  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.152069  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.152075  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.152079  157008 round_trippers.go:580]     Audit-Id: 7818f6dd-4220-4c59-b1e7-1c05c7e61fd6
	I0916 10:56:44.152224  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-079070","namespace":"kube-system","uid":"fdcbee48-0d3b-47d7-acd2-298979345c7a","resourceVersion":"400","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.mirror":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.seen":"2024-09-16T10:56:25.844758522Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6440 chars]
	I0916 10:56:44.152628  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.152642  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.152649  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.152653  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.154262  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.154279  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.154285  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.154289  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.154292  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.154295  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.154298  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.154300  157008 round_trippers.go:580]     Audit-Id: 6f8c9198-ce5a-43ce-954a-2d68287215ba
	I0916 10:56:44.154411  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:44.154691  157008 pod_ready.go:93] pod "etcd-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:44.154708  157008 pod_ready.go:82] duration metric: took 4.632679ms for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.154721  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.154775  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:56:44.154782  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.154789  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.154793  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.156659  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.156676  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.156682  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.156686  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.156689  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.156693  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.156696  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.156699  157008 round_trippers.go:580]     Audit-Id: adfef526-ccfc-44c2-a102-dd2e2a752f99
	I0916 10:56:44.156899  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"397","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8518 chars]
	I0916 10:56:44.157356  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.157372  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.157381  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.157390  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.158999  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.159012  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.159018  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.159021  157008 round_trippers.go:580]     Audit-Id: 14d97088-b923-40d0-84d0-e1cdb103c15b
	I0916 10:56:44.159024  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.159027  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.159030  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.159033  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.159125  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:44.159412  157008 pod_ready.go:93] pod "kube-apiserver-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:44.159435  157008 pod_ready.go:82] duration metric: took 4.699671ms for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.159444  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.159496  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-079070
	I0916 10:56:44.159503  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.159510  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.159514  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.161267  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.161286  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.161295  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.161300  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.161304  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.161308  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.161312  157008 round_trippers.go:580]     Audit-Id: 826f35b8-e90b-4694-906b-458f3b78a215
	I0916 10:56:44.161317  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.161423  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-079070","namespace":"kube-system","uid":"44a2f17c-654a-4d64-b474-70ce475c3afe","resourceVersion":"403","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.mirror":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.seen":"2024-09-16T10:56:25.844752735Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8093 chars]
	I0916 10:56:44.161845  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.161865  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.161874  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.161880  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.163467  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.163490  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.163499  157008 round_trippers.go:580]     Audit-Id: 480dda20-4693-4a50-9b5a-922519db13af
	I0916 10:56:44.163505  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.163509  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.163514  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.163518  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.163526  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.163618  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:44.163956  157008 pod_ready.go:93] pod "kube-controller-manager-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:44.163974  157008 pod_ready.go:82] duration metric: took 4.523834ms for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.163985  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.164041  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vhmt
	I0916 10:56:44.164048  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.164056  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.164061  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.165708  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:44.165729  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.165738  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.165744  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.165750  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.165755  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.165759  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.165764  157008 round_trippers.go:580]     Audit-Id: 45666010-4ea2-45ba-9eec-334bf9a42b9d
	I0916 10:56:44.165901  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2vhmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"6f3faf85-04e9-4840-855d-dd1ef9d4e463","resourceVersion":"383","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6175 chars]
	I0916 10:56:44.342829  157008 request.go:632] Waited for 176.336266ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.342901  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.342909  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.342919  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.342923  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.345243  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:44.345266  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.345275  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.345280  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.345296  157008 round_trippers.go:580]     Audit-Id: cc4348c5-82ce-4efa-867e-db0d26ed964a
	I0916 10:56:44.345302  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.345307  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.345314  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.345417  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:44.345908  157008 pod_ready.go:93] pod "kube-proxy-2vhmt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:44.345936  157008 pod_ready.go:82] duration metric: took 181.942972ms for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.345951  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.543383  157008 request.go:632] Waited for 197.342144ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 10:56:44.543447  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 10:56:44.543453  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.543461  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.543465  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.545618  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:44.545638  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.545647  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.545651  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.545655  157008 round_trippers.go:580]     Audit-Id: c16b43f3-9846-468f-b142-3dc438b7c8a7
	I0916 10:56:44.545659  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.545663  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.545667  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.545800  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"395","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4975 chars]
	I0916 10:56:44.743274  157008 request.go:632] Waited for 197.065316ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.743352  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:44.743360  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.743369  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.743374  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.745605  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:44.745624  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.745630  157008 round_trippers.go:580]     Audit-Id: 72575e76-f5ba-4148-baa3-79826b9fa941
	I0916 10:56:44.745634  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.745638  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.745641  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.745644  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.745646  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.745813  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:44.746109  157008 pod_ready.go:93] pod "kube-scheduler-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:44.746123  157008 pod_ready.go:82] duration metric: took 400.165771ms for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:44.746130  157008 pod_ready.go:39] duration metric: took 13.116745728s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
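
The pod_ready.go lines above poll each system-critical pod until its Ready condition reports True, throttling themselves client-side between GETs. A minimal client-go sketch of the same check; the kubeconfig path, poll interval, and helper name are illustrative, not minikube's actual implementation:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True, the
    // same predicate pod_ready.go logs as `has status "Ready":"True"`.
    func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // the 6m0s budget in the log
    	defer cancel()
    	for ctx.Err() == nil {
    		ok, err := podReady(ctx, cs, "kube-system", "kube-scheduler-multinode-079070")
    		if err == nil && ok {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // crude poll; the real client is also rate-limited client-side
    	}
    	panic("timed out waiting for pod readiness")
    }
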
	I0916 10:56:44.746145  157008 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:56:44.746201  157008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:56:44.756316  157008 command_runner.go:130] > 1464
	I0916 10:56:44.757072  157008 api_server.go:72] duration metric: took 13.835479469s to wait for apiserver process to appear ...
	I0916 10:56:44.757092  157008 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:56:44.757116  157008 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0916 10:56:44.760758  157008 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0916 10:56:44.760825  157008 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0916 10:56:44.760830  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.760839  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.760844  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.761651  157008 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:56:44.761671  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.761679  157008 round_trippers.go:580]     Content-Length: 263
	I0916 10:56:44.761683  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.761686  157008 round_trippers.go:580]     Audit-Id: db581900-5e49-4d17-80dd-6040f90c7677
	I0916 10:56:44.761688  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.761691  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.761694  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.761696  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.761712  157008 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 10:56:44.761804  157008 api_server.go:141] control plane version: v1.31.1
	I0916 10:56:44.761821  157008 api_server.go:131] duration metric: took 4.723091ms to wait for apiserver health ...
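
The healthz and version probes above are plain HTTPS GETs against the apiserver. A hedged stdlib sketch, assuming the profile's client certificate and key (paths are placeholders); production code would verify the server against the cluster CA rather than skipping verification:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// Assumption: the minikube profile's client cert/key; adjust paths.
    	cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
    	if err != nil {
    		panic(err)
    	}
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{
    			Certificates:       []tls.Certificate{cert},
    			InsecureSkipVerify: true, // sketch only: pin the cluster CA in real code
    		},
    	}}
    	for _, path := range []string{"/healthz", "/version"} {
    		resp, err := client.Get("https://192.168.67.2:8443" + path)
    		if err != nil {
    			panic(err)
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body) // /healthz returns "ok" above
    	}
    }
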
	I0916 10:56:44.761828  157008 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:56:44.943280  157008 request.go:632] Waited for 181.380007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:44.943361  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:44.943367  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:44.943374  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:44.943379  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:44.946324  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:44.946346  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:44.946356  157008 round_trippers.go:580]     Audit-Id: e7e29ee8-f8fa-46d7-a199-50cf285a8fda
	I0916 10:56:44.946363  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:44.946367  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:44.946371  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:44.946376  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:44.946379  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:44 GMT
	I0916 10:56:44.946824  157008 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"417"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"411","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58808 chars]
	I0916 10:56:44.948609  157008 system_pods.go:59] 8 kube-system pods found
	I0916 10:56:44.948642  157008 system_pods.go:61] "coredns-7c65d6cfc9-ft9gh" [8052b6a1-7257-44d4-a318-740afd039d2c] Running
	I0916 10:56:44.948650  157008 system_pods.go:61] "etcd-multinode-079070" [fdcbee48-0d3b-47d7-acd2-298979345c7a] Running
	I0916 10:56:44.948656  157008 system_pods.go:61] "kindnet-flmdv" [91449e63-0ca3-4dc6-92ef-e3c5ab102dae] Running
	I0916 10:56:44.948663  157008 system_pods.go:61] "kube-apiserver-multinode-079070" [72784d35-4a94-476c-a4e9-dc3c6c3a8c46] Running
	I0916 10:56:44.948672  157008 system_pods.go:61] "kube-controller-manager-multinode-079070" [44a2f17c-654a-4d64-b474-70ce475c3afe] Running
	I0916 10:56:44.948678  157008 system_pods.go:61] "kube-proxy-2vhmt" [6f3faf85-04e9-4840-855d-dd1ef9d4e463] Running
	I0916 10:56:44.948687  157008 system_pods.go:61] "kube-scheduler-multinode-079070" [cc5dd2a3-a136-42b4-a6f9-11c733cb38c4] Running
	I0916 10:56:44.948692  157008 system_pods.go:61] "storage-provisioner" [43862f2e-c773-468d-ab03-8b0bc0633ad4] Running
	I0916 10:56:44.948703  157008 system_pods.go:74] duration metric: took 186.86592ms to wait for pod list to return data ...
	I0916 10:56:44.948716  157008 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:56:45.143167  157008 request.go:632] Waited for 194.333179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:56:45.143221  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:56:45.143226  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:45.143233  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:45.143236  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:45.145785  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:45.145807  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:45.145814  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:45.145817  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:45.145822  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:45.145826  157008 round_trippers.go:580]     Content-Length: 261
	I0916 10:56:45.145829  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:45 GMT
	I0916 10:56:45.145832  157008 round_trippers.go:580]     Audit-Id: 3e574ad9-7a25-4f42-b316-2d50be01118c
	I0916 10:56:45.145834  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:45.145858  157008 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4622bf83-82d0-4a2c-a46c-d6dbfa5ce9ea","resourceVersion":"300","creationTimestamp":"2024-09-16T10:56:30Z"}}]}
	I0916 10:56:45.146022  157008 default_sa.go:45] found service account: "default"
	I0916 10:56:45.146038  157008 default_sa.go:55] duration metric: took 197.316045ms for default service account to be created ...
	I0916 10:56:45.146047  157008 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:56:45.343489  157008 request.go:632] Waited for 197.37506ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:45.343554  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:45.343571  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:45.343581  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:45.343594  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:45.346644  157008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:56:45.346666  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:45.346674  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:45.346677  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:45.346680  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:45.346683  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:45.346685  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:45 GMT
	I0916 10:56:45.346688  157008 round_trippers.go:580]     Audit-Id: ce162911-dd20-4acf-b944-e5a3e23b5b5b
	I0916 10:56:45.347074  157008 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"411","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 58808 chars]
	I0916 10:56:45.348816  157008 system_pods.go:86] 8 kube-system pods found
	I0916 10:56:45.348837  157008 system_pods.go:89] "coredns-7c65d6cfc9-ft9gh" [8052b6a1-7257-44d4-a318-740afd039d2c] Running
	I0916 10:56:45.348843  157008 system_pods.go:89] "etcd-multinode-079070" [fdcbee48-0d3b-47d7-acd2-298979345c7a] Running
	I0916 10:56:45.348847  157008 system_pods.go:89] "kindnet-flmdv" [91449e63-0ca3-4dc6-92ef-e3c5ab102dae] Running
	I0916 10:56:45.348851  157008 system_pods.go:89] "kube-apiserver-multinode-079070" [72784d35-4a94-476c-a4e9-dc3c6c3a8c46] Running
	I0916 10:56:45.348858  157008 system_pods.go:89] "kube-controller-manager-multinode-079070" [44a2f17c-654a-4d64-b474-70ce475c3afe] Running
	I0916 10:56:45.348864  157008 system_pods.go:89] "kube-proxy-2vhmt" [6f3faf85-04e9-4840-855d-dd1ef9d4e463] Running
	I0916 10:56:45.348870  157008 system_pods.go:89] "kube-scheduler-multinode-079070" [cc5dd2a3-a136-42b4-a6f9-11c733cb38c4] Running
	I0916 10:56:45.348874  157008 system_pods.go:89] "storage-provisioner" [43862f2e-c773-468d-ab03-8b0bc0633ad4] Running
	I0916 10:56:45.348881  157008 system_pods.go:126] duration metric: took 202.828654ms to wait for k8s-apps to be running ...
	I0916 10:56:45.348888  157008 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:56:45.348935  157008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:56:45.359950  157008 system_svc.go:56] duration metric: took 11.051162ms WaitForService to wait for kubelet
	I0916 10:56:45.359981  157008 kubeadm.go:582] duration metric: took 14.438390222s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:56:45.359997  157008 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:56:45.543416  157008 request.go:632] Waited for 183.343539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0916 10:56:45.543515  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:56:45.543526  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:45.543537  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:45.543543  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:45.545923  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:45.545955  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:45.545965  157008 round_trippers.go:580]     Audit-Id: f26878ca-580e-458e-ba1e-a48fce241806
	I0916 10:56:45.545971  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:45.545976  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:45.545981  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:45.545987  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:45.545997  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:45 GMT
	I0916 10:56:45.546113  157008 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"418"},"items":[{"metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 5074 chars]
	I0916 10:56:45.546471  157008 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:56:45.546492  157008 node_conditions.go:123] node cpu capacity is 8
	I0916 10:56:45.546508  157008 node_conditions.go:105] duration metric: took 186.505921ms to run NodePressure ...
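
node_conditions.go above reads ephemeral-storage capacity, CPU capacity, and the NodePressure conditions straight off the Node object. Reusing cs, ctx, and the imports from the pod-readiness sketch earlier, an equivalent client-go fragment (a fragment, not a full program) might be:

    node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-079070", metav1.GetOptions{})
    if err != nil {
    	panic(err)
    }
    storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
    cpu := node.Status.Capacity[corev1.ResourceCPU]
    fmt.Printf("ephemeral storage %s, cpu %s\n", storage.String(), cpu.String()) // 304681132Ki and 8 above
    for _, c := range node.Status.Conditions {
    	switch c.Type {
    	case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    		fmt.Printf("%s=%s\n", c.Type, c.Status) // all False on a healthy node
    	}
    }
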
	I0916 10:56:45.546521  157008 start.go:241] waiting for startup goroutines ...
	I0916 10:56:45.546532  157008 start.go:246] waiting for cluster config update ...
	I0916 10:56:45.546548  157008 start.go:255] writing updated cluster config ...
	I0916 10:56:45.548730  157008 out.go:201] 
	I0916 10:56:45.550196  157008 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:56:45.550264  157008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 10:56:45.551810  157008 out.go:177] * Starting "multinode-079070-m02" worker node in "multinode-079070" cluster
	I0916 10:56:45.553282  157008 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:56:45.554392  157008 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:56:45.555340  157008 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:56:45.555357  157008 cache.go:56] Caching tarball of preloaded images
	I0916 10:56:45.555369  157008 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:56:45.555461  157008 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:56:45.555475  157008 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:56:45.555573  157008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	W0916 10:56:45.574716  157008 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:56:45.574739  157008 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:56:45.574828  157008 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:56:45.574844  157008 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:56:45.574848  157008 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:56:45.574857  157008 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:56:45.574864  157008 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:56:45.576034  157008 image.go:273] response: 
	I0916 10:56:45.626398  157008 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:56:45.626436  157008 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:56:45.626480  157008 start.go:360] acquireMachinesLock for multinode-079070-m02: {Name:mk1713c8fba020df744918162d1a483c7b41a015 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:56:45.626594  157008 start.go:364] duration metric: took 93.573µs to acquireMachinesLock for "multinode-079070-m02"
	I0916 10:56:45.626629  157008 start.go:93] Provisioning new machine with config: &{Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0916 10:56:45.626715  157008 start.go:125] createHost starting for "m02" (driver="docker")
	I0916 10:56:45.628577  157008 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 10:56:45.628686  157008 start.go:159] libmachine.API.Create for "multinode-079070" (driver="docker")
	I0916 10:56:45.628719  157008 client.go:168] LocalClient.Create starting
	I0916 10:56:45.628809  157008 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 10:56:45.628844  157008 main.go:141] libmachine: Decoding PEM data...
	I0916 10:56:45.628859  157008 main.go:141] libmachine: Parsing certificate...
	I0916 10:56:45.628910  157008 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 10:56:45.628929  157008 main.go:141] libmachine: Decoding PEM data...
	I0916 10:56:45.628936  157008 main.go:141] libmachine: Parsing certificate...
	I0916 10:56:45.629156  157008 cli_runner.go:164] Run: docker network inspect multinode-079070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:56:45.645613  157008 network_create.go:77] Found existing network {name:multinode-079070 subnet:0xc0014afb30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0916 10:56:45.645649  157008 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-079070-m02" container
	I0916 10:56:45.645705  157008 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 10:56:45.662816  157008 cli_runner.go:164] Run: docker volume create multinode-079070-m02 --label name.minikube.sigs.k8s.io=multinode-079070-m02 --label created_by.minikube.sigs.k8s.io=true
	I0916 10:56:45.681356  157008 oci.go:103] Successfully created a docker volume multinode-079070-m02
	I0916 10:56:45.681428  157008 cli_runner.go:164] Run: docker run --rm --name multinode-079070-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-079070-m02 --entrypoint /usr/bin/test -v multinode-079070-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 10:56:46.179369  157008 oci.go:107] Successfully prepared a docker volume multinode-079070-m02
	I0916 10:56:46.179409  157008 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:56:46.179433  157008 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 10:56:46.179500  157008 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-079070-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 10:56:50.531696  157008 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-079070-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.352149496s)
	I0916 10:56:50.531729  157008 kic.go:203] duration metric: took 4.352293012s to extract preloaded images to volume ...
	W0916 10:56:50.531893  157008 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 10:56:50.532011  157008 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 10:56:50.581633  157008 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-079070-m02 --name multinode-079070-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-079070-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-079070-m02 --network multinode-079070 --ip 192.168.67.3 --volume multinode-079070-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
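
The docker run above creates the worker's kic container: privileged, pinned to the static IP 192.168.67.3 on the multinode-079070 network, with 22, 8443, and the other node ports published to ephemeral host ports on 127.0.0.1. minikube recovers those host ports later via docker container inspect (SSH lands on 32913 in the lines below); a small sketch of that lookup, assuming the docker CLI is on PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort asks Docker which 127.0.0.1 port was mapped to the
    // container's 22/tcp, mirroring the inspect template used in the log.
    func sshHostPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("multinode-079070-m02")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh at 127.0.0.1:" + port) // 32913 in this run
    }
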
	I0916 10:56:50.886437  157008 cli_runner.go:164] Run: docker container inspect multinode-079070-m02 --format={{.State.Running}}
	I0916 10:56:50.906368  157008 cli_runner.go:164] Run: docker container inspect multinode-079070-m02 --format={{.State.Status}}
	I0916 10:56:50.924478  157008 cli_runner.go:164] Run: docker exec multinode-079070-m02 stat /var/lib/dpkg/alternatives/iptables
	I0916 10:56:50.968626  157008 oci.go:144] the created container "multinode-079070-m02" has a running status.
	I0916 10:56:50.968664  157008 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa...
	I0916 10:56:51.042731  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0916 10:56:51.042776  157008 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 10:56:51.063220  157008 cli_runner.go:164] Run: docker container inspect multinode-079070-m02 --format={{.State.Status}}
	I0916 10:56:51.080379  157008 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 10:56:51.080406  157008 kic_runner.go:114] Args: [docker exec --privileged multinode-079070-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
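
kic.go:225 above generates the machine's SSH identity and installs the public half as /home/docker/.ssh/authorized_keys (381 bytes, consistent with a 2048-bit RSA key). A self-contained sketch of that step; the key size and output paths are assumptions:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	privPEM := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
    		panic(err)
    	}
    	// This single authorized_keys-format line is what gets appended on the node.
    	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("wrote id_rsa / id_rsa.pub")
    }
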
	I0916 10:56:51.123842  157008 cli_runner.go:164] Run: docker container inspect multinode-079070-m02 --format={{.State.Status}}
	I0916 10:56:51.144968  157008 machine.go:93] provisionDockerMachine start ...
	I0916 10:56:51.145060  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:51.163108  157008 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:51.163413  157008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I0916 10:56:51.163431  157008 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:56:51.164211  157008 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55248->127.0.0.1:32913: read: connection reset by peer
	I0916 10:56:54.295104  157008 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070-m02
	
	I0916 10:56:54.295133  157008 ubuntu.go:169] provisioning hostname "multinode-079070-m02"
	I0916 10:56:54.295195  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:54.311975  157008 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:54.312178  157008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I0916 10:56:54.312197  157008 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-079070-m02 && echo "multinode-079070-m02" | sudo tee /etc/hostname
	I0916 10:56:54.454703  157008 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070-m02
	
	I0916 10:56:54.454767  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:54.471812  157008 main.go:141] libmachine: Using SSH client type: native
	I0916 10:56:54.472033  157008 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32913 <nil> <nil>}
	I0916 10:56:54.472054  157008 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-079070-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-079070-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-079070-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:56:54.607946  157008 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:56:54.607977  157008 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:56:54.607999  157008 ubuntu.go:177] setting up certificates
	I0916 10:56:54.608012  157008 provision.go:84] configureAuth start
	I0916 10:56:54.608068  157008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m02
	I0916 10:56:54.624810  157008 provision.go:143] copyHostCerts
	I0916 10:56:54.624853  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:56:54.624889  157008 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:56:54.624898  157008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:56:54.624976  157008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:56:54.625066  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:56:54.625086  157008 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:56:54.625094  157008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:56:54.625135  157008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:56:54.625197  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:56:54.625221  157008 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:56:54.625230  157008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:56:54.625263  157008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:56:54.625338  157008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.multinode-079070-m02 san=[127.0.0.1 192.168.67.3 localhost minikube multinode-079070-m02]
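
provision.go:117 above issues a server certificate signed by the profile CA with the SANs listed in the log line (127.0.0.1, 192.168.67.3, localhost, minikube, multinode-079070-m02). A hedged crypto/x509 sketch, assuming PEM-encoded PKCS#1 CA files named ca.pem and ca-key.pem in the working directory; validity and key usages here are illustrative:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func loadCA() (*x509.Certificate, *rsa.PrivateKey) {
    	certPEM, err := os.ReadFile("ca.pem") // assumption: the CaCertPath above
    	if err != nil {
    		panic(err)
    	}
    	keyPEM, err := os.ReadFile("ca-key.pem") // assumption: the CaPrivateKeyPath above
    	if err != nil {
    		panic(err)
    	}
    	cb, _ := pem.Decode(certPEM)
    	kb, _ := pem.Decode(keyPEM)
    	cert, err := x509.ParseCertificate(cb.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	key, err := x509.ParsePKCS1PrivateKey(kb.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	return cert, key
    }

    func main() {
    	caCert, caKey := loadCA()
    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-079070-m02"}},
    		DNSNames:     []string{"localhost", "minikube", "multinode-079070-m02"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.3")},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
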
	I0916 10:56:54.842419  157008 provision.go:177] copyRemoteCerts
	I0916 10:56:54.842473  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:56:54.842510  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:54.859515  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:56:54.956648  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:56:54.956771  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:56:54.980228  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:56:54.980305  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0916 10:56:55.003269  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:56:55.003367  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:56:55.027071  157008 provision.go:87] duration metric: took 419.04362ms to configureAuth
	I0916 10:56:55.027105  157008 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:56:55.027266  157008 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:56:55.027277  157008 machine.go:96] duration metric: took 3.88228902s to provisionDockerMachine
	I0916 10:56:55.027285  157008 client.go:171] duration metric: took 9.398556633s to LocalClient.Create
	I0916 10:56:55.027302  157008 start.go:167] duration metric: took 9.398616763s to libmachine.API.Create "multinode-079070"
	I0916 10:56:55.027315  157008 start.go:293] postStartSetup for "multinode-079070-m02" (driver="docker")
	I0916 10:56:55.027326  157008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:56:55.027376  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:56:55.027423  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:55.045390  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:56:55.140601  157008 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:56:55.143611  157008 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:56:55.143627  157008 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:56:55.143633  157008 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:56:55.143639  157008 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:56:55.143646  157008 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:56:55.143653  157008 command_runner.go:130] > ID=ubuntu
	I0916 10:56:55.143662  157008 command_runner.go:130] > ID_LIKE=debian
	I0916 10:56:55.143669  157008 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:56:55.143678  157008 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:56:55.143686  157008 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:56:55.143695  157008 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:56:55.143702  157008 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:56:55.143794  157008 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:56:55.143819  157008 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:56:55.143832  157008 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:56:55.143843  157008 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:56:55.143862  157008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:56:55.143922  157008 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:56:55.144015  157008 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:56:55.144026  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:56:55.144137  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:56:55.152200  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:56:55.175663  157008 start.go:296] duration metric: took 148.332655ms for postStartSetup
	I0916 10:56:55.176051  157008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m02
	I0916 10:56:55.193621  157008 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 10:56:55.193888  157008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:56:55.193928  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:55.211158  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:56:55.300676  157008 command_runner.go:130] > 31%
	I0916 10:56:55.300766  157008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:56:55.305047  157008 command_runner.go:130] > 202G
	I0916 10:56:55.305233  157008 start.go:128] duration metric: took 9.678504602s to createHost
	I0916 10:56:55.305256  157008 start.go:83] releasing machines lock for "multinode-079070-m02", held for 9.67864523s
	I0916 10:56:55.305332  157008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m02
	I0916 10:56:55.326806  157008 out.go:177] * Found network options:
	I0916 10:56:55.328472  157008 out.go:177]   - NO_PROXY=192.168.67.2
	W0916 10:56:55.329913  157008 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:56:55.329993  157008 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:56:55.330067  157008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:56:55.330102  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:55.330128  157008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:56:55.330185  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:56:55.348534  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:56:55.348869  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:56:55.517890  157008 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:56:55.517961  157008 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0916 10:56:55.517972  157008 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:56:55.517983  157008 command_runner.go:130] > Device: efh/239d	Inode: 534561      Links: 1
	I0916 10:56:55.517997  157008 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:56:55.518007  157008 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:56:55.518018  157008 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0916 10:56:55.518029  157008 command_runner.go:130] > Change: 2024-09-16 10:23:17.243457347 +0000
	I0916 10:56:55.518035  157008 command_runner.go:130] >  Birth: 2024-09-16 10:23:17.243457347 +0000
	I0916 10:56:55.518112  157008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:56:55.542986  157008 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:56:55.543060  157008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:56:55.570356  157008 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0916 10:56:55.570428  157008 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
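
Reading the two find commands above together: the first patches any *loopback.conf* in place so it carries a "name" field and cniVersion 1.0.0 (older loopback configs ship without both and evidently fail validation under newer CNI releases), and the second renames the bridge/podman configs to *.mk_disabled so they cannot shadow the CNI the cluster actually uses (kindnet in this run).
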
	I0916 10:56:55.570442  157008 start.go:495] detecting cgroup driver to use...
	I0916 10:56:55.570475  157008 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:56:55.570521  157008 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:56:55.582185  157008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:56:55.592876  157008 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:56:55.592926  157008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:56:55.605320  157008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:56:55.618782  157008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:56:55.693454  157008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:56:55.777136  157008 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0916 10:56:55.777171  157008 docker.go:233] disabling docker service ...
	I0916 10:56:55.777222  157008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:56:55.795058  157008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:56:55.806099  157008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:56:55.817155  157008 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0916 10:56:55.881737  157008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:56:55.959443  157008 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0916 10:56:55.959510  157008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:56:55.970376  157008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:56:55.985389  157008 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0916 10:56:55.985465  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:56:55.995204  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:56:56.004736  157008 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:56:56.004802  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:56:56.014108  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:56:56.022860  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:56:56.032372  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:56:56.041762  157008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:56:56.050177  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:56:56.059397  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:56:56.068681  157008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 10:56:56.078048  157008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:56:56.085949  157008 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:56:56.086011  157008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:56:56.094045  157008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:56.175772  157008 ssh_runner.go:195] Run: sudo systemctl restart containerd
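
Taken together, the sed edits above adapt the stock config.toml to this run before restarting containerd: the sandbox image is pinned to registry.k8s.io/pause:3.10, SystemdCgroup is forced to false to match the "cgroupfs" driver detected on the host, the legacy io.containerd.runtime.v1.linux and runc.v1 shims are rewritten to io.containerd.runc.v2, conf_dir is pointed back at /etc/cni/net.d, unprivileged ports are enabled, and crictl is aimed at the containerd socket via /etc/crictl.yaml.
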
	I0916 10:56:56.273047  157008 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:56:56.273118  157008 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:56:56.276536  157008 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0916 10:56:56.276576  157008 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:56:56.276586  157008 command_runner.go:130] > Device: f8h/248d	Inode: 175         Links: 1
	I0916 10:56:56.276596  157008 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:56:56.276605  157008 command_runner.go:130] > Access: 2024-09-16 10:56:56.237445775 +0000
	I0916 10:56:56.276615  157008 command_runner.go:130] > Modify: 2024-09-16 10:56:56.237445775 +0000
	I0916 10:56:56.276625  157008 command_runner.go:130] > Change: 2024-09-16 10:56:56.237445775 +0000
	I0916 10:56:56.276635  157008 command_runner.go:130] >  Birth: -
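
start.go:542 waits up to 60s for the containerd socket to reappear after the restart, and the stat output above confirms the path is a socket. A stdlib-only sketch of that wait loop; the poll interval is an assumption:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists and is a unix socket, or the
    // timeout elapses, roughly what "Will wait 60s for socket path" does.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    	fmt.Println("containerd socket is up")
    }
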
	I0916 10:56:56.276664  157008 start.go:563] Will wait 60s for crictl version
	I0916 10:56:56.276715  157008 ssh_runner.go:195] Run: which crictl
	I0916 10:56:56.279690  157008 command_runner.go:130] > /usr/bin/crictl
	I0916 10:56:56.279801  157008 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:56:56.309682  157008 command_runner.go:130] > Version:  0.1.0
	I0916 10:56:56.309708  157008 command_runner.go:130] > RuntimeName:  containerd
	I0916 10:56:56.309717  157008 command_runner.go:130] > RuntimeVersion:  1.7.22
	I0916 10:56:56.309723  157008 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:56:56.311581  157008 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:56:56.311630  157008 ssh_runner.go:195] Run: containerd --version
	I0916 10:56:56.332608  157008 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:56:56.334053  157008 ssh_runner.go:195] Run: containerd --version
	I0916 10:56:56.356704  157008 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
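crictl resolves the runtime endpoint from /etc/crictl.yaml, which the log wrote a few steps earlier; the two version probes above can be reproduced directly (a sketch, mirroring the log):

    # Endpoint comes from /etc/crictl.yaml, written earlier in the log.
    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
    sudo crictl version        # CRI-level check: RuntimeName/RuntimeVersion/RuntimeApiVersion
    containerd --version       # daemon binary version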
	I0916 10:56:56.360520  157008 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:56:56.363834  157008 out.go:177]   - env NO_PROXY=192.168.67.2
	I0916 10:56:56.366031  157008 cli_runner.go:164] Run: docker network inspect multinode-079070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:56:56.384346  157008 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:56:56.388431  157008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
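The grep/cp pair above is the idempotent hosts-entry update: check whether the name already resolves, and if not, strip any stale line for it, append the current mapping, and copy the result back over /etc/hosts. As a standalone sketch (name and IP mirror the log; the dots in the name are treated loosely by grep here):

    NAME=host.minikube.internal
    IP=192.168.67.1
    # Remove any existing entry for $NAME, then append the fresh mapping.
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts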
	I0916 10:56:56.400156  157008 mustload.go:65] Loading cluster: multinode-079070
	I0916 10:56:56.400377  157008 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:56:56.400592  157008 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:56:56.419058  157008 host.go:66] Checking if "multinode-079070" exists ...
	I0916 10:56:56.419372  157008 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070 for IP: 192.168.67.3
	I0916 10:56:56.419386  157008 certs.go:194] generating shared ca certs ...
	I0916 10:56:56.419404  157008 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:56:56.419550  157008 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:56:56.419602  157008 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:56:56.419616  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:56:56.419634  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:56:56.419657  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:56:56.419670  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:56:56.419766  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:56:56.419813  157008 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:56:56.419825  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:56:56.419859  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:56:56.419894  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:56:56.419921  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:56:56.419977  157008 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:56:56.420019  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:56:56.420050  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:56:56.420068  157008 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:56.420093  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:56:56.445256  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:56:56.469387  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:56:56.493622  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:56:56.517743  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:56:56.540533  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:56:56.564648  157008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:56:56.589157  157008 ssh_runner.go:195] Run: openssl version
	I0916 10:56:56.594895  157008 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:56:56.594997  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:56:56.604974  157008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:56:56.608597  157008 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:56:56.608638  157008 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:56:56.608695  157008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:56:56.615370  157008 command_runner.go:130] > 3ec20f2e
	I0916 10:56:56.615451  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:56:56.625401  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:56:56.635119  157008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:56.639267  157008 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:56.639332  157008 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:56.639382  157008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:56:56.646551  157008 command_runner.go:130] > b5213941
	I0916 10:56:56.646739  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:56:56.656780  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:56:56.667334  157008 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:56:56.671420  157008 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:56:56.671465  157008 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:56:56.671518  157008 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:56:56.678595  157008 command_runner.go:130] > 51391683
	I0916 10:56:56.678679  157008 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
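The openssl/ln pairs above wire the copied certificates into OpenSSL's hashed trust store: OpenSSL looks up CAs in /etc/ssl/certs by subject-name hash, so each certificate needs a <hash>.0 symlink. Condensed for one certificate (the log stages a copy under /etc/ssl/certs first; this sketch links the source directly):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in this run
    # Create the hash symlink if it is not already in place.
    sudo /bin/bash -c "test -L /etc/ssl/certs/${HASH}.0 || ln -fs $CERT /etc/ssl/certs/${HASH}.0"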
	I0916 10:56:56.688744  157008 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:56:56.692399  157008 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:56:56.692449  157008 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:56:56.692492  157008 kubeadm.go:934] updating node {m02 192.168.67.3 8443 v1.31.1 containerd false true} ...
	I0916 10:56:56.692596  157008 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-079070-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:56:56.692664  157008 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:56:56.701739  157008 command_runner.go:130] > kubeadm
	I0916 10:56:56.701763  157008 command_runner.go:130] > kubectl
	I0916 10:56:56.701768  157008 command_runner.go:130] > kubelet
	I0916 10:56:56.701786  157008 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:56:56.701838  157008 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 10:56:56.710811  157008 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0916 10:56:56.728467  157008 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:56:56.746427  157008 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:56:56.750239  157008 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:56:56.761646  157008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:56.839074  157008 ssh_runner.go:195] Run: sudo systemctl start kubelet
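With the kubelet unit and its 10-kubeadm.conf drop-in in place, bringing the kubelet up is just the reload-and-start above; inspecting the merged result is a one-liner (a quick check, not something the log itself runs):

    sudo systemctl daemon-reload
    sudo systemctl start kubelet
    systemctl cat kubelet      # unit file plus the 10-kubeadm.conf drop-in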
	I0916 10:56:56.853245  157008 host.go:66] Checking if "multinode-079070" exists ...
	I0916 10:56:56.853545  157008 start.go:317] joinCluster: &{Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:56:56.853658  157008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0916 10:56:56.853716  157008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:56:56.873855  157008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:56:57.025381  157008 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token g05kzm.0mbgqu1p8k523k5h --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 10:56:57.025446  157008 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0916 10:56:57.025486  157008 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g05kzm.0mbgqu1p8k523k5h --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=multinode-079070-m02"
	I0916 10:56:57.061284  157008 command_runner.go:130] > [preflight] Running pre-flight checks
	I0916 10:56:57.070829  157008 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0916 10:56:57.070862  157008 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 10:56:57.070870  157008 command_runner.go:130] > OS: Linux
	I0916 10:56:57.070879  157008 command_runner.go:130] > CGROUPS_CPU: enabled
	I0916 10:56:57.070888  157008 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0916 10:56:57.070895  157008 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0916 10:56:57.070903  157008 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0916 10:56:57.070909  157008 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0916 10:56:57.070929  157008 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0916 10:56:57.070940  157008 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0916 10:56:57.070948  157008 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0916 10:56:57.070955  157008 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0916 10:56:57.140017  157008 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0916 10:56:57.140049  157008 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0916 10:56:57.171565  157008 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 10:56:57.171657  157008 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 10:56:57.171675  157008 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0916 10:56:57.263400  157008 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 10:56:57.764408  157008 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.030642ms
	I0916 10:56:57.764444  157008 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0916 10:56:58.275591  157008 command_runner.go:130] > This node has joined the cluster:
	I0916 10:56:58.275619  157008 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0916 10:56:58.275629  157008 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0916 10:56:58.275639  157008 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0916 10:56:58.278537  157008 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 10:56:58.278582  157008 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 10:56:58.278611  157008 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token g05kzm.0mbgqu1p8k523k5h --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 --ignore-preflight-errors=all --cri-socket unix:///run/containerd/containerd.sock --node-name=multinode-079070-m02": (1.253111083s)
	I0916 10:56:58.278638  157008 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0916 10:56:58.370323  157008 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0916 10:56:58.443859  157008 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-079070-m02 minikube.k8s.io/updated_at=2024_09_16T10_56_58_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=multinode-079070 minikube.k8s.io/primary=false
	I0916 10:56:58.517626  157008 command_runner.go:130] > node/multinode-079070-m02 labeled
	I0916 10:56:58.517669  157008 start.go:319] duration metric: took 1.664126156s to joinCluster
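The join the log just completed is two commands: mint a join command on the control plane, then run it on the worker with minikube's extra flags. --ignore-preflight-errors=all is why the kernel-config and kubelet-enable warnings above are tolerated; kubelet is then enabled explicitly and the node is labeled. A condensed sketch, with placeholder token and hash rather than the live values:

    # On the control-plane node:
    kubeadm token create --print-join-command --ttl=0

    # On the joining worker, using the printed token/hash plus the flags from the log:
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --ignore-preflight-errors=all \
      --cri-socket unix:///run/containerd/containerd.sock \
      --node-name=multinode-079070-m02

    # Survive reboots, then tag the node the way minikube does above:
    sudo systemctl enable kubelet
    kubectl label --overwrite nodes multinode-079070-m02 minikube.k8s.io/primary=false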
	I0916 10:56:58.517728  157008 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0916 10:56:58.518033  157008 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:56:58.519730  157008 out.go:177] * Verifying Kubernetes components...
	I0916 10:56:58.521371  157008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:56:58.606241  157008 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:56:58.619445  157008 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:56:58.619685  157008 kapi.go:59] client config for multinode-079070: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:56:58.619965  157008 node_ready.go:35] waiting up to 6m0s for node "multinode-079070-m02" to be "Ready" ...
	I0916 10:56:58.620039  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:56:58.620044  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:58.620051  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:58.620057  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:58.622365  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:58.622383  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:58.622389  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:58.622393  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:58.622397  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:58.622400  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:58 GMT
	I0916 10:56:58.622406  157008 round_trippers.go:580]     Audit-Id: 9936bf44-7b7c-4713-9369-85e89a62f5b9
	I0916 10:56:58.622411  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:58.622582  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"457","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4404 chars]
	I0916 10:56:59.120197  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:56:59.120225  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.120236  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.120241  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.122563  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.122586  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.122594  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.122601  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.122607  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.122611  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.122620  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.122625  157008 round_trippers.go:580]     Audit-Id: fc633423-9e11-4284-8f24-560f03599694
	I0916 10:56:59.122736  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"461","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4809 chars]
	I0916 10:56:59.123038  157008 node_ready.go:49] node "multinode-079070-m02" has status "Ready":"True"
	I0916 10:56:59.123055  157008 node_ready.go:38] duration metric: took 503.07284ms for node "multinode-079070-m02" to be "Ready" ...
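The round_trippers lines above are client-go GETs against /api/v1/nodes/multinode-079070-m02, repeated on a roughly 500 ms tick until the node's Ready condition reports True. The same wait, expressed with kubectl (a sketch, not minikube's own code path):

    # Poll until the Ready condition on the node is True.
    until [ "$(kubectl get node multinode-079070-m02 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}')" = "True" ]; do
      sleep 0.5
    done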
	I0916 10:56:59.123065  157008 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:56:59.123124  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:56:59.123131  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.123138  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.123142  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.126057  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.126083  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.126093  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.126099  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.126103  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.126108  157008 round_trippers.go:580]     Audit-Id: 26235fe9-2148-430e-8713-fcf75bf03afd
	I0916 10:56:59.126112  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.126117  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.126602  157008 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"462"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"411","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 69157 chars]
	I0916 10:56:59.128684  157008 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.128788  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:56:59.128799  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.128809  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.128815  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.130878  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.130900  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.130909  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.130913  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.130917  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.130922  157008 round_trippers.go:580]     Audit-Id: af410ee3-7bb1-465b-b0bf-bfd5b2616fdb
	I0916 10:56:59.130925  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.130931  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.131161  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"411","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6480 chars]
	I0916 10:56:59.131625  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:59.131640  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.131650  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.131655  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.133536  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.133557  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.133566  157008 round_trippers.go:580]     Audit-Id: 4d67c7a5-338a-494a-8b30-6bfc1a89dd7b
	I0916 10:56:59.133570  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.133573  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.133575  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.133580  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.133582  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.133681  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:59.134042  157008 pod_ready.go:93] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:59.134061  157008 pod_ready.go:82] duration metric: took 5.351776ms for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.134077  157008 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.134147  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-079070
	I0916 10:56:59.134157  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.134168  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.134176  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.136173  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.136192  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.136199  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.136204  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.136208  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.136210  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.136214  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.136220  157008 round_trippers.go:580]     Audit-Id: 33b898f1-b11b-4b70-b3a8-017c9822941a
	I0916 10:56:59.136382  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-079070","namespace":"kube-system","uid":"fdcbee48-0d3b-47d7-acd2-298979345c7a","resourceVersion":"400","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.mirror":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.seen":"2024-09-16T10:56:25.844758522Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6440 chars]
	I0916 10:56:59.136786  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:59.136798  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.136805  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.136809  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.138469  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.138491  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.138501  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.138509  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.138513  157008 round_trippers.go:580]     Audit-Id: 9a0d2a9b-4553-496b-9005-3de9392d37a2
	I0916 10:56:59.138516  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.138520  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.138525  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.138669  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:59.138974  157008 pod_ready.go:93] pod "etcd-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:59.138989  157008 pod_ready.go:82] duration metric: took 4.902844ms for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.139010  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.139068  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:56:59.139076  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.139082  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.139089  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.140826  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.140841  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.140847  157008 round_trippers.go:580]     Audit-Id: a429f0bd-d016-4e91-895b-1ccf679fc242
	I0916 10:56:59.140850  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.140853  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.140858  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.140862  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.140865  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.140980  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"397","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8518 chars]
	I0916 10:56:59.141381  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:59.141393  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.141400  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.141405  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.142988  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.143008  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.143016  157008 round_trippers.go:580]     Audit-Id: f571d816-0825-4252-bccb-c6dd29f4e1b4
	I0916 10:56:59.143023  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.143028  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.143032  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.143036  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.143040  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.143179  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:59.143495  157008 pod_ready.go:93] pod "kube-apiserver-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:59.143511  157008 pod_ready.go:82] duration metric: took 4.489464ms for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.143522  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.143582  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-079070
	I0916 10:56:59.143597  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.143604  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.143613  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.145448  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.145469  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.145479  157008 round_trippers.go:580]     Audit-Id: 46aa8449-a877-4116-a4bc-1cfb4dd84f9f
	I0916 10:56:59.145485  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.145490  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.145495  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.145503  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.145516  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.145644  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-079070","namespace":"kube-system","uid":"44a2f17c-654a-4d64-b474-70ce475c3afe","resourceVersion":"403","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.mirror":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.seen":"2024-09-16T10:56:25.844752735Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8093 chars]
	I0916 10:56:59.146176  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:59.146192  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.146202  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.146214  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.147926  157008 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:56:59.147946  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.147956  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.147962  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.147968  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.147973  157008 round_trippers.go:580]     Audit-Id: 5ce70ec4-f46e-4d0b-8ddf-ce58e6b8aa93
	I0916 10:56:59.147977  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.147981  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.148121  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:59.148417  157008 pod_ready.go:93] pod "kube-controller-manager-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:59.148434  157008 pod_ready.go:82] duration metric: took 4.901859ms for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.148443  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.320902  157008 request.go:632] Waited for 172.379789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vhmt
	I0916 10:56:59.320967  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vhmt
	I0916 10:56:59.320976  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.320987  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.320998  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.323200  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.323226  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.323235  157008 round_trippers.go:580]     Audit-Id: c849a5df-3fd2-4e46-aa74-f27996fd7032
	I0916 10:56:59.323238  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.323242  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.323246  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.323249  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.323253  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.323418  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2vhmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"6f3faf85-04e9-4840-855d-dd1ef9d4e463","resourceVersion":"383","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6175 chars]
	I0916 10:56:59.521320  157008 request.go:632] Waited for 197.428621ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:59.521424  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:56:59.521434  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.521446  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.521453  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.523960  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.523980  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.523987  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.523990  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.523994  157008 round_trippers.go:580]     Audit-Id: 597df916-e2b4-4d08-86b0-bc689c536613
	I0916 10:56:59.523998  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.524004  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.524007  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.524096  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:56:59.524385  157008 pod_ready.go:93] pod "kube-proxy-2vhmt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:56:59.524399  157008 pod_ready.go:82] duration metric: took 375.950399ms for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
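The "Waited for ... due to client-side throttling" lines here and below come from client-go's token-bucket rate limiter, not from API-server priority and fairness (the message says as much). The rest.Config dumped earlier leaves QPS and Burst at zero, which means client-go's documented defaults (5 requests/s, burst 10), so the back-to-back pod and node GETs in this readiness loop briefly queue behind the limiter; the waits of 170-200 ms seen here are that queueing, not server latency.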
	I0916 10:56:59.524409  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xkr65" in "kube-system" namespace to be "Ready" ...
	I0916 10:56:59.720561  157008 request.go:632] Waited for 196.084388ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:56:59.720653  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:56:59.720664  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.720676  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.720684  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.722787  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.722812  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.722822  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.722827  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.722832  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.722836  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.722841  157008 round_trippers.go:580]     Audit-Id: f8279b24-8ea5-41bc-8e46-1de936f01c7a
	I0916 10:56:59.722845  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.722975  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"463","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6209 chars]
	I0916 10:56:59.920863  157008 request.go:632] Waited for 197.385392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:56:59.920936  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:56:59.920948  157008 round_trippers.go:469] Request Headers:
	I0916 10:56:59.920959  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:56:59.920968  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:56:59.923208  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:56:59.923246  157008 round_trippers.go:577] Response Headers:
	I0916 10:56:59.923255  157008 round_trippers.go:580]     Audit-Id: 6021aae4-70fd-42ee-ac23-12cd73e86e3f
	I0916 10:56:59.923261  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:56:59.923266  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:56:59.923278  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:56:59.923282  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:56:59.923289  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:56:59 GMT
	I0916 10:56:59.923382  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"461","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4809 chars]
	I0916 10:57:00.120950  157008 request.go:632] Waited for 95.315232ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:57:00.121021  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:57:00.121027  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:00.121036  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:00.121041  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:00.124280  157008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:57:00.124308  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:00.124318  157008 round_trippers.go:580]     Audit-Id: 6db4b81e-d235-49df-9b53-1f099e6c503a
	I0916 10:57:00.124326  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:00.124331  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:00.124337  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:00.124342  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:00.124348  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:00 GMT
	I0916 10:57:00.124480  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"463","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6209 chars]
	I0916 10:57:00.320297  157008 request.go:632] Waited for 195.238107ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:57:00.320370  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:57:00.320377  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:00.320387  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:00.320395  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:00.322800  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:00.322822  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:00.322831  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:00 GMT
	I0916 10:57:00.322837  157008 round_trippers.go:580]     Audit-Id: d54911a8-c4b8-4dd5-9147-832b8562e5c2
	I0916 10:57:00.322843  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:00.322848  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:00.322854  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:00.322860  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:00.322979  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"461","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4809 chars]
	I0916 10:57:00.525428  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:57:00.525455  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:00.525463  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:00.525469  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:00.527839  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:00.527866  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:00.527877  157008 round_trippers.go:580]     Audit-Id: ba062c1f-1886-4ab9-a7f1-035b943a8e99
	I0916 10:57:00.527882  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:00.527887  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:00.527891  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:00.527896  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:00.527924  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:00 GMT
	I0916 10:57:00.528161  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"463","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6209 chars]
	I0916 10:57:00.720949  157008 request.go:632] Waited for 192.313677ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:57:00.721102  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:57:00.721121  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:00.721148  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:00.721161  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:00.725156  157008 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:57:00.725186  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:00.725195  157008 round_trippers.go:580]     Audit-Id: 3477703f-a059-4cd8-b356-8854f325621a
	I0916 10:57:00.725200  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:00.725206  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:00.725210  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:00.725215  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:00.725219  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:00 GMT
	I0916 10:57:00.725363  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"461","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4809 chars]
	I0916 10:57:01.025509  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:57:01.025532  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:01.025549  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:01.025554  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:01.027576  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:01.027599  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:01.027608  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:01.027617  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:01 GMT
	I0916 10:57:01.027623  157008 round_trippers.go:580]     Audit-Id: 3f612fa7-8160-4161-96bd-d1e225d5bec1
	I0916 10:57:01.027628  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:01.027634  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:01.027638  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:01.027792  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"473","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6183 chars]
	I0916 10:57:01.120579  157008 request.go:632] Waited for 92.258964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:57:01.120660  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:57:01.120668  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:01.120680  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:01.120689  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:01.123169  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:01.123198  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:01.123211  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:01.123216  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:01.123221  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:01.123227  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:01.123233  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:01 GMT
	I0916 10:57:01.123238  157008 round_trippers.go:580]     Audit-Id: 1de0db49-a15c-4496-b38b-80dc984ea638
	I0916 10:57:01.123339  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"461","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4809 chars]
	I0916 10:57:01.123709  157008 pod_ready.go:93] pod "kube-proxy-xkr65" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:01.123728  157008 pod_ready.go:82] duration metric: took 1.599312782s for pod "kube-proxy-xkr65" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:01.123772  157008 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:01.321200  157008 request.go:632] Waited for 197.347273ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 10:57:01.321278  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 10:57:01.321287  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:01.321295  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:01.321301  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:01.323595  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:01.323619  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:01.323627  157008 round_trippers.go:580]     Audit-Id: fefee779-b734-45a3-8a31-a4ca8cf296c3
	I0916 10:57:01.323632  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:01.323639  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:01.323643  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:01.323648  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:01.323651  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:01 GMT
	I0916 10:57:01.323808  157008 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"395","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4975 chars]
	I0916 10:57:01.520613  157008 request.go:632] Waited for 196.362213ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:57:01.520671  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:57:01.520676  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:01.520683  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:01.520687  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:01.523097  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:01.523125  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:01.523132  157008 round_trippers.go:580]     Audit-Id: af1059ea-ae61-4fec-bce6-3ee0b7aa31fb
	I0916 10:57:01.523140  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:01.523143  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:01.523146  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:01.523149  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:01.523153  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:01 GMT
	I0916 10:57:01.523389  157008 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5021 chars]
	I0916 10:57:01.523698  157008 pod_ready.go:93] pod "kube-scheduler-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:57:01.523717  157008 pod_ready.go:82] duration metric: took 399.936355ms for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:57:01.523731  157008 pod_ready.go:39] duration metric: took 2.400654396s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:57:01.523781  157008 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:57:01.523832  157008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:57:01.535208  157008 system_svc.go:56] duration metric: took 11.422237ms WaitForService to wait for kubelet
	I0916 10:57:01.535243  157008 kubeadm.go:582] duration metric: took 3.017488281s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:57:01.535262  157008 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:57:01.720683  157008 request.go:632] Waited for 185.351495ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0916 10:57:01.720756  157008 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:57:01.720763  157008 round_trippers.go:469] Request Headers:
	I0916 10:57:01.720773  157008 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:57:01.720779  157008 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:57:01.723345  157008 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:57:01.723368  157008 round_trippers.go:577] Response Headers:
	I0916 10:57:01.723378  157008 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:57:01 GMT
	I0916 10:57:01.723384  157008 round_trippers.go:580]     Audit-Id: b53cf9d9-acc1-4546-8df1-8d8ea64f26a7
	I0916 10:57:01.723388  157008 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:57:01.723392  157008 round_trippers.go:580]     Content-Type: application/json
	I0916 10:57:01.723396  157008 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:57:01.723399  157008 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:57:01.723642  157008 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"477"},"items":[{"metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"393","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 10875 chars]
	I0916 10:57:01.724142  157008 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:57:01.724161  157008 node_conditions.go:123] node cpu capacity is 8
	I0916 10:57:01.724173  157008 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:57:01.724181  157008 node_conditions.go:123] node cpu capacity is 8
	I0916 10:57:01.724188  157008 node_conditions.go:105] duration metric: took 188.919976ms to run NodePressure ...
	I0916 10:57:01.724204  157008 start.go:241] waiting for startup goroutines ...
	I0916 10:57:01.724240  157008 start.go:255] writing updated cluster config ...
	I0916 10:57:01.724528  157008 ssh_runner.go:195] Run: rm -f paused
	I0916 10:57:01.731531  157008 out.go:177] * Done! kubectl is now configured to use "multinode-079070" cluster and "default" namespace by default
	E0916 10:57:01.732850  157008 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
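
The run above ends with kubectl failing with "exec format error", which typically means the /usr/local/bin/kubectl binary was built for a different architecture than the host; the readiness waits themselves completed cleanly. The recurring "Waited ... due to client-side throttling" entries come from client-go's default token-bucket rate limiter (QPS 5, burst 10 on rest.Config), not from API-server priority and fairness, and each pod_ready.go block is a poll of one system pod until its Ready condition is True. A minimal Go sketch of that wait pattern with the rate limits raised; this assumes the standard k8s.io/client-go API and is illustrative, not minikube's actual pod_ready.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls one pod until its Ready condition is True,
    // mirroring the pod_ready.go waits in the log above.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // every Get also passes through the client-side rate limiter
        }
        return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // defaults are 5 and 10; raising them avoids the
        cfg.Burst = 100 // "Waited ... due to client-side throttling" messages
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-2vhmt", 6*time.Minute); err != nil {
            panic(err)
        }
    }

The 500ms sleep roughly matches the cadence visible in the timestamps above; minikube's own loop also interleaves node lookups, which is why the throttler keeps delaying requests.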
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	8414e0e62b35b       8c811b4aec35f       50 seconds ago       Running             busybox                   0                   10183dc0f9d0a       busybox-7dff88458-pjlvx
	8954864d99d22       c69fa2e9cbf5f       About a minute ago   Running             coredns                   0                   fa69986f2f5d5       coredns-7c65d6cfc9-ft9gh
	269042fd7e065       6e38f40d628db       About a minute ago   Running             storage-provisioner       0                   097580079dfa7       storage-provisioner
	de61885ae0251       12968670680f4       About a minute ago   Running             kindnet-cni               0                   a9b3bc3ef2872       kindnet-flmdv
	809210a041e03       60c005f310ff3       About a minute ago   Running             kube-proxy                0                   d6e6b6a3008e8       kube-proxy-2vhmt
	941f1dc8e3837       175ffd71cce3d       About a minute ago   Running             kube-controller-manager   0                   84635e5713cec       kube-controller-manager-multinode-079070
	0bc7fe20ff6ae       2e96e5913fc06       About a minute ago   Running             etcd                      0                   a53811583dd27       etcd-multinode-079070
	5d29b7e4482f8       9aa1fad941575       About a minute ago   Running             kube-scheduler            0                   b33679bbe5cbf       kube-scheduler-multinode-079070
	411c657184dfd       6bab7719df100       About a minute ago   Running             kube-apiserver            0                   c43b3a5fe0f9f       kube-apiserver-multinode-079070
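
The table above is a crictl-style container listing; the same data is exposed over the node's CRI endpoint (unix:///run/containerd/containerd.sock, per the cri-socket annotation) through the ListContainers RPC. A self-contained sketch using the standard k8s.io/cri-api gRPC stubs; the output formatting is illustrative:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial the same CRI socket named in the node annotations.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtime.NewRuntimeServiceClient(conn)
        resp, err := rt.ListContainers(context.Background(), &runtime.ListContainersRequest{})
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            // Columns roughly matching the table above: id, image ref, state, name.
            fmt.Printf("%.13s  %.13s  %s  %s\n", c.Id, c.ImageRef, c.State, c.Metadata.Name)
        }
    }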
	
	
	==> containerd <==
	Sep 16 10:56:42 multinode-079070 containerd[863]: time="2024-09-16T10:56:42.893918371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:56:42 multinode-079070 containerd[863]: time="2024-09-16T10:56:42.893933858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:56:42 multinode-079070 containerd[863]: time="2024-09-16T10:56:42.894036742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:56:42 multinode-079070 containerd[863]: time="2024-09-16T10:56:42.943242149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ft9gh,Uid:8052b6a1-7257-44d4-a318-740afd039d2c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa69986f2f5d5faeb3b57e3dd348714100794668735a682dfbb154a829d8612d\""
	Sep 16 10:56:42 multinode-079070 containerd[863]: time="2024-09-16T10:56:42.946054211Z" level=info msg="CreateContainer within sandbox \"fa69986f2f5d5faeb3b57e3dd348714100794668735a682dfbb154a829d8612d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 16 10:56:42 multinode-079070 containerd[863]: time="2024-09-16T10:56:42.958624686Z" level=info msg="CreateContainer within sandbox \"fa69986f2f5d5faeb3b57e3dd348714100794668735a682dfbb154a829d8612d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8954864d99d2239b93c763ac312c7353b37bfba4eb693480619025ea3402616f\""
	Sep 16 10:56:42 multinode-079070 containerd[863]: time="2024-09-16T10:56:42.959194391Z" level=info msg="StartContainer for \"8954864d99d2239b93c763ac312c7353b37bfba4eb693480619025ea3402616f\""
	Sep 16 10:56:43 multinode-079070 containerd[863]: time="2024-09-16T10:56:43.003984911Z" level=info msg="StartContainer for \"8954864d99d2239b93c763ac312c7353b37bfba4eb693480619025ea3402616f\" returns successfully"
	Sep 16 10:57:02 multinode-079070 containerd[863]: time="2024-09-16T10:57:02.668464026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7dff88458-pjlvx,Uid:e697a697-12c1-405c-bc2e-fa881b5fd008,Namespace:default,Attempt:0,}"
	Sep 16 10:57:02 multinode-079070 containerd[863]: time="2024-09-16T10:57:02.705293950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 10:57:02 multinode-079070 containerd[863]: time="2024-09-16T10:57:02.705365581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 10:57:02 multinode-079070 containerd[863]: time="2024-09-16T10:57:02.705377176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:57:02 multinode-079070 containerd[863]: time="2024-09-16T10:57:02.705466070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 10:57:02 multinode-079070 containerd[863]: time="2024-09-16T10:57:02.751280360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7dff88458-pjlvx,Uid:e697a697-12c1-405c-bc2e-fa881b5fd008,Namespace:default,Attempt:0,} returns sandbox id \"10183dc0f9d0a512adcc7b4ca83b964d4c75224cc9c608e780553e39c4cb8d21\""
	Sep 16 10:57:02 multinode-079070 containerd[863]: time="2024-09-16T10:57:02.753499034Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.714040465Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.714991927Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28: active requests=0, bytes read=727667"
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.716562157Z" level=info msg="ImageCreate event name:\"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.718928282Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.719456183Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28\" with image id \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\", repo tag \"gcr.io/k8s-minikube/busybox:1.28\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12\", size \"725911\" in 1.965911634s"
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.719505047Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28\" returns image reference \"sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a\""
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.721597208Z" level=info msg="CreateContainer within sandbox \"10183dc0f9d0a512adcc7b4ca83b964d4c75224cc9c608e780553e39c4cb8d21\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.733291033Z" level=info msg="CreateContainer within sandbox \"10183dc0f9d0a512adcc7b4ca83b964d4c75224cc9c608e780553e39c4cb8d21\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"8414e0e62b35baa7bf8703924991d3cd9f3e9132c0609f0ef74a8091678aefea\""
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.733905454Z" level=info msg="StartContainer for \"8414e0e62b35baa7bf8703924991d3cd9f3e9132c0609f0ef74a8091678aefea\""
	Sep 16 10:57:04 multinode-079070 containerd[863]: time="2024-09-16T10:57:04.805432362Z" level=info msg="StartContainer for \"8414e0e62b35baa7bf8703924991d3cd9f3e9132c0609f0ef74a8091678aefea\" returns successfully"
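
These containerd entries trace the CRI call sequence behind the busybox pod: RunPodSandbox returns a sandbox id, PullImage fetches gcr.io/k8s-minikube/busybox:1.28 (1.97s here), then CreateContainer and StartContainer run inside that sandbox. A stripped-down sketch of the same sequence; the sandbox and container configs below are bare-minimum placeholders (kubelet sends far more) and the uid is hypothetical:

    package main

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx := context.Background()
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtime.NewRuntimeServiceClient(conn)
        img := runtime.NewImageServiceClient(conn)

        // 1. RunPodSandbox, matching the "RunPodSandbox ... returns sandbox id" lines.
        sbCfg := &runtime.PodSandboxConfig{
            Metadata: &runtime.PodSandboxMetadata{Name: "busybox", Namespace: "default", Uid: "demo-uid"},
        }
        sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sbCfg})
        if err != nil {
            panic(err)
        }

        // 2. PullImage, matching the PullImage/"Pulled image" entries.
        pulled, err := img.PullImage(ctx, &runtime.PullImageRequest{
            Image: &runtime.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28"},
        })
        if err != nil {
            panic(err)
        }

        // 3. CreateContainer within the sandbox, then 4. StartContainer.
        c, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtime.ContainerConfig{
                Metadata: &runtime.ContainerMetadata{Name: "busybox"},
                Image:    &runtime.ImageSpec{Image: pulled.ImageRef},
            },
            SandboxConfig: sbCfg,
        })
        if err != nil {
            panic(err)
        }
        if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
            panic(err)
        }
    }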
	
	
	==> coredns [8954864d99d2239b93c763ac312c7353b37bfba4eb693480619025ea3402616f] <==
	[INFO] 10.244.0.3:51056 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102475s
	[INFO] 10.244.1.2:41548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178899s
	[INFO] 10.244.1.2:39453 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001782363s
	[INFO] 10.244.1.2:56115 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130511s
	[INFO] 10.244.1.2:37210 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101251s
	[INFO] 10.244.1.2:55581 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001236938s
	[INFO] 10.244.1.2:35975 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081083s
	[INFO] 10.244.1.2:42877 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073809s
	[INFO] 10.244.1.2:41783 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084902s
	[INFO] 10.244.0.3:55155 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116031s
	[INFO] 10.244.0.3:59444 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115061s
	[INFO] 10.244.0.3:34308 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088507s
	[INFO] 10.244.0.3:40765 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088438s
	[INFO] 10.244.1.2:59446 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204406s
	[INFO] 10.244.1.2:52620 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138315s
	[INFO] 10.244.1.2:51972 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105158s
	[INFO] 10.244.1.2:47877 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087457s
	[INFO] 10.244.0.3:45741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142885s
	[INFO] 10.244.0.3:32935 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169213s
	[INFO] 10.244.0.3:49721 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000165206s
	[INFO] 10.244.0.3:45554 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109895s
	[INFO] 10.244.1.2:44123 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168559s
	[INFO] 10.244.1.2:55322 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107325s
	[INFO] 10.244.1.2:36098 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102498s
	[INFO] 10.244.1.2:57704 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095141s
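
Each coredns line above follows the log plugin's query format: client ip:port, query id, then "TYPE CLASS name. proto reqsize do bufsize", followed by the response code, response flags, response size, and duration. The NXDOMAIN answers for kubernetes.default and kubernetes.default.default.svc.cluster.local are the expected ndots search-path expansions tried before kubernetes.default.svc.cluster.local resolves. A small Go resolver sketch that reproduces these lookups against the cluster DNS service (10.96.0.10, inferred from the PTR queries above); illustrative only:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Point the resolver at the cluster DNS service instead of /etc/resolv.conf.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
                d := net.Dialer{Timeout: 2 * time.Second}
                return d.DialContext(ctx, "udp", "10.96.0.10:53")
            },
        }
        ctx := context.Background()
        // Produces queries like the "A IN kubernetes.default.svc.cluster.local." entries above.
        addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
        if err != nil {
            panic(err)
        }
        fmt.Println(addrs) // e.g. [10.96.0.1]
        // And the reverse lookup behind the "PTR IN 1.0.96.10.in-addr.arpa." entries.
        names, err := r.LookupAddr(ctx, "10.96.0.1")
        if err != nil {
            panic(err)
        }
        fmt.Println(names)
    }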
	
	
	==> describe nodes <==
	Name:               multinode-079070
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-079070
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-079070
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_56_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:56:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-079070
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:57:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:57:27 +0000   Mon, 16 Sep 2024 10:56:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:57:27 +0000   Mon, 16 Sep 2024 10:56:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:57:27 +0000   Mon, 16 Sep 2024 10:56:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:57:27 +0000   Mon, 16 Sep 2024 10:56:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-079070
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 36f88572435b40548db739493820dc2c
	  System UUID:                aacf5fc8-9d89-4df8-b6e3-7265bb86b554
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pjlvx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 coredns-7c65d6cfc9-ft9gh                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     84s
	  kube-system                 etcd-multinode-079070                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         89s
	  kube-system                 kindnet-flmdv                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      84s
	  kube-system                 kube-apiserver-multinode-079070             250m (3%)     0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-multinode-079070    200m (2%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-2vhmt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-multinode-079070             100m (1%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 83s   kube-proxy       
	  Normal   Starting                 90s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 90s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  89s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  89s   kubelet          Node multinode-079070 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    89s   kubelet          Node multinode-079070 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     89s   kubelet          Node multinode-079070 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           85s   node-controller  Node multinode-079070 event: Registered Node multinode-079070 in Controller
	
	
	Name:               multinode-079070-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-079070-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-079070
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_56_58_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:56:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-079070-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:57:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:57:28 +0000   Mon, 16 Sep 2024 10:56:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:57:28 +0000   Mon, 16 Sep 2024 10:56:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:57:28 +0000   Mon, 16 Sep 2024 10:56:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:57:28 +0000   Mon, 16 Sep 2024 10:56:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.3
	  Hostname:    multinode-079070-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 e82148e92f1f47e3b3415e006f73af99
	  System UUID:                230f6bd5-a1b9-46e1-be41-9ec64c608739
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x6h7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kindnet-fs5x4              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-proxy-xkr65           0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Warning  CgroupV1                 58s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  58s (x2 over 58s)  kubelet          Node multinode-079070-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x2 over 58s)  kubelet          Node multinode-079070-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x2 over 58s)  kubelet          Node multinode-079070-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  58s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                57s                kubelet          Node multinode-079070-m02 status is now: NodeReady
	  Normal   RegisteredNode           55s                node-controller  Node multinode-079070-m02 event: Registered Node multinode-079070-m02 in Controller
	
	
	Name:               multinode-079070-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-079070-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-079070
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_57_29_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:57:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-079070-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:57:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:57:54 +0000   Mon, 16 Sep 2024 10:57:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:57:54 +0000   Mon, 16 Sep 2024 10:57:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:57:54 +0000   Mon, 16 Sep 2024 10:57:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:57:54 +0000   Mon, 16 Sep 2024 10:57:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.4
	  Hostname:    multinode-079070-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 eb932199f8eb4679b125253f5cd938eb
	  System UUID:                63b9436a-6158-4245-a785-e7aa6a2fcca8
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-kxnzq       100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-proxy-9z4qh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   NodeHasSufficientPID     26s (x2 over 26s)  kubelet          Node multinode-079070-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    26s (x2 over 26s)  kubelet          Node multinode-079070-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  26s (x2 over 26s)  kubelet          Node multinode-079070-m03 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           25s                node-controller  Node multinode-079070-m03 event: Registered Node multinode-079070-m03 in Controller
	  Normal   NodeReady                25s                kubelet          Node multinode-079070-m03 status is now: NodeReady
	  Normal   Starting                 7s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 7s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  1s (x7 over 7s)    kubelet          Node multinode-079070-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    1s (x7 over 7s)    kubelet          Node multinode-079070-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     1s (x7 over 7s)    kubelet          Node multinode-079070-m03 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.095971] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +5.951420] net_ratelimit: 6 callbacks suppressed
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.256004] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000002] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +7.935271] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.003990] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-c95c64bb41bd
	[  +0.000004] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.255992] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000006] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.4, on dev br-c95c64bb41bd
	[  +0.000001] ll header: 00000000: 02 42 bd 76 ab c4 02 42 c0 a8 31 02 08 00
	
	
	==> etcd [0bc7fe20ff6ae92cd3f996cddadca6ddb2788e2f661cd3c4b2f9fb33045bed71] <==
	{"level":"info","ts":"2024-09-16T10:56:21.548252Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:56:21.548288Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:56:21.548321Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:56:21.548342Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:56:22.035573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T10:56:22.035616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T10:56:22.035652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2024-09-16T10:56:22.035677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:56:22.035691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-09-16T10:56:22.035701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:56:22.035712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-09-16T10:56:22.036773Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-079070 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:56:22.036773Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:56:22.036802Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:56:22.036801Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:56:22.037065Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:56:22.037130Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:56:22.037464Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:56:22.037668Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:56:22.037772Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:56:22.037989Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:56:22.037989Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:56:22.038884Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-09-16T10:56:22.038985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:56:48.788053Z","caller":"traceutil/trace.go:171","msg":"trace[1037408987] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"200.765868ms","start":"2024-09-16T10:56:48.587270Z","end":"2024-09-16T10:56:48.788036Z","steps":["trace[1037408987] 'process raft request'  (duration: 200.648474ms)"],"step_count":1}
	
	
	==> kernel <==
	 10:57:56 up 40 min,  0 users,  load average: 0.90, 1.29, 1.09
	Linux multinode-079070 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [de61885ae02518041c7aa7ce71f66fe6f83e66c09666b89a7765dd6c5955ef2e] <==
	I0916 10:57:12.827945       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:57:12.827950       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:57:22.822406       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:57:22.822444       1 main.go:299] handling current node
	I0916 10:57:22.822468       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:57:22.822491       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:57:32.820303       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:57:32.820363       1 main.go:299] handling current node
	I0916 10:57:32.820385       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:57:32.820394       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:57:32.820565       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:57:32.820582       1 main.go:322] Node multinode-079070-m03 has CIDR [10.244.2.0/24] 
	I0916 10:57:32.820644       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.67.4 Flags: [] Table: 0} 
	I0916 10:57:42.827816       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:57:42.827849       1 main.go:299] handling current node
	I0916 10:57:42.827865       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:57:42.827871       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:57:42.827984       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:57:42.827997       1 main.go:322] Node multinode-079070-m03 has CIDR [10.244.2.0/24] 
	I0916 10:57:52.828123       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:57:52.828162       1 main.go:299] handling current node
	I0916 10:57:52.828178       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:57:52.828183       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:57:52.828306       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:57:52.828313       1 main.go:322] Node multinode-079070-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [411c657184dfd15c5a637bda842998291203948392b41c07d2e8b35719214e87] <==
	I0916 10:56:24.478924       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 10:56:24.483098       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 10:56:24.483123       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:56:24.887180       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:56:24.923351       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:56:25.030521       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 10:56:25.037379       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0916 10:56:25.038608       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:56:25.042579       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:56:25.548706       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:56:25.953503       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:56:25.964413       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:56:25.974975       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:56:31.130667       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:56:31.150004       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0916 10:57:18.122976       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35682: use of closed network connection
	E0916 10:57:18.268644       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35704: use of closed network connection
	E0916 10:57:18.422165       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35718: use of closed network connection
	E0916 10:57:18.568802       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35742: use of closed network connection
	E0916 10:57:18.713040       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35752: use of closed network connection
	E0916 10:57:18.854979       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35772: use of closed network connection
	E0916 10:57:19.111050       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35800: use of closed network connection
	E0916 10:57:19.253105       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35822: use of closed network connection
	E0916 10:57:19.403005       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35840: use of closed network connection
	E0916 10:57:19.547708       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35870: use of closed network connection
	
	
	==> kube-controller-manager [941f1dc8e383770d56fc04131cd6e118a0b22f2035d16d7cd123273e0f80863c] <==
	I0916 10:57:00.349486       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-079070-m02"
	I0916 10:57:02.363803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.564249ms"
	I0916 10:57:02.368707       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.838562ms"
	I0916 10:57:02.368809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.079µs"
	I0916 10:57:02.373194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.291µs"
	I0916 10:57:02.377414       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.53µs"
	I0916 10:57:05.024697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.41303ms"
	I0916 10:57:05.024803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.231µs"
	I0916 10:57:17.736156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.686451ms"
	I0916 10:57:17.736242       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.644µs"
	I0916 10:57:27.562728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070"
	I0916 10:57:28.354952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m02"
	I0916 10:57:29.133862       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-079070-m03\" does not exist"
	I0916 10:57:29.133865       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-079070-m02"
	I0916 10:57:29.139577       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-079070-m03" podCIDRs=["10.244.2.0/24"]
	I0916 10:57:29.139620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:29.139698       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:29.145420       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:29.203923       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:29.443782       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:30.057860       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:30.057909       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-079070-m02"
	I0916 10:57:30.065546       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:30.353600       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-079070-m03"
	I0916 10:57:54.399879       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	
	
	==> kube-proxy [809210a041e030e61062aa021eb36041df90e322c3257f94c546c420614699bc] <==
	I0916 10:56:32.029982       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:56:32.179672       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	E0916 10:56:32.179750       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:56:32.234955       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:56:32.235009       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:56:32.237569       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:56:32.237995       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:56:32.238032       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:56:32.239678       1 config.go:199] "Starting service config controller"
	I0916 10:56:32.239727       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:56:32.239777       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:56:32.239783       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:56:32.240007       1 config.go:328] "Starting node config controller"
	I0916 10:56:32.240016       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:56:32.340062       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:56:32.340082       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:56:32.340144       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5d29b7e4482f874fecde10cfcd42e99ca36d060f25d2e8e7a8110ea495ea8583] <==
	W0916 10:56:23.626494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:56:23.626538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:23.626572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:56:23.626619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:23.626626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:56:23.626673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:23.626691       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:56:23.626732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:23.626711       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:56:23.626782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.460004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:56:24.460050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.468721       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:56:24.468769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.515374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:56:24.515416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.539117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:56:24.539157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.708195       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:56:24.708249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.711434       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:56:24.711474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.728071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:56:24.728136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:56:25.122409       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: I0916 10:56:31.524491    1627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8fvl\" (UniqueName: \"kubernetes.io/projected/3bfb600a-3b88-4834-beac-acc911b78ef1-kube-api-access-j8fvl\") pod \"coredns-7c65d6cfc9-ql4g8\" (UID: \"3bfb600a-3b88-4834-beac-acc911b78ef1\") " pod="kube-system/coredns-7c65d6cfc9-ql4g8"
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: I0916 10:56:31.524520    1627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnfv2\" (UniqueName: \"kubernetes.io/projected/8052b6a1-7257-44d4-a318-740afd039d2c-kube-api-access-nnfv2\") pod \"coredns-7c65d6cfc9-ft9gh\" (UID: \"8052b6a1-7257-44d4-a318-740afd039d2c\") " pod="kube-system/coredns-7c65d6cfc9-ft9gh"
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: E0916 10:56:31.828082    1627 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"601406ecb567f37a2cd3807f254a99f738c45d74d3eb998856b79fc12b5a5c0e\": failed to find network info for sandbox \"601406ecb567f37a2cd3807f254a99f738c45d74d3eb998856b79fc12b5a5c0e\""
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: E0916 10:56:31.828160    1627 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"601406ecb567f37a2cd3807f254a99f738c45d74d3eb998856b79fc12b5a5c0e\": failed to find network info for sandbox \"601406ecb567f37a2cd3807f254a99f738c45d74d3eb998856b79fc12b5a5c0e\"" pod="kube-system/coredns-7c65d6cfc9-ql4g8"
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: E0916 10:56:31.849132    1627 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\": failed to find network info for sandbox \"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\""
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: E0916 10:56:31.849223    1627 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\": failed to find network info for sandbox \"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\"" pod="kube-system/coredns-7c65d6cfc9-ft9gh"
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: E0916 10:56:31.849254    1627 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\": failed to find network info for sandbox \"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\"" pod="kube-system/coredns-7c65d6cfc9-ft9gh"
	Sep 16 10:56:31 multinode-079070 kubelet[1627]: E0916 10:56:31.849316    1627 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-ft9gh_kube-system(8052b6a1-7257-44d4-a318-740afd039d2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-ft9gh_kube-system(8052b6a1-7257-44d4-a318-740afd039d2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\\\": failed to find network info for sandbox \\\"ed42e1245f47ac4d19e8c2086a6823fb11aa6146a3f2b70e2a2da781d70e8bc4\\\"\"" pod="kube-system/coredns-7c65d6cfc9-ft9gh" podUID="8052b6a1-7257-44d4-a318-740afd039d2c"
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.134826    1627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8fvl\" (UniqueName: \"kubernetes.io/projected/3bfb600a-3b88-4834-beac-acc911b78ef1-kube-api-access-j8fvl\") pod \"3bfb600a-3b88-4834-beac-acc911b78ef1\" (UID: \"3bfb600a-3b88-4834-beac-acc911b78ef1\") "
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.134913    1627 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bfb600a-3b88-4834-beac-acc911b78ef1-config-volume\") pod \"3bfb600a-3b88-4834-beac-acc911b78ef1\" (UID: \"3bfb600a-3b88-4834-beac-acc911b78ef1\") "
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.134985    1627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vbbr\" (UniqueName: \"kubernetes.io/projected/43862f2e-c773-468d-ab03-8b0bc0633ad4-kube-api-access-8vbbr\") pod \"storage-provisioner\" (UID: \"43862f2e-c773-468d-ab03-8b0bc0633ad4\") " pod="kube-system/storage-provisioner"
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.135018    1627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/43862f2e-c773-468d-ab03-8b0bc0633ad4-tmp\") pod \"storage-provisioner\" (UID: \"43862f2e-c773-468d-ab03-8b0bc0633ad4\") " pod="kube-system/storage-provisioner"
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.135359    1627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bfb600a-3b88-4834-beac-acc911b78ef1-config-volume" (OuterVolumeSpecName: "config-volume") pod "3bfb600a-3b88-4834-beac-acc911b78ef1" (UID: "3bfb600a-3b88-4834-beac-acc911b78ef1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.137072    1627 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bfb600a-3b88-4834-beac-acc911b78ef1-kube-api-access-j8fvl" (OuterVolumeSpecName: "kube-api-access-j8fvl") pod "3bfb600a-3b88-4834-beac-acc911b78ef1" (UID: "3bfb600a-3b88-4834-beac-acc911b78ef1"). InnerVolumeSpecName "kube-api-access-j8fvl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.235816    1627 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bfb600a-3b88-4834-beac-acc911b78ef1-config-volume\") on node \"multinode-079070\" DevicePath \"\""
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.235869    1627 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j8fvl\" (UniqueName: \"kubernetes.io/projected/3bfb600a-3b88-4834-beac-acc911b78ef1-kube-api-access-j8fvl\") on node \"multinode-079070\" DevicePath \"\""
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.949310    1627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=0.949290109 podStartE2EDuration="949.290109ms" podCreationTimestamp="2024-09-16 10:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:56:32.949138814 +0000 UTC m=+7.174161919" watchObservedRunningTime="2024-09-16 10:56:32.949290109 +0000 UTC m=+7.174313215"
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.960140    1627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-flmdv" podStartSLOduration=1.960118484 podStartE2EDuration="1.960118484s" podCreationTimestamp="2024-09-16 10:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:56:32.959956196 +0000 UTC m=+7.184979302" watchObservedRunningTime="2024-09-16 10:56:32.960118484 +0000 UTC m=+7.185141588"
	Sep 16 10:56:32 multinode-079070 kubelet[1627]: I0916 10:56:32.970526    1627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2vhmt" podStartSLOduration=1.970499873 podStartE2EDuration="1.970499873s" podCreationTimestamp="2024-09-16 10:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:56:32.970065634 +0000 UTC m=+7.195088740" watchObservedRunningTime="2024-09-16 10:56:32.970499873 +0000 UTC m=+7.195522979"
	Sep 16 10:56:33 multinode-079070 kubelet[1627]: I0916 10:56:33.861371    1627 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bfb600a-3b88-4834-beac-acc911b78ef1" path="/var/lib/kubelet/pods/3bfb600a-3b88-4834-beac-acc911b78ef1/volumes"
	Sep 16 10:56:36 multinode-079070 kubelet[1627]: I0916 10:56:36.446666    1627 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 10:56:36 multinode-079070 kubelet[1627]: I0916 10:56:36.447540    1627 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 10:56:43 multinode-079070 kubelet[1627]: I0916 10:56:43.977616    1627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ft9gh" podStartSLOduration=12.977590048 podStartE2EDuration="12.977590048s" podCreationTimestamp="2024-09-16 10:56:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 10:56:43.977398293 +0000 UTC m=+18.202421390" watchObservedRunningTime="2024-09-16 10:56:43.977590048 +0000 UTC m=+18.202613185"
	Sep 16 10:57:02 multinode-079070 kubelet[1627]: I0916 10:57:02.501783    1627 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2hmg\" (UniqueName: \"kubernetes.io/projected/e697a697-12c1-405c-bc2e-fa881b5fd008-kube-api-access-q2hmg\") pod \"busybox-7dff88458-pjlvx\" (UID: \"e697a697-12c1-405c-bc2e-fa881b5fd008\") " pod="default/busybox-7dff88458-pjlvx"
	Sep 16 10:57:05 multinode-079070 kubelet[1627]: I0916 10:57:05.019767    1627 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-7dff88458-pjlvx" podStartSLOduration=1.052026777 podStartE2EDuration="3.019715123s" podCreationTimestamp="2024-09-16 10:57:02 +0000 UTC" firstStartedPulling="2024-09-16 10:57:02.752765721 +0000 UTC m=+36.977788820" lastFinishedPulling="2024-09-16 10:57:04.720454068 +0000 UTC m=+38.945477166" observedRunningTime="2024-09-16 10:57:05.019580623 +0000 UTC m=+39.244603730" watchObservedRunningTime="2024-09-16 10:57:05.019715123 +0000 UTC m=+39.244738229"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-079070 -n multinode-079070
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-079070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context multinode-079070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (476.558µs)
helpers_test.go:263: kubectl --context multinode-079070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiNode/serial/StartAfterStop (10.56s)
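
The repeated "fork/exec /usr/local/bin/kubectl: exec format error" failures above point at the kubectl binary on the test host rather than at the cluster: the kernel refuses to execute the file, which on this amd64 runner typically means a wrong-architecture or corrupted download. A minimal way to confirm that on the host (illustrative commands, not part of the recorded run):

	file /usr/local/bin/kubectl               # should report "ELF 64-bit LSB executable, x86-64" on this host
	uname -m                                  # x86_64 expected for this amd64 runner
	head -c 4 /usr/local/bin/kubectl | xxd    # a valid ELF binary starts with the magic bytes 7f 45 4c 46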

TestMultiNode/serial/DeleteNode (7.78s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-079070 node delete m03: (4.552434209s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:436: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (497.34µs)
multinode_test.go:438: failed to run kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-079070
helpers_test.go:235: (dbg) docker inspect multinode-079070:

-- stdout --
	[
	    {
	        "Id": "1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2",
	        "Created": "2024-09-16T10:56:12.200290899Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 174328,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:58:21.987508872Z",
	            "FinishedAt": "2024-09-16T10:58:21.311600613Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/hosts",
	        "LogPath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2-json.log",
	        "Name": "/multinode-079070",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-079070:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-079070",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-079070",
	                "Source": "/var/lib/docker/volumes/multinode-079070/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-079070",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-079070",
	                "name.minikube.sigs.k8s.io": "multinode-079070",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "51b3ee97042f19edcf1485c4947bf52c07c74eba4cdff690f53679c088ac2e99",
	            "SandboxKey": "/var/run/docker/netns/51b3ee97042f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32928"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32929"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32932"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32930"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32931"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-079070": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null,
	                    "NetworkID": "49585fce923a48b44636990469ad4decadcc5b1b88fcdd63ced7ebb1e3971b52",
	                    "EndpointID": "ea5f15e5f3b777b4427ea380939cf5617f0679188f322ff672496f43585cff06",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "multinode-079070",
	                        "1f3af6522540"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
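
For reference, individual fields from the inspect dump above can be read back without parsing the full JSON by using docker inspect's Go-template formatter; a small sketch (illustrative, not part of the recorded run):

	docker inspect -f '{{.State.Status}}' multinode-079070
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' multinode-079070    # 192.168.67.2 above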
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-079070 -n multinode-079070
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-079070 logs -n 25: (1.624337776s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-079070 cp multinode-079070-m02:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile475858721/001/cp-test_multinode-079070-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp multinode-079070-m02:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070:/home/docker/cp-test_multinode-079070-m02_multinode-079070.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n multinode-079070 sudo cat                                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /home/docker/cp-test_multinode-079070-m02_multinode-079070.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp multinode-079070-m02:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03:/home/docker/cp-test_multinode-079070-m02_multinode-079070-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n multinode-079070-m03 sudo cat                                   | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /home/docker/cp-test_multinode-079070-m02_multinode-079070-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp testdata/cp-test.txt                                                | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp multinode-079070-m03:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile475858721/001/cp-test_multinode-079070-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp multinode-079070-m03:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070:/home/docker/cp-test_multinode-079070-m03_multinode-079070.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n multinode-079070 sudo cat                                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /home/docker/cp-test_multinode-079070-m03_multinode-079070.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp multinode-079070-m03:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m02:/home/docker/cp-test_multinode-079070-m03_multinode-079070-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n multinode-079070-m02 sudo cat                                   | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /home/docker/cp-test_multinode-079070-m03_multinode-079070-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-079070 node stop m03                                                          | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	| node    | multinode-079070 node start                                                             | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-079070                                                                | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC |                     |
	| stop    | -p multinode-079070                                                                     | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:58 UTC |
	| start   | -p multinode-079070                                                                     | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:58 UTC | 16 Sep 24 10:59 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-079070                                                                | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:59 UTC |                     |
	| node    | multinode-079070 node delete                                                            | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:59 UTC | 16 Sep 24 10:59 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
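
	The cp rows above are each paired with an `ssh -n <node> sudo cat` row: every copy into a node is verified by reading the file back. A minimal sketch of that round-trip in shell, using the profile and node names from the table (the table records arguments only, so the exact `minikube -p` flag placement here is an assumption):

	    # Copy a local file into node m02, then read it back to verify the transfer.
	    minikube -p multinode-079070 cp testdata/cp-test.txt multinode-079070-m02:/home/docker/cp-test.txt
	    minikube -p multinode-079070 ssh -n multinode-079070-m02 sudo cat /home/docker/cp-test.txt

	    # Node-to-node copies name the source node explicitly and are verified the same way.
	    minikube -p multinode-079070 cp multinode-079070-m02:/home/docker/cp-test.txt multinode-079070:/home/docker/cp-test_multinode-079070-m02_multinode-079070.txt
	    minikube -p multinode-079070 ssh -n multinode-079070 sudo cat /home/docker/cp-test_multinode-079070-m02_multinode-079070.txt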
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:58:21
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:58:21.635278  174032 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:58:21.635519  174032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:58:21.635527  174032 out.go:358] Setting ErrFile to fd 2...
	I0916 10:58:21.635531  174032 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:58:21.635760  174032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:58:21.636372  174032 out.go:352] Setting JSON to false
	I0916 10:58:21.637464  174032 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2446,"bootTime":1726481856,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:58:21.637565  174032 start.go:139] virtualization: kvm guest
	I0916 10:58:21.642505  174032 out.go:177] * [multinode-079070] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:58:21.644212  174032 notify.go:220] Checking for updates...
	I0916 10:58:21.644234  174032 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:58:21.646343  174032 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:58:21.647800  174032 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:58:21.649171  174032 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:58:21.650539  174032 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:58:21.651774  174032 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:58:21.653637  174032 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:58:21.653736  174032 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:58:21.677329  174032 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:58:21.677425  174032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:58:21.724754  174032 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:58:21.715044141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:58:21.724911  174032 docker.go:318] overlay module found
	I0916 10:58:21.727205  174032 out.go:177] * Using the docker driver based on existing profile
	I0916 10:58:21.728745  174032 start.go:297] selected driver: docker
	I0916 10:58:21.728768  174032 start.go:901] validating driver "docker" against &{Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.67.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:58:21.728906  174032 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:58:21.729000  174032 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:58:21.777319  174032 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:58:21.768498092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:58:21.777961  174032 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:58:21.777989  174032 cni.go:84] Creating CNI manager for ""
	I0916 10:58:21.778048  174032 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 10:58:21.778105  174032 start.go:340] cluster config:
	{Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.67.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:58:21.780287  174032 out.go:177] * Starting "multinode-079070" primary control-plane node in "multinode-079070" cluster
	I0916 10:58:21.781936  174032 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:58:21.783414  174032 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:58:21.784736  174032 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:58:21.784767  174032 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:58:21.784793  174032 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:58:21.784809  174032 cache.go:56] Caching tarball of preloaded images
	I0916 10:58:21.784894  174032 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:58:21.784909  174032 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:58:21.785095  174032 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	W0916 10:58:21.804722  174032 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:58:21.804739  174032 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:58:21.804829  174032 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:58:21.804841  174032 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:58:21.804845  174032 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:58:21.804852  174032 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:58:21.804860  174032 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:58:21.806033  174032 image.go:273] response: 
	I0916 10:58:21.857798  174032 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:58:21.857837  174032 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:58:21.857875  174032 start.go:360] acquireMachinesLock for multinode-079070: {Name:mka8d048a8e19e1d22189c5e81470c7f2336c084 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:58:21.857965  174032 start.go:364] duration metric: took 51.322µs to acquireMachinesLock for "multinode-079070"
	I0916 10:58:21.857988  174032 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:58:21.857993  174032 fix.go:54] fixHost starting: 
	I0916 10:58:21.858212  174032 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:58:21.875440  174032 fix.go:112] recreateIfNeeded on multinode-079070: state=Stopped err=<nil>
	W0916 10:58:21.875472  174032 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:58:21.877739  174032 out.go:177] * Restarting existing docker container for "multinode-079070" ...
	I0916 10:58:21.879023  174032 cli_runner.go:164] Run: docker start multinode-079070
	I0916 10:58:22.149813  174032 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:58:22.168868  174032 kic.go:430] container "multinode-079070" state is running.
	I0916 10:58:22.169369  174032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070
	I0916 10:58:22.189557  174032 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 10:58:22.189866  174032 machine.go:93] provisionDockerMachine start ...
	I0916 10:58:22.189943  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:58:22.208346  174032 main.go:141] libmachine: Using SSH client type: native
	I0916 10:58:22.208579  174032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32928 <nil> <nil>}
	I0916 10:58:22.208598  174032 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:58:22.209268  174032 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36966->127.0.0.1:32928: read: connection reset by peer
	I0916 10:58:25.343361  174032 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070
	
	I0916 10:58:25.343390  174032 ubuntu.go:169] provisioning hostname "multinode-079070"
	I0916 10:58:25.343445  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:58:25.361382  174032 main.go:141] libmachine: Using SSH client type: native
	I0916 10:58:25.361602  174032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32928 <nil> <nil>}
	I0916 10:58:25.361622  174032 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-079070 && echo "multinode-079070" | sudo tee /etc/hostname
	I0916 10:58:25.506659  174032 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070
	
	I0916 10:58:25.506740  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:58:25.527424  174032 main.go:141] libmachine: Using SSH client type: native
	I0916 10:58:25.527615  174032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32928 <nil> <nil>}
	I0916 10:58:25.527635  174032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-079070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-079070/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-079070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:58:25.660082  174032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 10:58:25.660114  174032 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:58:25.660150  174032 ubuntu.go:177] setting up certificates
	I0916 10:58:25.660165  174032 provision.go:84] configureAuth start
	I0916 10:58:25.660225  174032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070
	I0916 10:58:25.678040  174032 provision.go:143] copyHostCerts
	I0916 10:58:25.678087  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:58:25.678128  174032 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:58:25.678139  174032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:58:25.678221  174032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:58:25.678308  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:58:25.678329  174032 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:58:25.678337  174032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:58:25.678368  174032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:58:25.678415  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:58:25.678435  174032 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:58:25.678442  174032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:58:25.678468  174032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:58:25.678539  174032 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.multinode-079070 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-079070]
	I0916 10:58:25.866719  174032 provision.go:177] copyRemoteCerts
	I0916 10:58:25.866802  174032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:58:25.866848  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:58:25.884491  174032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:58:25.980422  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:58:25.980487  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:58:26.002234  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:58:26.002300  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 10:58:26.023682  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:58:26.023761  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 10:58:26.045902  174032 provision.go:87] duration metric: took 385.718837ms to configureAuth
	I0916 10:58:26.045930  174032 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:58:26.046167  174032 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:58:26.046182  174032 machine.go:96] duration metric: took 3.856297879s to provisionDockerMachine
	I0916 10:58:26.046191  174032 start.go:293] postStartSetup for "multinode-079070" (driver="docker")
	I0916 10:58:26.046203  174032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:58:26.046255  174032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:58:26.046299  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:58:26.064286  174032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:58:26.164974  174032 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:58:26.168254  174032 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:58:26.168275  174032 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:58:26.168282  174032 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:58:26.168289  174032 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:58:26.168296  174032 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:58:26.168301  174032 command_runner.go:130] > ID=ubuntu
	I0916 10:58:26.168307  174032 command_runner.go:130] > ID_LIKE=debian
	I0916 10:58:26.168314  174032 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:58:26.168322  174032 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:58:26.168333  174032 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:58:26.168346  174032 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:58:26.168357  174032 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:58:26.168428  174032 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:58:26.168452  174032 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:58:26.168459  174032 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:58:26.168467  174032 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:58:26.168476  174032 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:58:26.168560  174032 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:58:26.168647  174032 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:58:26.168659  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:58:26.168745  174032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:58:26.177098  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:58:26.199532  174032 start.go:296] duration metric: took 153.327465ms for postStartSetup
	I0916 10:58:26.199600  174032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:58:26.199632  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:58:26.217956  174032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:58:26.308097  174032 command_runner.go:130] > 32%
	I0916 10:58:26.308404  174032 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:58:26.312638  174032 command_runner.go:130] > 201G
	I0916 10:58:26.312674  174032 fix.go:56] duration metric: took 4.45467854s for fixHost
	I0916 10:58:26.312686  174032 start.go:83] releasing machines lock for "multinode-079070", held for 4.454708714s
	I0916 10:58:26.312758  174032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070
	I0916 10:58:26.330760  174032 ssh_runner.go:195] Run: cat /version.json
	I0916 10:58:26.330812  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:58:26.330843  174032 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:58:26.330907  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:58:26.348084  174032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:58:26.348725  174032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32928 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:58:26.514720  174032 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:58:26.516862  174032 command_runner.go:130] > {"iso_version": "v1.34.0-1726281733-19643", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "f890713149c79cf50e25c13e6a5c0470aa0f0450"}
	I0916 10:58:26.517002  174032 ssh_runner.go:195] Run: systemctl --version
	I0916 10:58:26.520990  174032 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0916 10:58:26.521029  174032 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0916 10:58:26.521120  174032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:58:26.524942  174032 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0916 10:58:26.524968  174032 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:58:26.524975  174032 command_runner.go:130] > Device: 35h/53d	Inode: 809407      Links: 1
	I0916 10:58:26.524981  174032 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:58:26.524987  174032 command_runner.go:130] > Access: 2024-09-16 10:56:14.297749394 +0000
	I0916 10:58:26.524992  174032 command_runner.go:130] > Modify: 2024-09-16 10:56:14.273747279 +0000
	I0916 10:58:26.524997  174032 command_runner.go:130] > Change: 2024-09-16 10:56:14.273747279 +0000
	I0916 10:58:26.525001  174032 command_runner.go:130] >  Birth: 2024-09-16 10:56:14.273747279 +0000
	I0916 10:58:26.525112  174032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:58:26.541554  174032 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
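
	That `find ... -exec` one-liner is dense; unrolled into an equivalent shell sketch (same logic, not the literal minikube source), the loopback patch reads:

	    # Patch every active top-level *loopback.conf* under /etc/cni/net.d.
	    for f in /etc/cni/net.d/*loopback.conf*; do
	      case "$f" in *.mk_disabled) continue ;; esac    # skip disabled configs
	      grep -q loopback "$f" || continue               # only touch real loopback configs
	      # Insert a "name" field if one is missing, just above the "type": "loopback" line.
	      grep -q name "$f" || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$f"
	      # Pin the config to cniVersion 1.0.0 so the runtime accepts it.
	      sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$f"
	    done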
	I0916 10:58:26.541629  174032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:58:26.550005  174032 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:58:26.550031  174032 start.go:495] detecting cgroup driver to use...
	I0916 10:58:26.550065  174032 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:58:26.550106  174032 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:58:26.563286  174032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:58:26.574629  174032 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:58:26.574693  174032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:58:26.586521  174032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:58:26.597154  174032 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:58:26.670994  174032 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:58:26.746938  174032 docker.go:233] disabling docker service ...
	I0916 10:58:26.746994  174032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:58:26.758582  174032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:58:26.769126  174032 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:58:26.840398  174032 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:58:26.911995  174032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
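
	The sequence above is the standard systemd recipe for retiring a competing runtime: stop the socket first so it cannot re-activate the service, then stop, disable, and mask the unit. The docker half, condensed (the cri-docker block just before it follows the same pattern):

	    # Stop socket activation first, then the service; mask it so nothing restarts it.
	    sudo systemctl stop -f docker.socket
	    sudo systemctl stop -f docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
	    # Non-zero exit here means the daemon is down, which is the desired state.
	    sudo systemctl is-active --quiet docker || echo "docker is inactive"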
	I0916 10:58:26.922906  174032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:58:26.937459  174032 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0916 10:58:26.938458  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:58:26.948747  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:58:26.958488  174032 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:58:26.958572  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:58:26.968638  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:58:26.978072  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:58:26.987280  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:58:26.996183  174032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:58:27.004513  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:58:27.013626  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:58:27.022628  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
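
	After this run of sed edits, /etc/containerd/config.toml should carry the values just enforced: the pause 3.10 sandbox image, SystemdCgroup = false (matching the cgroupfs driver detected on the host), the v2 runc shim, conf_dir = "/etc/cni/net.d", and enable_unprivileged_ports = true. A quick spot-check (expected values inferred from the seds above, not captured from the node):

	    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml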
	I0916 10:58:27.031758  174032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:58:27.039309  174032 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:58:27.039368  174032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:58:27.047077  174032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:58:27.116898  174032 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:58:27.215887  174032 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:58:27.215958  174032 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:58:27.219470  174032 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0916 10:58:27.219499  174032 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:58:27.219509  174032 command_runner.go:130] > Device: 40h/64d	Inode: 160         Links: 1
	I0916 10:58:27.219520  174032 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:58:27.219527  174032 command_runner.go:130] > Access: 2024-09-16 10:58:27.173460321 +0000
	I0916 10:58:27.219535  174032 command_runner.go:130] > Modify: 2024-09-16 10:58:27.173460321 +0000
	I0916 10:58:27.219540  174032 command_runner.go:130] > Change: 2024-09-16 10:58:27.173460321 +0000
	I0916 10:58:27.219544  174032 command_runner.go:130] >  Birth: -
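
	The 60-second socket wait logged above can be reproduced with a simple poll loop (a shell sketch; minikube itself does this in Go):

	    # Poll up to 60s for the containerd socket to reappear after the restart.
	    SOCK=/run/containerd/containerd.sock
	    for _ in $(seq 1 60); do
	      [ -S "$SOCK" ] && break
	      sleep 1
	    done
	    stat "$SOCK"   # fails if the socket never came up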
	I0916 10:58:27.219561  174032 start.go:563] Will wait 60s for crictl version
	I0916 10:58:27.219610  174032 ssh_runner.go:195] Run: which crictl
	I0916 10:58:27.222699  174032 command_runner.go:130] > /usr/bin/crictl
	I0916 10:58:27.222782  174032 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:58:27.254034  174032 command_runner.go:130] > Version:  0.1.0
	I0916 10:58:27.254057  174032 command_runner.go:130] > RuntimeName:  containerd
	I0916 10:58:27.254063  174032 command_runner.go:130] > RuntimeVersion:  1.7.22
	I0916 10:58:27.254067  174032 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:58:27.255968  174032 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:58:27.256030  174032 ssh_runner.go:195] Run: containerd --version
	I0916 10:58:27.276835  174032 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:58:27.278157  174032 ssh_runner.go:195] Run: containerd --version
	I0916 10:58:27.299925  174032 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:58:27.302611  174032 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:58:27.304226  174032 cli_runner.go:164] Run: docker network inspect multinode-079070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
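
	The inspect call above packs a JSON template into a single line; the same Go template reflowed for readability (output is identical, including the trailing comma inside ContainerIPs):

	    docker network inspect multinode-079070 --format '{
	      "Name": "{{.Name}}",
	      "Driver": "{{.Driver}}",
	      "Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}",
	      "Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}",
	      "MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}},
	      "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]
	    }'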
	I0916 10:58:27.321161  174032 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:58:27.324651  174032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
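
	The /etc/hosts rewrite above drops any stale host.minikube.internal line, appends a fresh entry for the gateway IP, and only then copies the temp file into place. Expanded form of the same one-liner:

	    # Rebuild /etc/hosts with a single, current host.minikube.internal entry.
	    {
	      grep -v $'\thost.minikube.internal$' /etc/hosts   # keep everything except the old entry
	      printf '192.168.67.1\thost.minikube.internal\n'   # gateway IP taken from the log
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts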
	I0916 10:58:27.335060  174032 kubeadm.go:883] updating cluster {Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.67.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 10:58:27.335218  174032 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:58:27.335279  174032 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:58:27.366881  174032 command_runner.go:130] > {
	I0916 10:58:27.366910  174032 command_runner.go:130] >   "images": [
	I0916 10:58:27.366916  174032 command_runner.go:130] >     {
	I0916 10:58:27.366927  174032 command_runner.go:130] >       "id": "sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:58:27.366933  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.366938  174032 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:58:27.366941  174032 command_runner.go:130] >       ],
	I0916 10:58:27.366948  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.366956  174032 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:58:27.366960  174032 command_runner.go:130] >       ],
	I0916 10:58:27.366964  174032 command_runner.go:130] >       "size": "36793393",
	I0916 10:58:27.366971  174032 command_runner.go:130] >       "uid": null,
	I0916 10:58:27.366975  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.366980  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.366984  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.366990  174032 command_runner.go:130] >     },
	I0916 10:58:27.366994  174032 command_runner.go:130] >     {
	I0916 10:58:27.367004  174032 command_runner.go:130] >       "id": "sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 10:58:27.367010  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.367018  174032 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 10:58:27.367024  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367029  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.367038  174032 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 10:58:27.367043  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367048  174032 command_runner.go:130] >       "size": "725911",
	I0916 10:58:27.367054  174032 command_runner.go:130] >       "uid": null,
	I0916 10:58:27.367058  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.367064  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.367068  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.367074  174032 command_runner.go:130] >     },
	I0916 10:58:27.367077  174032 command_runner.go:130] >     {
	I0916 10:58:27.367085  174032 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:58:27.367092  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.367100  174032 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:58:27.367106  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367110  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.367124  174032 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0916 10:58:27.367129  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367134  174032 command_runner.go:130] >       "size": "9058936",
	I0916 10:58:27.367139  174032 command_runner.go:130] >       "uid": null,
	I0916 10:58:27.367144  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.367150  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.367154  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.367160  174032 command_runner.go:130] >     },
	I0916 10:58:27.367163  174032 command_runner.go:130] >     {
	I0916 10:58:27.367172  174032 command_runner.go:130] >       "id": "sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:58:27.367177  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.367183  174032 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:58:27.367188  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367193  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.367202  174032 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"
	I0916 10:58:27.367208  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367213  174032 command_runner.go:130] >       "size": "18562039",
	I0916 10:58:27.367218  174032 command_runner.go:130] >       "uid": null,
	I0916 10:58:27.367222  174032 command_runner.go:130] >       "username": "nonroot",
	I0916 10:58:27.367229  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.367233  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.367238  174032 command_runner.go:130] >     },
	I0916 10:58:27.367242  174032 command_runner.go:130] >     {
	I0916 10:58:27.367265  174032 command_runner.go:130] >       "id": "sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:58:27.367271  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.367275  174032 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:58:27.367281  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367286  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.367294  174032 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:58:27.367303  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367309  174032 command_runner.go:130] >       "size": "56909194",
	I0916 10:58:27.367313  174032 command_runner.go:130] >       "uid": {
	I0916 10:58:27.367319  174032 command_runner.go:130] >         "value": "0"
	I0916 10:58:27.367323  174032 command_runner.go:130] >       },
	I0916 10:58:27.367327  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.367333  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.367337  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.367343  174032 command_runner.go:130] >     },
	I0916 10:58:27.367346  174032 command_runner.go:130] >     {
	I0916 10:58:27.367354  174032 command_runner.go:130] >       "id": "sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:58:27.367358  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.367366  174032 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:58:27.367369  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367373  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.367383  174032 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:58:27.367388  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367392  174032 command_runner.go:130] >       "size": "28047142",
	I0916 10:58:27.367399  174032 command_runner.go:130] >       "uid": {
	I0916 10:58:27.367403  174032 command_runner.go:130] >         "value": "0"
	I0916 10:58:27.367410  174032 command_runner.go:130] >       },
	I0916 10:58:27.367414  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.367420  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.367424  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.367429  174032 command_runner.go:130] >     },
	I0916 10:58:27.367433  174032 command_runner.go:130] >     {
	I0916 10:58:27.367440  174032 command_runner.go:130] >       "id": "sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:58:27.367446  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.367451  174032 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:58:27.367457  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367461  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.367470  174032 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"
	I0916 10:58:27.367476  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367480  174032 command_runner.go:130] >       "size": "26221554",
	I0916 10:58:27.367486  174032 command_runner.go:130] >       "uid": {
	I0916 10:58:27.367490  174032 command_runner.go:130] >         "value": "0"
	I0916 10:58:27.367496  174032 command_runner.go:130] >       },
	I0916 10:58:27.367501  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.367508  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.367512  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.367517  174032 command_runner.go:130] >     },
	I0916 10:58:27.367520  174032 command_runner.go:130] >     {
	I0916 10:58:27.367529  174032 command_runner.go:130] >       "id": "sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:58:27.367533  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.367540  174032 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:58:27.367543  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367549  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.367557  174032 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"
	I0916 10:58:27.367562  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367567  174032 command_runner.go:130] >       "size": "30211884",
	I0916 10:58:27.367573  174032 command_runner.go:130] >       "uid": null,
	I0916 10:58:27.367577  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.367583  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.367588  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.367594  174032 command_runner.go:130] >     },
	I0916 10:58:27.367598  174032 command_runner.go:130] >     {
	I0916 10:58:27.367606  174032 command_runner.go:130] >       "id": "sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:58:27.367613  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.367617  174032 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:58:27.367623  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367627  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.367636  174032 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"
	I0916 10:58:27.367641  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367645  174032 command_runner.go:130] >       "size": "20177215",
	I0916 10:58:27.367651  174032 command_runner.go:130] >       "uid": {
	I0916 10:58:27.367655  174032 command_runner.go:130] >         "value": "0"
	I0916 10:58:27.367661  174032 command_runner.go:130] >       },
	I0916 10:58:27.367665  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.367670  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.367674  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.367679  174032 command_runner.go:130] >     },
	I0916 10:58:27.367683  174032 command_runner.go:130] >     {
	I0916 10:58:27.367697  174032 command_runner.go:130] >       "id": "sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:58:27.367703  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.367708  174032 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:58:27.367711  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367718  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.367724  174032 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:58:27.367730  174032 command_runner.go:130] >       ],
	I0916 10:58:27.367758  174032 command_runner.go:130] >       "size": "320368",
	I0916 10:58:27.367769  174032 command_runner.go:130] >       "uid": {
	I0916 10:58:27.367776  174032 command_runner.go:130] >         "value": "65535"
	I0916 10:58:27.367782  174032 command_runner.go:130] >       },
	I0916 10:58:27.367786  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.367791  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.367795  174032 command_runner.go:130] >       "pinned": true
	I0916 10:58:27.367802  174032 command_runner.go:130] >     }
	I0916 10:58:27.367805  174032 command_runner.go:130] >   ]
	I0916 10:58:27.367810  174032 command_runner.go:130] > }
	I0916 10:58:27.367960  174032 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:58:27.367971  174032 containerd.go:534] Images already preloaded, skipping extraction
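The JSON above is the CRI image list: each entry carries id, repoTags, repoDigests, size, uid/username (the user the image runs as, e.g. coredns runs as "nonroot"), and pinned. Only registry.k8s.io/pause:3.10 reports "pinned": true, which exempts the sandbox image from image garbage collection. A quick way to pull the pinned images out of the same output (a sketch; assumes jq is available wherever crictl is run):

    $ sudo crictl images --output json | jq -r '.images[] | select(.pinned) | .repoTags[]'
    registry.k8s.io/pause:3.10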
	I0916 10:58:27.368017  174032 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 10:58:27.398218  174032 command_runner.go:130] > {
	I0916 10:58:27.398244  174032 command_runner.go:130] >   "images": [
	I0916 10:58:27.398251  174032 command_runner.go:130] >     {
	I0916 10:58:27.398265  174032 command_runner.go:130] >       "id": "sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 10:58:27.398275  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.398283  174032 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 10:58:27.398288  174032 command_runner.go:130] >       ],
	I0916 10:58:27.398296  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.398316  174032 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 10:58:27.398324  174032 command_runner.go:130] >       ],
	I0916 10:58:27.398331  174032 command_runner.go:130] >       "size": "36793393",
	I0916 10:58:27.398338  174032 command_runner.go:130] >       "uid": null,
	I0916 10:58:27.398346  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.398354  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.398360  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.398367  174032 command_runner.go:130] >     },
	I0916 10:58:27.398372  174032 command_runner.go:130] >     {
	I0916 10:58:27.398387  174032 command_runner.go:130] >       "id": "sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 10:58:27.398396  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.398406  174032 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 10:58:27.398411  174032 command_runner.go:130] >       ],
	I0916 10:58:27.398420  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.398437  174032 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 10:58:27.398444  174032 command_runner.go:130] >       ],
	I0916 10:58:27.398451  174032 command_runner.go:130] >       "size": "725911",
	I0916 10:58:27.398460  174032 command_runner.go:130] >       "uid": null,
	I0916 10:58:27.398469  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.398477  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.398486  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.398491  174032 command_runner.go:130] >     },
	I0916 10:58:27.398499  174032 command_runner.go:130] >     {
	I0916 10:58:27.398512  174032 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 10:58:27.398522  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.398545  174032 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 10:58:27.398554  174032 command_runner.go:130] >       ],
	I0916 10:58:27.398560  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.398572  174032 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0916 10:58:27.398580  174032 command_runner.go:130] >       ],
	I0916 10:58:27.398593  174032 command_runner.go:130] >       "size": "9058936",
	I0916 10:58:27.398602  174032 command_runner.go:130] >       "uid": null,
	I0916 10:58:27.398608  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.398617  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.398625  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.398633  174032 command_runner.go:130] >     },
	I0916 10:58:27.398638  174032 command_runner.go:130] >     {
	I0916 10:58:27.398650  174032 command_runner.go:130] >       "id": "sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 10:58:27.398659  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.398670  174032 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 10:58:27.398678  174032 command_runner.go:130] >       ],
	I0916 10:58:27.398687  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.398701  174032 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"
	I0916 10:58:27.398709  174032 command_runner.go:130] >       ],
	I0916 10:58:27.398718  174032 command_runner.go:130] >       "size": "18562039",
	I0916 10:58:27.398726  174032 command_runner.go:130] >       "uid": null,
	I0916 10:58:27.398734  174032 command_runner.go:130] >       "username": "nonroot",
	I0916 10:58:27.398741  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.398749  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.398757  174032 command_runner.go:130] >     },
	I0916 10:58:27.398766  174032 command_runner.go:130] >     {
	I0916 10:58:27.398779  174032 command_runner.go:130] >       "id": "sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 10:58:27.398787  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.398797  174032 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 10:58:27.398805  174032 command_runner.go:130] >       ],
	I0916 10:58:27.398814  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.398828  174032 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 10:58:27.398840  174032 command_runner.go:130] >       ],
	I0916 10:58:27.398849  174032 command_runner.go:130] >       "size": "56909194",
	I0916 10:58:27.398860  174032 command_runner.go:130] >       "uid": {
	I0916 10:58:27.398868  174032 command_runner.go:130] >         "value": "0"
	I0916 10:58:27.398877  174032 command_runner.go:130] >       },
	I0916 10:58:27.398887  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.398897  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.398906  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.398914  174032 command_runner.go:130] >     },
	I0916 10:58:27.398923  174032 command_runner.go:130] >     {
	I0916 10:58:27.398933  174032 command_runner.go:130] >       "id": "sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 10:58:27.398938  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.398946  174032 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 10:58:27.398950  174032 command_runner.go:130] >       ],
	I0916 10:58:27.398955  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.398967  174032 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 10:58:27.398972  174032 command_runner.go:130] >       ],
	I0916 10:58:27.398977  174032 command_runner.go:130] >       "size": "28047142",
	I0916 10:58:27.398982  174032 command_runner.go:130] >       "uid": {
	I0916 10:58:27.398987  174032 command_runner.go:130] >         "value": "0"
	I0916 10:58:27.398992  174032 command_runner.go:130] >       },
	I0916 10:58:27.398997  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.399003  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.399010  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.399014  174032 command_runner.go:130] >     },
	I0916 10:58:27.399019  174032 command_runner.go:130] >     {
	I0916 10:58:27.399028  174032 command_runner.go:130] >       "id": "sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 10:58:27.399034  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.399042  174032 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 10:58:27.399049  174032 command_runner.go:130] >       ],
	I0916 10:58:27.399056  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.399069  174032 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"
	I0916 10:58:27.399076  174032 command_runner.go:130] >       ],
	I0916 10:58:27.399085  174032 command_runner.go:130] >       "size": "26221554",
	I0916 10:58:27.399091  174032 command_runner.go:130] >       "uid": {
	I0916 10:58:27.399100  174032 command_runner.go:130] >         "value": "0"
	I0916 10:58:27.399109  174032 command_runner.go:130] >       },
	I0916 10:58:27.399116  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.399126  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.399135  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.399144  174032 command_runner.go:130] >     },
	I0916 10:58:27.399153  174032 command_runner.go:130] >     {
	I0916 10:58:27.399166  174032 command_runner.go:130] >       "id": "sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 10:58:27.399174  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.399183  174032 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 10:58:27.399190  174032 command_runner.go:130] >       ],
	I0916 10:58:27.399196  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.399209  174032 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"
	I0916 10:58:27.399216  174032 command_runner.go:130] >       ],
	I0916 10:58:27.399223  174032 command_runner.go:130] >       "size": "30211884",
	I0916 10:58:27.399231  174032 command_runner.go:130] >       "uid": null,
	I0916 10:58:27.399239  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.399247  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.399256  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.399264  174032 command_runner.go:130] >     },
	I0916 10:58:27.399271  174032 command_runner.go:130] >     {
	I0916 10:58:27.399281  174032 command_runner.go:130] >       "id": "sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 10:58:27.399290  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.399299  174032 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 10:58:27.399307  174032 command_runner.go:130] >       ],
	I0916 10:58:27.399316  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.399326  174032 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"
	I0916 10:58:27.399334  174032 command_runner.go:130] >       ],
	I0916 10:58:27.399342  174032 command_runner.go:130] >       "size": "20177215",
	I0916 10:58:27.399352  174032 command_runner.go:130] >       "uid": {
	I0916 10:58:27.399362  174032 command_runner.go:130] >         "value": "0"
	I0916 10:58:27.399368  174032 command_runner.go:130] >       },
	I0916 10:58:27.399377  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.399386  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.399395  174032 command_runner.go:130] >       "pinned": false
	I0916 10:58:27.399404  174032 command_runner.go:130] >     },
	I0916 10:58:27.399413  174032 command_runner.go:130] >     {
	I0916 10:58:27.399435  174032 command_runner.go:130] >       "id": "sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 10:58:27.399444  174032 command_runner.go:130] >       "repoTags": [
	I0916 10:58:27.399453  174032 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 10:58:27.399459  174032 command_runner.go:130] >       ],
	I0916 10:58:27.399468  174032 command_runner.go:130] >       "repoDigests": [
	I0916 10:58:27.399480  174032 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 10:58:27.399488  174032 command_runner.go:130] >       ],
	I0916 10:58:27.399497  174032 command_runner.go:130] >       "size": "320368",
	I0916 10:58:27.399506  174032 command_runner.go:130] >       "uid": {
	I0916 10:58:27.399516  174032 command_runner.go:130] >         "value": "65535"
	I0916 10:58:27.399524  174032 command_runner.go:130] >       },
	I0916 10:58:27.399530  174032 command_runner.go:130] >       "username": "",
	I0916 10:58:27.399538  174032 command_runner.go:130] >       "spec": null,
	I0916 10:58:27.399547  174032 command_runner.go:130] >       "pinned": true
	I0916 10:58:27.399551  174032 command_runner.go:130] >     }
	I0916 10:58:27.399556  174032 command_runner.go:130] >   ]
	I0916 10:58:27.399563  174032 command_runner.go:130] > }
	I0916 10:58:27.400730  174032 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 10:58:27.400749  174032 cache_images.go:84] Images are preloaded, skipping loading
	I0916 10:58:27.400757  174032 kubeadm.go:934] updating node { 192.168.67.2 8443 v1.31.1 containerd true true} ...
	I0916 10:58:27.400847  174032 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-079070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
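The kubelet unit fragment above is written as a systemd drop-in (the 320-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down); the empty ExecStart= line clears any ExecStart inherited from the base unit before setting minikube's own command line. To see what systemd actually merged on the node, standard systemctl usage suffices:

    $ sudo systemctl cat kubelet     # base unit plus all drop-ins
    $ sudo systemctl daemon-reload   # pick up edits to unit files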
	I0916 10:58:27.400895  174032 ssh_runner.go:195] Run: sudo crictl info
	I0916 10:58:27.434352  174032 command_runner.go:130] > {
	I0916 10:58:27.434376  174032 command_runner.go:130] >   "status": {
	I0916 10:58:27.434384  174032 command_runner.go:130] >     "conditions": [
	I0916 10:58:27.434390  174032 command_runner.go:130] >       {
	I0916 10:58:27.434398  174032 command_runner.go:130] >         "type": "RuntimeReady",
	I0916 10:58:27.434405  174032 command_runner.go:130] >         "status": true,
	I0916 10:58:27.434412  174032 command_runner.go:130] >         "reason": "",
	I0916 10:58:27.434419  174032 command_runner.go:130] >         "message": ""
	I0916 10:58:27.434424  174032 command_runner.go:130] >       },
	I0916 10:58:27.434427  174032 command_runner.go:130] >       {
	I0916 10:58:27.434432  174032 command_runner.go:130] >         "type": "NetworkReady",
	I0916 10:58:27.434436  174032 command_runner.go:130] >         "status": true,
	I0916 10:58:27.434440  174032 command_runner.go:130] >         "reason": "",
	I0916 10:58:27.434447  174032 command_runner.go:130] >         "message": ""
	I0916 10:58:27.434451  174032 command_runner.go:130] >       },
	I0916 10:58:27.434454  174032 command_runner.go:130] >       {
	I0916 10:58:27.434466  174032 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings",
	I0916 10:58:27.434476  174032 command_runner.go:130] >         "status": true,
	I0916 10:58:27.434483  174032 command_runner.go:130] >         "reason": "",
	I0916 10:58:27.434490  174032 command_runner.go:130] >         "message": ""
	I0916 10:58:27.434498  174032 command_runner.go:130] >       }
	I0916 10:58:27.434503  174032 command_runner.go:130] >     ]
	I0916 10:58:27.434517  174032 command_runner.go:130] >   },
	I0916 10:58:27.434527  174032 command_runner.go:130] >   "cniconfig": {
	I0916 10:58:27.434533  174032 command_runner.go:130] >     "PluginDirs": [
	I0916 10:58:27.434538  174032 command_runner.go:130] >       "/opt/cni/bin"
	I0916 10:58:27.434541  174032 command_runner.go:130] >     ],
	I0916 10:58:27.434556  174032 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I0916 10:58:27.434563  174032 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I0916 10:58:27.434567  174032 command_runner.go:130] >     "Prefix": "eth",
	I0916 10:58:27.434573  174032 command_runner.go:130] >     "Networks": [
	I0916 10:58:27.434577  174032 command_runner.go:130] >       {
	I0916 10:58:27.434583  174032 command_runner.go:130] >         "Config": {
	I0916 10:58:27.434595  174032 command_runner.go:130] >           "Name": "cni-loopback",
	I0916 10:58:27.434601  174032 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0916 10:58:27.434607  174032 command_runner.go:130] >           "Plugins": [
	I0916 10:58:27.434611  174032 command_runner.go:130] >             {
	I0916 10:58:27.434618  174032 command_runner.go:130] >               "Network": {
	I0916 10:58:27.434623  174032 command_runner.go:130] >                 "type": "loopback",
	I0916 10:58:27.434629  174032 command_runner.go:130] >                 "ipam": {},
	I0916 10:58:27.434633  174032 command_runner.go:130] >                 "dns": {}
	I0916 10:58:27.434637  174032 command_runner.go:130] >               },
	I0916 10:58:27.434644  174032 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I0916 10:58:27.434647  174032 command_runner.go:130] >             }
	I0916 10:58:27.434653  174032 command_runner.go:130] >           ],
	I0916 10:58:27.434666  174032 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0916 10:58:27.434672  174032 command_runner.go:130] >         },
	I0916 10:58:27.434676  174032 command_runner.go:130] >         "IFName": "lo"
	I0916 10:58:27.434682  174032 command_runner.go:130] >       },
	I0916 10:58:27.434685  174032 command_runner.go:130] >       {
	I0916 10:58:27.434691  174032 command_runner.go:130] >         "Config": {
	I0916 10:58:27.434695  174032 command_runner.go:130] >           "Name": "kindnet",
	I0916 10:58:27.434701  174032 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0916 10:58:27.434705  174032 command_runner.go:130] >           "Plugins": [
	I0916 10:58:27.434712  174032 command_runner.go:130] >             {
	I0916 10:58:27.434717  174032 command_runner.go:130] >               "Network": {
	I0916 10:58:27.434722  174032 command_runner.go:130] >                 "type": "ptp",
	I0916 10:58:27.434728  174032 command_runner.go:130] >                 "ipam": {
	I0916 10:58:27.434733  174032 command_runner.go:130] >                   "type": "host-local"
	I0916 10:58:27.434739  174032 command_runner.go:130] >                 },
	I0916 10:58:27.434743  174032 command_runner.go:130] >                 "dns": {}
	I0916 10:58:27.434749  174032 command_runner.go:130] >               },
	I0916 10:58:27.434762  174032 command_runner.go:130] >               "Source": "{\"ipMasq\":false,\"ipam\":{\"dataDir\":\"/run/cni-ipam-state\",\"ranges\":[[{\"subnet\":\"10.244.0.0/24\"}]],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"type\":\"host-local\"},\"mtu\":1500,\"type\":\"ptp\"}"
	I0916 10:58:27.434768  174032 command_runner.go:130] >             },
	I0916 10:58:27.434778  174032 command_runner.go:130] >             {
	I0916 10:58:27.434782  174032 command_runner.go:130] >               "Network": {
	I0916 10:58:27.434789  174032 command_runner.go:130] >                 "type": "portmap",
	I0916 10:58:27.434794  174032 command_runner.go:130] >                 "capabilities": {
	I0916 10:58:27.434800  174032 command_runner.go:130] >                   "portMappings": true
	I0916 10:58:27.434804  174032 command_runner.go:130] >                 },
	I0916 10:58:27.434808  174032 command_runner.go:130] >                 "ipam": {},
	I0916 10:58:27.434813  174032 command_runner.go:130] >                 "dns": {}
	I0916 10:58:27.434818  174032 command_runner.go:130] >               },
	I0916 10:58:27.434826  174032 command_runner.go:130] >               "Source": "{\"capabilities\":{\"portMappings\":true},\"type\":\"portmap\"}"
	I0916 10:58:27.434832  174032 command_runner.go:130] >             }
	I0916 10:58:27.434836  174032 command_runner.go:130] >           ],
	I0916 10:58:27.434866  174032 command_runner.go:130] >           "Source": "\n{\n\t\"cniVersion\": \"0.3.1\",\n\t\"name\": \"kindnet\",\n\t\"plugins\": [\n\t{\n\t\t\"type\": \"ptp\",\n\t\t\"ipMasq\": false,\n\t\t\"ipam\": {\n\t\t\t\"type\": \"host-local\",\n\t\t\t\"dataDir\": \"/run/cni-ipam-state\",\n\t\t\t\"routes\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t{ \"dst\": \"0.0.0.0/0\" }\n\t\t\t],\n\t\t\t\"ranges\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t[ { \"subnet\": \"10.244.0.0/24\" } ]\n\t\t\t]\n\t\t}\n\t\t,\n\t\t\"mtu\": 1500\n\t\t\n\t},\n\t{\n\t\t\"type\": \"portmap\",\n\t\t\"capabilities\": {\n\t\t\t\"portMappings\": true\n\t\t}\n\t}\n\t]\n}\n"
	I0916 10:58:27.434873  174032 command_runner.go:130] >         },
	I0916 10:58:27.434878  174032 command_runner.go:130] >         "IFName": "eth0"
	I0916 10:58:27.434882  174032 command_runner.go:130] >       }
	I0916 10:58:27.434885  174032 command_runner.go:130] >     ]
	I0916 10:58:27.434888  174032 command_runner.go:130] >   },
	I0916 10:58:27.434891  174032 command_runner.go:130] >   "config": {
	I0916 10:58:27.434895  174032 command_runner.go:130] >     "containerd": {
	I0916 10:58:27.434899  174032 command_runner.go:130] >       "snapshotter": "overlayfs",
	I0916 10:58:27.434923  174032 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I0916 10:58:27.434933  174032 command_runner.go:130] >       "defaultRuntime": {
	I0916 10:58:27.434939  174032 command_runner.go:130] >         "runtimeType": "",
	I0916 10:58:27.434945  174032 command_runner.go:130] >         "runtimePath": "",
	I0916 10:58:27.434951  174032 command_runner.go:130] >         "runtimeEngine": "",
	I0916 10:58:27.434955  174032 command_runner.go:130] >         "PodAnnotations": null,
	I0916 10:58:27.434962  174032 command_runner.go:130] >         "ContainerAnnotations": null,
	I0916 10:58:27.434966  174032 command_runner.go:130] >         "runtimeRoot": "",
	I0916 10:58:27.434971  174032 command_runner.go:130] >         "options": null,
	I0916 10:58:27.434976  174032 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0916 10:58:27.434984  174032 command_runner.go:130] >         "privileged_without_host_devices_all_devices_allowed": false,
	I0916 10:58:27.434989  174032 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0916 10:58:27.434993  174032 command_runner.go:130] >         "cniConfDir": "",
	I0916 10:58:27.434999  174032 command_runner.go:130] >         "cniMaxConfNum": 0,
	I0916 10:58:27.435004  174032 command_runner.go:130] >         "snapshotter": "",
	I0916 10:58:27.435010  174032 command_runner.go:130] >         "sandboxMode": ""
	I0916 10:58:27.435014  174032 command_runner.go:130] >       },
	I0916 10:58:27.435021  174032 command_runner.go:130] >       "untrustedWorkloadRuntime": {
	I0916 10:58:27.435025  174032 command_runner.go:130] >         "runtimeType": "",
	I0916 10:58:27.435032  174032 command_runner.go:130] >         "runtimePath": "",
	I0916 10:58:27.435036  174032 command_runner.go:130] >         "runtimeEngine": "",
	I0916 10:58:27.435042  174032 command_runner.go:130] >         "PodAnnotations": null,
	I0916 10:58:27.435046  174032 command_runner.go:130] >         "ContainerAnnotations": null,
	I0916 10:58:27.435053  174032 command_runner.go:130] >         "runtimeRoot": "",
	I0916 10:58:27.435057  174032 command_runner.go:130] >         "options": null,
	I0916 10:58:27.435064  174032 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0916 10:58:27.435070  174032 command_runner.go:130] >         "privileged_without_host_devices_all_devices_allowed": false,
	I0916 10:58:27.435076  174032 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0916 10:58:27.435080  174032 command_runner.go:130] >         "cniConfDir": "",
	I0916 10:58:27.435086  174032 command_runner.go:130] >         "cniMaxConfNum": 0,
	I0916 10:58:27.435090  174032 command_runner.go:130] >         "snapshotter": "",
	I0916 10:58:27.435097  174032 command_runner.go:130] >         "sandboxMode": ""
	I0916 10:58:27.435100  174032 command_runner.go:130] >       },
	I0916 10:58:27.435106  174032 command_runner.go:130] >       "runtimes": {
	I0916 10:58:27.435110  174032 command_runner.go:130] >         "runc": {
	I0916 10:58:27.435117  174032 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0916 10:58:27.435122  174032 command_runner.go:130] >           "runtimePath": "",
	I0916 10:58:27.435128  174032 command_runner.go:130] >           "runtimeEngine": "",
	I0916 10:58:27.435132  174032 command_runner.go:130] >           "PodAnnotations": null,
	I0916 10:58:27.435138  174032 command_runner.go:130] >           "ContainerAnnotations": null,
	I0916 10:58:27.435143  174032 command_runner.go:130] >           "runtimeRoot": "",
	I0916 10:58:27.435148  174032 command_runner.go:130] >           "options": {
	I0916 10:58:27.435153  174032 command_runner.go:130] >             "SystemdCgroup": false
	I0916 10:58:27.435158  174032 command_runner.go:130] >           },
	I0916 10:58:27.435170  174032 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0916 10:58:27.435177  174032 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I0916 10:58:27.435183  174032 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0916 10:58:27.435187  174032 command_runner.go:130] >           "cniConfDir": "",
	I0916 10:58:27.435194  174032 command_runner.go:130] >           "cniMaxConfNum": 0,
	I0916 10:58:27.435198  174032 command_runner.go:130] >           "snapshotter": "",
	I0916 10:58:27.435204  174032 command_runner.go:130] >           "sandboxMode": "podsandbox"
	I0916 10:58:27.435208  174032 command_runner.go:130] >         }
	I0916 10:58:27.435212  174032 command_runner.go:130] >       },
	I0916 10:58:27.435216  174032 command_runner.go:130] >       "noPivot": false,
	I0916 10:58:27.435223  174032 command_runner.go:130] >       "disableSnapshotAnnotations": true,
	I0916 10:58:27.435228  174032 command_runner.go:130] >       "discardUnpackedLayers": true,
	I0916 10:58:27.435234  174032 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I0916 10:58:27.435239  174032 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false
	I0916 10:58:27.435244  174032 command_runner.go:130] >     },
	I0916 10:58:27.435248  174032 command_runner.go:130] >     "cni": {
	I0916 10:58:27.435253  174032 command_runner.go:130] >       "binDir": "/opt/cni/bin",
	I0916 10:58:27.435259  174032 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I0916 10:58:27.435263  174032 command_runner.go:130] >       "maxConfNum": 1,
	I0916 10:58:27.435269  174032 command_runner.go:130] >       "setupSerially": false,
	I0916 10:58:27.435274  174032 command_runner.go:130] >       "confTemplate": "",
	I0916 10:58:27.435280  174032 command_runner.go:130] >       "ipPref": ""
	I0916 10:58:27.435283  174032 command_runner.go:130] >     },
	I0916 10:58:27.435289  174032 command_runner.go:130] >     "registry": {
	I0916 10:58:27.435294  174032 command_runner.go:130] >       "configPath": "/etc/containerd/certs.d",
	I0916 10:58:27.435301  174032 command_runner.go:130] >       "mirrors": null,
	I0916 10:58:27.435305  174032 command_runner.go:130] >       "configs": null,
	I0916 10:58:27.435311  174032 command_runner.go:130] >       "auths": null,
	I0916 10:58:27.435316  174032 command_runner.go:130] >       "headers": null
	I0916 10:58:27.435319  174032 command_runner.go:130] >     },
	I0916 10:58:27.435325  174032 command_runner.go:130] >     "imageDecryption": {
	I0916 10:58:27.435330  174032 command_runner.go:130] >       "keyModel": "node"
	I0916 10:58:27.435336  174032 command_runner.go:130] >     },
	I0916 10:58:27.435341  174032 command_runner.go:130] >     "disableTCPService": true,
	I0916 10:58:27.435347  174032 command_runner.go:130] >     "streamServerAddress": "",
	I0916 10:58:27.435352  174032 command_runner.go:130] >     "streamServerPort": "10010",
	I0916 10:58:27.435358  174032 command_runner.go:130] >     "streamIdleTimeout": "4h0m0s",
	I0916 10:58:27.435363  174032 command_runner.go:130] >     "enableSelinux": false,
	I0916 10:58:27.435369  174032 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I0916 10:58:27.435374  174032 command_runner.go:130] >     "sandboxImage": "registry.k8s.io/pause:3.10",
	I0916 10:58:27.435381  174032 command_runner.go:130] >     "statsCollectPeriod": 10,
	I0916 10:58:27.435385  174032 command_runner.go:130] >     "systemdCgroup": false,
	I0916 10:58:27.435392  174032 command_runner.go:130] >     "enableTLSStreaming": false,
	I0916 10:58:27.435396  174032 command_runner.go:130] >     "x509KeyPairStreaming": {
	I0916 10:58:27.435401  174032 command_runner.go:130] >       "tlsCertFile": "",
	I0916 10:58:27.435405  174032 command_runner.go:130] >       "tlsKeyFile": ""
	I0916 10:58:27.435408  174032 command_runner.go:130] >     },
	I0916 10:58:27.435415  174032 command_runner.go:130] >     "maxContainerLogSize": 16384,
	I0916 10:58:27.435420  174032 command_runner.go:130] >     "disableCgroup": false,
	I0916 10:58:27.435426  174032 command_runner.go:130] >     "disableApparmor": false,
	I0916 10:58:27.435431  174032 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I0916 10:58:27.435437  174032 command_runner.go:130] >     "maxConcurrentDownloads": 3,
	I0916 10:58:27.435442  174032 command_runner.go:130] >     "disableProcMount": false,
	I0916 10:58:27.435448  174032 command_runner.go:130] >     "unsetSeccompProfile": "",
	I0916 10:58:27.435453  174032 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I0916 10:58:27.435459  174032 command_runner.go:130] >     "disableHugetlbController": true,
	I0916 10:58:27.435465  174032 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I0916 10:58:27.435471  174032 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I0916 10:58:27.435477  174032 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I0916 10:58:27.435484  174032 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I0916 10:58:27.435488  174032 command_runner.go:130] >     "enableUnprivilegedICMP": false,
	I0916 10:58:27.435494  174032 command_runner.go:130] >     "enableCDI": false,
	I0916 10:58:27.435498  174032 command_runner.go:130] >     "cdiSpecDirs": [
	I0916 10:58:27.435504  174032 command_runner.go:130] >       "/etc/cdi",
	I0916 10:58:27.435508  174032 command_runner.go:130] >       "/var/run/cdi"
	I0916 10:58:27.435514  174032 command_runner.go:130] >     ],
	I0916 10:58:27.435518  174032 command_runner.go:130] >     "imagePullProgressTimeout": "5m0s",
	I0916 10:58:27.435525  174032 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I0916 10:58:27.435529  174032 command_runner.go:130] >     "imagePullWithSyncFs": false,
	I0916 10:58:27.435535  174032 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I0916 10:58:27.435540  174032 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I0916 10:58:27.435548  174032 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0916 10:58:27.435553  174032 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0916 10:58:27.435561  174032 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0916 10:58:27.435564  174032 command_runner.go:130] >   },
	I0916 10:58:27.435568  174032 command_runner.go:130] >   "golang": "go1.22.7",
	I0916 10:58:27.435573  174032 command_runner.go:130] >   "lastCNILoadStatus": "OK",
	I0916 10:58:27.435580  174032 command_runner.go:130] >   "lastCNILoadStatus.default": "OK"
	I0916 10:58:27.435583  174032 command_runner.go:130] > }
	I0916 10:58:27.436019  174032 cni.go:84] Creating CNI manager for ""
	I0916 10:58:27.436036  174032 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0916 10:58:27.436044  174032 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
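For readability, the kindnet CNI config embedded (escaped) in the crictl info "Source" field above expands to the following conflist: a ptp plugin with host-local IPAM handing out addresses from this node's 10.244.0.0/24 slice of the pod CIDR, chained with portmap for hostPort support:

    {
      "cniVersion": "0.3.1",
      "name": "kindnet",
      "plugins": [
        {
          "type": "ptp",
          "ipMasq": false,
          "ipam": {
            "type": "host-local",
            "dataDir": "/run/cni-ipam-state",
            "routes": [ { "dst": "0.0.0.0/0" } ],
            "ranges": [ [ { "subnet": "10.244.0.0/24" } ] ]
          },
          "mtu": 1500
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }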
	I0916 10:58:27.436062  174032 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-079070 NodeName:multinode-079070 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 10:58:27.436179  174032 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "multinode-079070"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 10:58:27.436234  174032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:58:27.443706  174032 command_runner.go:130] > kubeadm
	I0916 10:58:27.443728  174032 command_runner.go:130] > kubectl
	I0916 10:58:27.443750  174032 command_runner.go:130] > kubelet
	I0916 10:58:27.444399  174032 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:58:27.444450  174032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 10:58:27.452263  174032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0916 10:58:27.468429  174032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:58:27.486287  174032 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2170 bytes)
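That 2170-byte payload is the four-document kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Note the cgroup settings agree end to end: the KubeletConfiguration sets cgroupDriver: cgroupfs, and the containerd config reported by crictl info earlier has "systemdCgroup": false. A quick sanity check on the node (recent kubeadm releases ship a validate subcommand):

    $ sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new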
	I0916 10:58:27.503688  174032 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:58:27.507100  174032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
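The one-liner above makes the /etc/hosts update idempotent: filter out any stale control-plane.minikube.internal entry, append the current mapping, write the result to a temp file, then copy it over the original (cp rather than mv keeps the same inode, which matters because /etc/hosts is bind-mounted inside the docker-driver node). Expanded with comments, the same command reads:

    {
      grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts  # drop any old entry
      echo "192.168.67.2	control-plane.minikube.internal"       # append the fresh one
    } > /tmp/h.$$                                               # $$ = PID-unique temp file
    sudo cp /tmp/h.$$ /etc/hosts                                # overwrite in place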
	I0916 10:58:27.518428  174032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:58:27.592716  174032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:58:27.605913  174032 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070 for IP: 192.168.67.2
	I0916 10:58:27.605937  174032 certs.go:194] generating shared ca certs ...
	I0916 10:58:27.605956  174032 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:27.606107  174032 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:58:27.606170  174032 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:58:27.606180  174032 certs.go:256] generating profile certs ...
	I0916 10:58:27.606278  174032 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key
	I0916 10:58:27.606339  174032 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key.5aac267e
	I0916 10:58:27.606394  174032 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key
	I0916 10:58:27.606409  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:58:27.606429  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:58:27.606446  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:58:27.606463  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:58:27.606495  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 10:58:27.606521  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 10:58:27.606539  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 10:58:27.606555  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 10:58:27.606635  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:58:27.606685  174032 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:58:27.606700  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:58:27.606733  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:58:27.606764  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:58:27.606794  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:58:27.606845  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:58:27.606883  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:58:27.606902  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:27.606917  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:58:27.607695  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:58:27.631392  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:58:27.654670  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:58:27.727224  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:58:27.754150  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 10:58:27.776296  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 10:58:27.798404  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 10:58:27.820727  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 10:58:27.845087  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:58:27.867465  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:58:27.889830  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:58:27.911610  174032 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 10:58:27.927503  174032 ssh_runner.go:195] Run: openssl version
	I0916 10:58:27.932822  174032 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:58:27.933024  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:58:27.942053  174032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:58:27.945378  174032 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:58:27.945431  174032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:58:27.945481  174032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:58:27.951816  174032 command_runner.go:130] > 3ec20f2e
	I0916 10:58:27.951897  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:58:27.960489  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:58:27.969862  174032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:27.973843  174032 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:27.973900  174032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:27.973948  174032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:27.980465  174032 command_runner.go:130] > b5213941
	I0916 10:58:27.980556  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:58:27.988818  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:58:27.997494  174032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:58:28.000828  174032 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:58:28.000875  174032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:58:28.000919  174032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:58:28.006937  174032 command_runner.go:130] > 51391683
	I0916 10:58:28.007108  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
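The hash/symlink dance in the last several commands is OpenSSL's standard CA lookup layout: openssl x509 -hash prints the certificate's subject-name hash, and the library resolves trust anchors through /etc/ssl/certs/<hash>.0 symlinks. For the minikube CA above, for example:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ readlink /etc/ssl/certs/b5213941.0    # expected: /etc/ssl/certs/minikubeCA.pem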
	I0916 10:58:28.015273  174032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:58:28.018433  174032 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:58:28.018457  174032 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 10:58:28.018463  174032 command_runner.go:130] > Device: 801h/2049d	Inode: 809447      Links: 1
	I0916 10:58:28.018470  174032 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:58:28.018475  174032 command_runner.go:130] > Access: 2024-09-16 10:56:17.230007830 +0000
	I0916 10:58:28.018480  174032 command_runner.go:130] > Modify: 2024-09-16 10:56:17.230007830 +0000
	I0916 10:58:28.018485  174032 command_runner.go:130] > Change: 2024-09-16 10:56:17.230007830 +0000
	I0916 10:58:28.018490  174032 command_runner.go:130] >  Birth: 2024-09-16 10:56:17.230007830 +0000
	I0916 10:58:28.018544  174032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 10:58:28.024645  174032 command_runner.go:130] > Certificate will not expire
	I0916 10:58:28.024835  174032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 10:58:28.031477  174032 command_runner.go:130] > Certificate will not expire
	I0916 10:58:28.031630  174032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 10:58:28.037995  174032 command_runner.go:130] > Certificate will not expire
	I0916 10:58:28.038131  174032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 10:58:28.044720  174032 command_runner.go:130] > Certificate will not expire
	I0916 10:58:28.045062  174032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 10:58:28.051235  174032 command_runner.go:130] > Certificate will not expire
	I0916 10:58:28.051357  174032 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 10:58:28.057328  174032 command_runner.go:130] > Certificate will not expire
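
Each `-checkend 86400` invocation above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" means it does not. The same question can be answered without shelling out by parsing the PEM and comparing NotAfter; a hedged sketch, not minikube's implementation:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemPath expires
// within d, the same check as "openssl x509 -checkend". Sketch only;
// real code would also walk the rest of the chain.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```
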
	I0916 10:58:28.057540  174032 kubeadm.go:392] StartCluster: {Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true} {Name:m03 IP:192.168.67.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:58:28.057670  174032 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 10:58:28.057734  174032 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 10:58:28.088975  174032 command_runner.go:130] > 8954864d99d2239b93c763ac312c7353b37bfba4eb693480619025ea3402616f
	I0916 10:58:28.088996  174032 command_runner.go:130] > 269042fd7e0657021f86f96623c9937f1e0659eae415545c3508c149871ca048
	I0916 10:58:28.089002  174032 command_runner.go:130] > de61885ae02518041c7aa7ce71f66fe6f83e66c09666b89a7765dd6c5955ef2e
	I0916 10:58:28.089023  174032 command_runner.go:130] > 809210a041e030e61062aa021eb36041df90e322c3257f94c546c420614699bc
	I0916 10:58:28.089032  174032 command_runner.go:130] > 941f1dc8e383770d56fc04131cd6e118a0b22f2035d16d7cd123273e0f80863c
	I0916 10:58:28.089040  174032 command_runner.go:130] > 0bc7fe20ff6ae92cd3f996cddadca6ddb2788e2f661cd3c4b2f9fb33045bed71
	I0916 10:58:28.089053  174032 command_runner.go:130] > 5d29b7e4482f874fecde10cfcd42e99ca36d060f25d2e8e7a8110ea495ea8583
	I0916 10:58:28.089067  174032 command_runner.go:130] > 411c657184dfd15c5a637bda842998291203948392b41c07d2e8b35719214e87
	I0916 10:58:28.091427  174032 cri.go:89] found id: "8954864d99d2239b93c763ac312c7353b37bfba4eb693480619025ea3402616f"
	I0916 10:58:28.091444  174032 cri.go:89] found id: "269042fd7e0657021f86f96623c9937f1e0659eae415545c3508c149871ca048"
	I0916 10:58:28.091448  174032 cri.go:89] found id: "de61885ae02518041c7aa7ce71f66fe6f83e66c09666b89a7765dd6c5955ef2e"
	I0916 10:58:28.091452  174032 cri.go:89] found id: "809210a041e030e61062aa021eb36041df90e322c3257f94c546c420614699bc"
	I0916 10:58:28.091454  174032 cri.go:89] found id: "941f1dc8e383770d56fc04131cd6e118a0b22f2035d16d7cd123273e0f80863c"
	I0916 10:58:28.091458  174032 cri.go:89] found id: "0bc7fe20ff6ae92cd3f996cddadca6ddb2788e2f661cd3c4b2f9fb33045bed71"
	I0916 10:58:28.091460  174032 cri.go:89] found id: "5d29b7e4482f874fecde10cfcd42e99ca36d060f25d2e8e7a8110ea495ea8583"
	I0916 10:58:28.091464  174032 cri.go:89] found id: "411c657184dfd15c5a637bda842998291203948392b41c07d2e8b35719214e87"
	I0916 10:58:28.091468  174032 cri.go:89] found id: ""
	I0916 10:58:28.091515  174032 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0916 10:58:28.102433  174032 command_runner.go:130] > null
	I0916 10:58:28.103583  174032 cri.go:116] JSON = null
	W0916 10:58:28.103629  174032 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
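
The warning above is worth a note: `crictl ps` found 8 kube-system containers, but `runc ... list -f json` printed the literal `null`, and Go's encoding/json decodes `null` into a nil slice, so the unpause path sees zero paused containers and logs the mismatch instead of failing. A small self-contained demonstration of that decoding behavior:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type container struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	var paused []container
	// runc prints "null" when it is tracking no containers under the
	// given root; json.Unmarshal leaves the slice nil in that case.
	if err := json.Unmarshal([]byte("null"), &paused); err != nil {
		panic(err)
	}
	fmt.Println(len(paused)) // 0, even though crictl reported 8 containers
}
```
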
	I0916 10:58:28.103684  174032 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 10:58:28.111184  174032 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0916 10:58:28.111205  174032 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0916 10:58:28.111212  174032 command_runner.go:130] > /var/lib/minikube/etcd:
	I0916 10:58:28.111215  174032 command_runner.go:130] > member
	I0916 10:58:28.111921  174032 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 10:58:28.111941  174032 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 10:58:28.111990  174032 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 10:58:28.119863  174032 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 10:58:28.120286  174032 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-079070" does not appear in /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:58:28.120395  174032 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3687/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-079070" cluster setting kubeconfig missing "multinode-079070" context setting]
	I0916 10:58:28.120656  174032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:28.121034  174032 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:58:28.121256  174032 kapi.go:59] client config for multinode-079070: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:58:28.121639  174032 cert_rotation.go:140] Starting client certificate rotation controller
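
The kubeconfig repair above ("multinode-079070" missing from both the clusters and contexts sections) follows the usual client-go pattern: load the file, check the named entries, and rewrite the file if either is absent. A sketch using client-go's clientcmd package (the profile name and path come from the log; the helper itself is illustrative, not minikube's kubeconfig.go):

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// ensureProfile checks whether the kubeconfig at path already knows the
// given cluster/context, adding minimal entries when it does not.
func ensureProfile(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	changed := false
	if _, ok := cfg.Clusters[name]; !ok {
		cfg.Clusters[name] = &clientcmdapi.Cluster{Server: server}
		changed = true
	}
	if _, ok := cfg.Contexts[name]; !ok {
		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
		changed = true
	}
	if !changed {
		return nil // nothing to repair
	}
	fmt.Printf("%s needs updating (will repair)\n", path)
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	err := ensureProfile("/home/jenkins/minikube-integration/19651-3687/kubeconfig",
		"multinode-079070", "https://192.168.67.2:8443")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
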
	I0916 10:58:28.121840  174032 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 10:58:28.130002  174032 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.67.2
	I0916 10:58:28.130042  174032 kubeadm.go:597] duration metric: took 18.095489ms to restartPrimaryControlPlane
	I0916 10:58:28.130055  174032 kubeadm.go:394] duration metric: took 72.523609ms to StartCluster
	I0916 10:58:28.130077  174032 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:28.130161  174032 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:58:28.130837  174032 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:28.131102  174032 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 10:58:28.131163  174032 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 10:58:28.131387  174032 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:58:28.134909  174032 out.go:177] * Enabled addons: 
	I0916 10:58:28.134909  174032 out.go:177] * Verifying Kubernetes components...
	I0916 10:58:28.136583  174032 addons.go:510] duration metric: took 5.420333ms for enable addons: enabled=[]
	I0916 10:58:28.136624  174032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:58:28.333923  174032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:58:28.349407  174032 node_ready.go:35] waiting up to 6m0s for node "multinode-079070" to be "Ready" ...
	I0916 10:58:28.349561  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:28.349575  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:28.349587  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:28.349597  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:28.349954  174032 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0916 10:58:28.349982  174032 round_trippers.go:577] Response Headers:
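
The round_trippers lines are produced by a debugging http.RoundTripper that client-go wraps around the real transport at high log verbosity, printing method, URL, request headers, status, and latency; the empty "Response Status:" on the first GET means the dial failed because the apiserver was not yet accepting connections. The general shape of such a wrapper, as a minimal net/http sketch (not client-go's actual round_trippers.go):

```go
package main

import (
	"log"
	"net/http"
	"time"
)

// loggingTransport logs each request/response pair, in the spirit of
// client-go's debug round tripper. Sketch only.
type loggingTransport struct {
	next http.RoundTripper
}

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	for k, vs := range req.Header {
		for _, v := range vs {
			log.Printf("    %s: %s", k, v)
		}
	}
	start := time.Now()
	resp, err := t.next.RoundTrip(req)
	if err != nil {
		// No status to print when the dial itself failed, which matches
		// the empty "Response Status:  in 0 milliseconds" line above.
		log.Printf("Response Status:  in %d milliseconds", time.Since(start).Milliseconds())
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	if _, err := client.Get("https://192.168.67.2:8443/api/v1/nodes/multinode-079070"); err != nil {
		log.Print(err)
	}
}
```
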
	I0916 10:58:28.849615  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:28.849640  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:28.849650  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:28.849656  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:31.639383  174032 round_trippers.go:574] Response Status: 200 OK in 2789 milliseconds
	I0916 10:58:31.639414  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:31.639424  174032 round_trippers.go:580]     Audit-Id: 01e4468e-98dd-4701-a261-a9d4b4accf24
	I0916 10:58:31.639429  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:31.639442  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:31.639446  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:58:31.639450  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:58:31.639455  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:31 GMT
	I0916 10:58:31.639555  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"535","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5203 chars]
	I0916 10:58:31.640458  174032 node_ready.go:49] node "multinode-079070" has status "Ready":"True"
	I0916 10:58:31.640492  174032 node_ready.go:38] duration metric: took 3.291045617s for node "multinode-079070" to be "Ready" ...
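
The 3.29s wait above is the standard poll-until-Ready loop: fetch the Node on an interval and succeed once its NodeReady condition reports True. A hedged client-go sketch of that loop (paths and names taken from the log; the helper is illustrative, not minikube's node_ready.go):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the apiserver until the node reports Ready,
// mirroring the "waiting up to 6m0s" loop in the log. Sketch only.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient: the apiserver may still be coming up
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3687/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	if err := waitNodeReady(cs, "multinode-079070", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println(`node "multinode-079070" has status "Ready":"True"`)
}
```
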
	I0916 10:58:31.640505  174032 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:58:31.640581  174032 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 10:58:31.640605  174032 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 10:58:31.640674  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:58:31.640691  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:31.640702  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:31.640713  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:31.726600  174032 round_trippers.go:574] Response Status: 200 OK in 85 milliseconds
	I0916 10:58:31.726642  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:31.726652  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:58:31.726660  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:31 GMT
	I0916 10:58:31.726666  174032 round_trippers.go:580]     Audit-Id: d89077b6-24d0-4ace-b288-bc5492fbb17e
	I0916 10:58:31.726670  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:31.726676  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:31.726681  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:58:31.727352  174032 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"644"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"411","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 88174 chars]
	I0916 10:58:31.733467  174032 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:31.733592  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:58:31.733607  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:31.733618  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:31.733632  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:31.736249  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:31.736274  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:31.736284  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:58:31.736290  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:58:31.736295  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:31 GMT
	I0916 10:58:31.736300  174032 round_trippers.go:580]     Audit-Id: d3cd4bdb-0425-4575-8f96-1c5d939a61a0
	I0916 10:58:31.736305  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:31.736309  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:31.736455  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"411","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6480 chars]
	I0916 10:58:31.737012  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:31.737037  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:31.737046  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:31.737051  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:31.742207  174032 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 10:58:31.742230  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:31.742239  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:31.742246  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:58:31.742250  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:58:31.742254  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:31 GMT
	I0916 10:58:31.742258  174032 round_trippers.go:580]     Audit-Id: d5f64146-189a-431a-be98-a04fa8b9e306
	I0916 10:58:31.742262  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:31.742474  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"535","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5203 chars]
	I0916 10:58:31.742966  174032 pod_ready.go:93] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:31.743024  174032 pod_ready.go:82] duration metric: took 9.515749ms for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
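
Each pod_ready.go check above boils down to reading the pod's PodReady condition from its status; the per-pod GET plus the node re-fetch is that check as it appears in the log. A small helper showing just the readiness predicate (an assumption: this is the condition test only, not minikube's full wait logic):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod carries a PodReady condition with
// status True, the predicate behind the `has status "Ready":"True"`
// lines in the log.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionTrue},
	}}}
	fmt.Println(isPodReady(pod)) // true
}
```
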
	I0916 10:58:31.743053  174032 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:31.743204  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-079070
	I0916 10:58:31.743217  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:31.743237  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:31.743244  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:31.824713  174032 round_trippers.go:574] Response Status: 200 OK in 81 milliseconds
	I0916 10:58:31.824797  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:31.824819  174032 round_trippers.go:580]     Audit-Id: 936accb2-c101-4e1f-ad7e-d05fe738436b
	I0916 10:58:31.824835  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:31.824849  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:31.824874  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 10:58:31.824895  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 10:58:31.824900  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:31 GMT
	I0916 10:58:31.825059  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-079070","namespace":"kube-system","uid":"fdcbee48-0d3b-47d7-acd2-298979345c7a","resourceVersion":"400","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.mirror":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.seen":"2024-09-16T10:56:25.844758522Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6440 chars]
	I0916 10:58:31.825650  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:31.825672  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:31.825689  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:31.825697  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:31.842112  174032 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0916 10:58:31.842210  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:31.842238  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:31.842257  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:31.842283  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:31 GMT
	I0916 10:58:31.842306  174032 round_trippers.go:580]     Audit-Id: 9126f992-6f05-4955-bfa3-ed54cd7f5b31
	I0916 10:58:31.842327  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:31.842343  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:31.842762  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"535","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5203 chars]
	I0916 10:58:31.843184  174032 pod_ready.go:93] pod "etcd-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:31.843222  174032 pod_ready.go:82] duration metric: took 100.155513ms for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:31.843255  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:31.843355  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:31.843369  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:31.843377  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:31.843386  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:31.845870  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:31.845926  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:31.845940  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:31.845947  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:31 GMT
	I0916 10:58:31.845953  174032 round_trippers.go:580]     Audit-Id: 85f07486-8eae-47ae-9adb-6541923d1a89
	I0916 10:58:31.845957  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:31.845969  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:31.845976  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:31.846194  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:31.846893  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:31.846911  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:31.846921  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:31.846926  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:31.848702  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:31.848721  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:31.848731  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:31.848736  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:31.848740  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:31 GMT
	I0916 10:58:31.848745  174032 round_trippers.go:580]     Audit-Id: a9d21ba0-ad53-4b92-a1c0-1b4e72ccb966
	I0916 10:58:31.848751  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:31.848754  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:31.848872  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"535","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5203 chars]
	I0916 10:58:32.343430  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:32.343461  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:32.343472  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:32.343478  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:32.345893  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:32.345920  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:32.345928  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:32.345932  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:32.345935  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:32.345938  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:32.345942  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:32 GMT
	I0916 10:58:32.345946  174032 round_trippers.go:580]     Audit-Id: 57455e3f-56e7-4d17-a297-b6e81700c984
	I0916 10:58:32.346155  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:32.346771  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:32.346791  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:32.346800  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:32.346810  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:32.348590  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:32.348608  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:32.348617  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:32.348622  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:32.348627  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:32 GMT
	I0916 10:58:32.348631  174032 round_trippers.go:580]     Audit-Id: fbe3000d-fea7-4340-a6a7-e0b176016264
	I0916 10:58:32.348635  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:32.348641  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:32.348760  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:32.843417  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:32.843440  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:32.843448  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:32.843452  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:32.845665  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:32.845733  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:32.845755  174032 round_trippers.go:580]     Audit-Id: 82e44797-8a53-47ca-a28b-5610127e5e4d
	I0916 10:58:32.845765  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:32.845775  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:32.845786  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:32.845792  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:32.845796  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:32 GMT
	I0916 10:58:32.845973  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:32.846587  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:32.846605  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:32.846616  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:32.846620  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:32.848271  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:32.848290  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:32.848298  174032 round_trippers.go:580]     Audit-Id: b19dc8bd-9c68-46be-bf02-1a7ed95f4b07
	I0916 10:58:32.848303  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:32.848306  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:32.848310  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:32.848314  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:32.848318  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:32 GMT
	I0916 10:58:32.848497  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:33.344212  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:33.344239  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:33.344247  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:33.344251  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:33.346490  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:33.346508  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:33.346515  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:33.346518  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:33.346521  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:33 GMT
	I0916 10:58:33.346524  174032 round_trippers.go:580]     Audit-Id: 24269870-069c-4c13-979e-7dfc1acb233d
	I0916 10:58:33.346526  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:33.346529  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:33.346725  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:33.347158  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:33.347169  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:33.347178  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:33.347182  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:33.348853  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:33.348869  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:33.348875  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:33.348878  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:33 GMT
	I0916 10:58:33.348881  174032 round_trippers.go:580]     Audit-Id: 365feebf-be2f-40a9-b431-9048c8258a8a
	I0916 10:58:33.348883  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:33.348886  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:33.348889  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:33.349015  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:33.843766  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:33.843861  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:33.843882  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:33.843897  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:33.846039  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:33.846115  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:33.846133  174032 round_trippers.go:580]     Audit-Id: 54c95957-0b13-4103-8783-de5143b6f82b
	I0916 10:58:33.846148  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:33.846162  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:33.846187  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:33.846208  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:33.846222  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:33 GMT
	I0916 10:58:33.846460  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:33.847174  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:33.847230  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:33.847252  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:33.847266  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:33.849757  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:33.849829  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:33.849851  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:33 GMT
	I0916 10:58:33.849868  174032 round_trippers.go:580]     Audit-Id: 841409c9-6a9c-414d-98c3-0a79b9d4f295
	I0916 10:58:33.849881  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:33.849917  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:33.849930  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:33.849944  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:33.850097  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:33.850475  174032 pod_ready.go:103] pod "kube-apiserver-multinode-079070" in "kube-system" namespace has status "Ready":"False"
	I0916 10:58:34.343965  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:34.343985  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:34.343993  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:34.343998  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:34.346125  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:34.346152  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:34.346161  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:34 GMT
	I0916 10:58:34.346182  174032 round_trippers.go:580]     Audit-Id: 22c730b3-fdb3-4696-a302-8dfeb6628333
	I0916 10:58:34.346187  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:34.346195  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:34.346199  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:34.346204  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:34.346403  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:34.346862  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:34.346875  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:34.346882  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:34.346888  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:34.348811  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:34.348830  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:34.348836  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:34.348840  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:34 GMT
	I0916 10:58:34.348843  174032 round_trippers.go:580]     Audit-Id: 34343d99-aa51-4a71-9122-3c091432b47f
	I0916 10:58:34.348846  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:34.348849  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:34.348857  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:34.348999  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:34.843651  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:34.843732  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:34.843789  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:34.843799  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:34.846988  174032 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:58:34.847010  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:34.847017  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:34 GMT
	I0916 10:58:34.847020  174032 round_trippers.go:580]     Audit-Id: cb01351c-4e66-455a-8d17-434bf380e64e
	I0916 10:58:34.847024  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:34.847026  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:34.847029  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:34.847032  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:34.847256  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:34.847914  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:34.847931  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:34.847942  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:34.847946  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:34.849900  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:34.849919  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:34.849929  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:34 GMT
	I0916 10:58:34.849935  174032 round_trippers.go:580]     Audit-Id: f10d04cb-d6e6-49d7-9327-c8b970ebde38
	I0916 10:58:34.849941  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:34.849945  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:34.849956  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:34.849961  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:34.850078  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:35.344342  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:35.344371  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:35.344381  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:35.344386  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:35.346819  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:35.346845  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:35.346856  174032 round_trippers.go:580]     Audit-Id: 40e2f8b4-23db-44b3-a754-b2eb59a085f3
	I0916 10:58:35.346862  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:35.346866  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:35.346871  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:35.346875  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:35.346879  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:35 GMT
	I0916 10:58:35.347107  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:35.347623  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:35.347638  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:35.347645  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:35.347649  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:35.349929  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:35.349956  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:35.349964  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:35.349970  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:35.349974  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:35 GMT
	I0916 10:58:35.349980  174032 round_trippers.go:580]     Audit-Id: 3f8a34e7-7b6d-4951-91f1-5a5cda3d7904
	I0916 10:58:35.349986  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:35.349989  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:35.350133  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:35.843673  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:35.843698  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:35.843706  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:35.843709  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:35.846406  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:35.846428  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:35.846437  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:35.846441  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:35.846445  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:35 GMT
	I0916 10:58:35.846450  174032 round_trippers.go:580]     Audit-Id: 797d75ee-e6bb-4c29-affd-5be14c88a6da
	I0916 10:58:35.846453  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:35.846457  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:35.846630  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:35.847141  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:35.847160  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:35.847170  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:35.847186  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:35.849057  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:35.849079  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:35.849085  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:35.849089  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:35.849093  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:35 GMT
	I0916 10:58:35.849107  174032 round_trippers.go:580]     Audit-Id: 1adb6f67-7e2b-44ed-bdd8-2f871b339cc7
	I0916 10:58:35.849110  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:35.849113  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:35.849281  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:36.343944  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:36.343973  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:36.343983  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:36.343990  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:36.346361  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:36.346390  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:36.346397  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:36.346401  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:36.346405  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:36.346408  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:36 GMT
	I0916 10:58:36.346412  174032 round_trippers.go:580]     Audit-Id: 8013a8b7-ed09-44b6-bb41-c2b5d13831df
	I0916 10:58:36.346430  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:36.346602  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:36.347081  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:36.347095  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:36.347102  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:36.347107  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:36.348993  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:36.349016  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:36.349026  174032 round_trippers.go:580]     Audit-Id: 1efa985a-cf76-4810-8269-5b538179662f
	I0916 10:58:36.349031  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:36.349036  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:36.349047  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:36.349052  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:36.349059  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:36 GMT
	I0916 10:58:36.349211  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:36.349540  174032 pod_ready.go:103] pod "kube-apiserver-multinode-079070" in "kube-system" namespace has status "Ready":"False"
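
[Editor's note] The pod_ready check above is minikube polling the API server (roughly every 500 ms in this log) for the kube-apiserver mirror pod and its node, reporting has status "Ready":"False" until the pod's Ready condition turns True. A minimal client-go sketch of that polling pattern, for orientation only: the namespace, pod name, and 500 ms cadence are taken from the log; the kubeconfig path and timeout are illustrative assumptions, and this is not minikube's actual pod_ready.go implementation.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Load kubeconfig the way kubectl does (path is an assumption for this sketch).
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Overall deadline is illustrative; minikube's tests use their own timeouts.
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()

    	// Poll every 500 ms, mirroring the cadence visible in the log above.
    	for {
    		pod, err := client.CoreV1().Pods("kube-system").
    			Get(ctx, "kube-apiserver-multinode-079070", metav1.GetOptions{})
    		if err == nil && isPodReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Println("timed out waiting for pod to become Ready")
    			return
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }

Each iteration of this loop corresponds to one GET pod / GET node pair in the log; the extra node GET in minikube's helper (visible above) fetches node state alongside the pod check. [End editor's note]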
	I0916 10:58:36.844113  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:36.844139  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:36.844147  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:36.844150  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:36.846765  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:36.846785  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:36.846794  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:36 GMT
	I0916 10:58:36.846799  174032 round_trippers.go:580]     Audit-Id: 28d4ef9d-aa18-4839-8b77-0f7c412ac6a1
	I0916 10:58:36.846802  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:36.846807  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:36.846811  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:36.846815  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:36.846984  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:36.847438  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:36.847451  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:36.847457  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:36.847460  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:36.849503  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:36.849526  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:36.849532  174032 round_trippers.go:580]     Audit-Id: a36fb60b-0dc7-43f9-8ee1-8bbaf046c08b
	I0916 10:58:36.849535  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:36.849539  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:36.849542  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:36.849551  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:36.849556  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:36 GMT
	I0916 10:58:36.849701  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:37.344404  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:37.344442  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:37.344450  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:37.344454  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:37.347005  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:37.347034  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:37.347043  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:37.347050  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:37 GMT
	I0916 10:58:37.347055  174032 round_trippers.go:580]     Audit-Id: 93f8c632-18d6-4de4-b9a4-7ddf2cedbc94
	I0916 10:58:37.347058  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:37.347062  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:37.347066  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:37.347242  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:37.347691  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:37.347703  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:37.347710  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:37.347714  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:37.349656  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:37.349682  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:37.349691  174032 round_trippers.go:580]     Audit-Id: d09caacc-8a19-41ce-95bc-b8efce1dc82b
	I0916 10:58:37.349696  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:37.349700  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:37.349704  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:37.349709  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:37.349717  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:37 GMT
	I0916 10:58:37.349856  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:37.843503  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:37.843543  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:37.843554  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:37.843561  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:37.846392  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:37.846418  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:37.846427  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:37.846452  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:37 GMT
	I0916 10:58:37.846461  174032 round_trippers.go:580]     Audit-Id: 4bba8738-1c54-4e69-913a-a01f02b127ec
	I0916 10:58:37.846466  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:37.846474  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:37.846478  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:37.846628  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:37.847089  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:37.847104  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:37.847114  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:37.847119  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:37.849045  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:37.849067  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:37.849076  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:37.849081  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:37.849085  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:37.849088  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:37.849093  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:37 GMT
	I0916 10:58:37.849106  174032 round_trippers.go:580]     Audit-Id: c55788c9-f7bf-43f5-b0d4-f56b796623e2
	I0916 10:58:37.849220  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:38.344115  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:38.344137  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:38.344145  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:38.344151  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:38.346115  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:38.346137  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:38.346146  174032 round_trippers.go:580]     Audit-Id: 8061b16f-b62e-48e2-9f5b-1458b0f5a0be
	I0916 10:58:38.346151  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:38.346156  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:38.346160  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:38.346164  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:38.346169  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:38 GMT
	I0916 10:58:38.346386  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:38.346910  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:38.346925  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:38.346933  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:38.346937  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:38.348726  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:38.348746  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:38.348755  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:38.348761  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:38.348770  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:38.348779  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:38 GMT
	I0916 10:58:38.348787  174032 round_trippers.go:580]     Audit-Id: 90c20662-fb72-4968-a12e-d7716995ef8c
	I0916 10:58:38.348792  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:38.348936  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:38.844309  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:38.844344  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:38.844374  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:38.844380  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:38.846801  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:38.846821  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:38.846828  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:38 GMT
	I0916 10:58:38.846832  174032 round_trippers.go:580]     Audit-Id: 579ca5a5-349e-46a7-b906-57c3bff346cd
	I0916 10:58:38.846834  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:38.846837  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:38.846841  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:38.846844  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:38.847046  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:38.847512  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:38.847524  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:38.847531  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:38.847534  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:38.849410  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:38.849431  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:38.849437  174032 round_trippers.go:580]     Audit-Id: 4dc6e2e7-2f5f-4908-bf68-6dee9ca11d72
	I0916 10:58:38.849440  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:38.849443  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:38.849447  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:38.849450  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:38.849453  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:38 GMT
	I0916 10:58:38.849609  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:38.849916  174032 pod_ready.go:103] pod "kube-apiserver-multinode-079070" in "kube-system" namespace has status "Ready":"False"
	I0916 10:58:39.344340  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:39.344369  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:39.344378  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:39.344382  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:39.346734  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:39.346752  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:39.346759  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:39.346764  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:39.346767  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:39 GMT
	I0916 10:58:39.346772  174032 round_trippers.go:580]     Audit-Id: 71ee1fe0-1634-4efb-9929-7b699cadb8ad
	I0916 10:58:39.346776  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:39.346779  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:39.347010  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:39.347469  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:39.347490  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:39.347498  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:39.347502  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:39.349347  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:39.349369  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:39.349378  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:39.349385  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:39.349390  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:39.349395  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:39 GMT
	I0916 10:58:39.349400  174032 round_trippers.go:580]     Audit-Id: d7c53c56-45f1-4da5-b221-11c02b649f4c
	I0916 10:58:39.349403  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:39.349532  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:39.844179  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:39.844210  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:39.844217  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:39.844222  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:39.846495  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:39.846522  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:39.846532  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:39.846538  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:39.846552  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:39.846556  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:39 GMT
	I0916 10:58:39.846561  174032 round_trippers.go:580]     Audit-Id: 732d8a8c-8cde-488b-81d0-f5d0ede411b2
	I0916 10:58:39.846565  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:39.846750  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:39.847241  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:39.847256  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:39.847263  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:39.847268  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:39.849260  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:39.849282  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:39.849300  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:39.849306  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:39.849309  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:39 GMT
	I0916 10:58:39.849313  174032 round_trippers.go:580]     Audit-Id: ff0a1ba1-16b1-4a64-9254-4113e5836942
	I0916 10:58:39.849317  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:39.849322  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:39.849460  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:40.344170  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:40.344196  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:40.344204  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:40.344209  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:40.346541  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:40.346569  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:40.346579  174032 round_trippers.go:580]     Audit-Id: 8316407a-0832-42bc-8f0b-4165da61a60a
	I0916 10:58:40.346584  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:40.346588  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:40.346600  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:40.346606  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:40.346610  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:40 GMT
	I0916 10:58:40.346841  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:40.347556  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:40.347578  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:40.347587  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:40.347594  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:40.349555  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:40.349572  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:40.349578  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:40.349582  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:40.349586  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:40.349590  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:40 GMT
	I0916 10:58:40.349596  174032 round_trippers.go:580]     Audit-Id: add2bfb2-e536-4818-8484-0d9c01bae581
	I0916 10:58:40.349599  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:40.349733  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:40.844468  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:40.844492  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:40.844498  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:40.844502  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:40.846802  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:40.846826  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:40.846835  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:40.846840  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:40 GMT
	I0916 10:58:40.846844  174032 round_trippers.go:580]     Audit-Id: 1b2f176d-f1df-4393-8a26-1b4393d0f538
	I0916 10:58:40.846848  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:40.846852  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:40.846855  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:40.847023  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:40.847523  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:40.847539  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:40.847548  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:40.847552  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:40.849254  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:40.849269  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:40.849276  174032 round_trippers.go:580]     Audit-Id: 4da318a7-0a8f-463c-b0ff-d7383e8f85ae
	I0916 10:58:40.849279  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:40.849283  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:40.849286  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:40.849288  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:40.849292  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:40 GMT
	I0916 10:58:40.849444  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:41.344146  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:41.344169  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:41.344177  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:41.344181  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:41.346342  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:41.346359  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:41.346366  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:41.346370  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:41.346373  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:41.346376  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:41 GMT
	I0916 10:58:41.346379  174032 round_trippers.go:580]     Audit-Id: 4c96328e-2be1-4165-9da5-322aa2a337b0
	I0916 10:58:41.346382  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:41.346569  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:41.347128  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:41.347145  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:41.347153  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:41.347159  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:41.348988  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:41.349005  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:41.349014  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:41.349019  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:41.349023  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:41.349028  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:41 GMT
	I0916 10:58:41.349034  174032 round_trippers.go:580]     Audit-Id: 0bf83603-2d02-41dc-9c11-65da9963170e
	I0916 10:58:41.349041  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:41.349204  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:41.349525  174032 pod_ready.go:103] pod "kube-apiserver-multinode-079070" in "kube-system" namespace has status "Ready":"False"
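
The entries above repeat a two-request cycle roughly every 500ms: GET the kube-apiserver pod, GET its node, then re-check the pod's Ready condition. As a minimal sketch of that pattern (assuming a configured client-go clientset; this illustrates the shape of the loop, not minikube's actual pod_ready implementation), it looks like:

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the pod (and its node, matching the second GET in
// each cycle of the log above) every 500ms until the pod's Ready
// condition is True or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, pod, node string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			if _, err := cs.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{}); err != nil {
				return false, err
			}
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady {
					// Corresponds to the `has status "Ready":"False"` lines.
					fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", pod, ns, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
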
	I0916 10:58:41.844299  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:41.844321  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:41.844328  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:41.844332  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:41.846743  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:41.846768  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:41.846777  174032 round_trippers.go:580]     Audit-Id: 284269a5-66dc-433e-97b9-17810fb05334
	I0916 10:58:41.846781  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:41.846786  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:41.846790  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:41.846794  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:41.846800  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:41 GMT
	I0916 10:58:41.846953  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:41.847412  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:41.847425  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:41.847432  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:41.847435  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:41.849327  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:41.849353  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:41.849362  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:41.849389  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:41 GMT
	I0916 10:58:41.849395  174032 round_trippers.go:580]     Audit-Id: 22754033-a9eb-4bb1-8b1f-7e79eb226770
	I0916 10:58:41.849403  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:41.849410  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:41.849416  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:41.849531  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:42.343541  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:42.343568  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:42.343576  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:42.343579  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:42.345965  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:42.345987  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:42.345995  174032 round_trippers.go:580]     Audit-Id: bccbc1a2-b398-44b9-b8cb-b5f143f559bf
	I0916 10:58:42.346001  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:42.346005  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:42.346011  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:42.346014  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:42.346018  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:42 GMT
	I0916 10:58:42.346305  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:42.346798  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:42.346813  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:42.346820  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:42.346824  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:42.348608  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:42.348626  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:42.348633  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:42 GMT
	I0916 10:58:42.348639  174032 round_trippers.go:580]     Audit-Id: 97759470-370b-4ea5-8a17-aa82de600c23
	I0916 10:58:42.348645  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:42.348651  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:42.348659  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:42.348663  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:42.348765  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:42.844472  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:42.844496  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:42.844503  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:42.844508  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:42.846892  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:42.846916  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:42.846925  174032 round_trippers.go:580]     Audit-Id: a1336631-7899-484c-9b02-7d8750682077
	I0916 10:58:42.846929  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:42.846935  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:42.846939  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:42.846943  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:42.846947  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:42 GMT
	I0916 10:58:42.847102  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:42.847698  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:42.847715  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:42.847725  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:42.847729  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:42.849490  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:42.849505  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:42.849515  174032 round_trippers.go:580]     Audit-Id: b4ec4abb-4253-40f2-b4ea-61cfe1b6ec7c
	I0916 10:58:42.849518  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:42.849522  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:42.849524  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:42.849527  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:42.849529  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:42 GMT
	I0916 10:58:42.849704  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:43.344424  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:43.344451  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:43.344458  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:43.344463  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:43.346829  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:43.346851  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:43.346858  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:43.346862  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:43.346866  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:43.346869  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:43 GMT
	I0916 10:58:43.346875  174032 round_trippers.go:580]     Audit-Id: 400ec710-e5ee-40b2-a04e-fc3c394a6660
	I0916 10:58:43.346880  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:43.347063  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:43.347545  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:43.347561  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:43.347570  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:43.347574  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:43.349352  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:43.349373  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:43.349380  174032 round_trippers.go:580]     Audit-Id: 37efe32e-ffde-4b59-a58c-8f46e0e5d4e3
	I0916 10:58:43.349384  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:43.349388  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:43.349393  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:43.349397  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:43.349401  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:43 GMT
	I0916 10:58:43.349540  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:43.349822  174032 pod_ready.go:103] pod "kube-apiserver-multinode-079070" in "kube-system" namespace has status "Ready":"False"
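
Note that across these iterations the pod's resourceVersion stays at 646, so each poll re-fetches an unchanged object. A watch streams only updates instead; a hedged sketch of that alternative (again assuming client-go; the log above shows plain polling, not this), omitting the initial list and retry handling a production version would need:

package podwait

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchPodReady blocks until the named pod reports Ready=True, receiving
// change events rather than re-GETting an unchanged object every 500ms.
func watchPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	w, err := cs.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
		FieldSelector: "metadata.name=" + name,
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue // e.g. a *metav1.Status error event
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
	}
	return fmt.Errorf("watch on pod %q closed before it became ready", name)
}
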
	I0916 10:58:43.844231  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:43.844254  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:43.844262  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:43.844266  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:43.846620  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:43.846640  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:43.846649  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:43 GMT
	I0916 10:58:43.846658  174032 round_trippers.go:580]     Audit-Id: 7705533e-40e8-4015-8183-4ffb3cbfecb4
	I0916 10:58:43.846663  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:43.846668  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:43.846672  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:43.846677  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:43.846842  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:43.847417  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:43.847433  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:43.847444  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:43.847453  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:43.849137  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:43.849157  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:43.849163  174032 round_trippers.go:580]     Audit-Id: a3acaeb6-337c-4a0d-8b47-33b1b4901bc5
	I0916 10:58:43.849167  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:43.849170  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:43.849174  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:43.849177  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:43.849180  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:43 GMT
	I0916 10:58:43.849342  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:44.343925  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:44.343949  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:44.343956  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:44.343961  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:44.346269  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:44.346290  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:44.346297  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:44.346302  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:44 GMT
	I0916 10:58:44.346306  174032 round_trippers.go:580]     Audit-Id: e3c3dc98-0e13-45e7-894d-b34ee246ef34
	I0916 10:58:44.346317  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:44.346325  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:44.346329  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:44.346572  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:44.347027  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:44.347040  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:44.347047  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:44.347053  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:44.348827  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:44.348844  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:44.348852  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:44.348858  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:44 GMT
	I0916 10:58:44.348862  174032 round_trippers.go:580]     Audit-Id: 888e99b5-18cb-49e1-9b02-c0cc4adf8975
	I0916 10:58:44.348866  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:44.348870  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:44.348875  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:44.349016  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:44.843656  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:44.843683  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:44.843692  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:44.843696  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:44.846169  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:44.846189  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:44.846195  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:44.846198  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:44 GMT
	I0916 10:58:44.846202  174032 round_trippers.go:580]     Audit-Id: 066a5c14-49c4-40d9-b816-63853706ed06
	I0916 10:58:44.846206  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:44.846209  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:44.846211  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:44.846376  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:44.846837  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:44.846851  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:44.846859  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:44.846868  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:44.848841  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:44.848858  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:44.848864  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:44.848869  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:44 GMT
	I0916 10:58:44.848872  174032 round_trippers.go:580]     Audit-Id: 732a4425-d245-4d01-9284-f829b362e8b8
	I0916 10:58:44.848878  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:44.848881  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:44.848887  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:44.849053  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:45.343707  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:45.343759  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:45.343772  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:45.343780  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:45.346216  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:45.346242  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:45.346251  174032 round_trippers.go:580]     Audit-Id: 05bed2fd-a9e8-4b97-9a08-72ceb4908542
	I0916 10:58:45.346256  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:45.346259  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:45.346262  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:45.346265  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:45.346268  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:45 GMT
	I0916 10:58:45.346396  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:45.346884  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:45.346899  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:45.346906  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:45.346910  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:45.348619  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:45.348640  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:45.348649  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:45.348659  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:45.348662  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:45 GMT
	I0916 10:58:45.348665  174032 round_trippers.go:580]     Audit-Id: 9e33b98f-15f2-48d9-b4cc-3658d1d15d46
	I0916 10:58:45.348668  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:45.348672  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:45.348774  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:45.844439  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:45.844461  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:45.844472  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:45.844478  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:45.846803  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:45.846823  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:45.846829  174032 round_trippers.go:580]     Audit-Id: 0f55f119-d28f-4e0c-849b-17eeb51da29f
	I0916 10:58:45.846833  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:45.846835  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:45.846839  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:45.846842  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:45.846845  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:45 GMT
	I0916 10:58:45.847029  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:45.847585  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:45.847600  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:45.847608  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:45.847612  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:45.849401  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:45.849417  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:45.849421  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:45.849426  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:45.849430  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:45.849433  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:45 GMT
	I0916 10:58:45.849435  174032 round_trippers.go:580]     Audit-Id: 9362d16e-6ec0-42e1-a041-0d44e38d3f1e
	I0916 10:58:45.849437  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:45.849594  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:45.849919  174032 pod_ready.go:103] pod "kube-apiserver-multinode-079070" in "kube-system" namespace has status "Ready":"False"
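
The round_trippers.go lines themselves are client-go's request/response debug logging, emitted at high verbosity. A minimal sketch of the idea (an illustrative http.RoundTripper producing output in roughly this shape, not client-go's actual implementation):

package debugrt

import (
	"fmt"
	"net/http"
)

// loggingRoundTripper wraps another RoundTripper and prints the request
// line, request headers, response status, and response headers, roughly
// mirroring the GET / Request Headers / Response Status / Response
// Headers groups in the trace above.
type loggingRoundTripper struct {
	next http.RoundTripper
}

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	fmt.Printf("%s %s\nRequest Headers:\n", req.Method, req.URL)
	for k, vs := range req.Header {
		for _, v := range vs {
			fmt.Printf("    %s: %s\n", k, v)
		}
	}
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	fmt.Printf("Response Status: %s\nResponse Headers:\n", resp.Status)
	for k, vs := range resp.Header {
		for _, v := range vs {
			fmt.Printf("    %s: %s\n", k, v)
		}
	}
	return resp, nil
}

It would be installed by wrapping a client's transport, e.g. client.Transport = loggingRoundTripper{next: http.DefaultTransport}.
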
	I0916 10:58:46.344381  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:46.344408  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:46.344419  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:46.344423  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:46.346820  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:46.346843  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:46.346853  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:46.346859  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:46.346863  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:46.346867  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:46 GMT
	I0916 10:58:46.346872  174032 round_trippers.go:580]     Audit-Id: 822cfe8f-5651-46fb-bd27-82afd3a7a5f1
	I0916 10:58:46.346875  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:46.347056  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"646","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8975 chars]
	I0916 10:58:46.347591  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:46.347607  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:46.347617  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:46.347625  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:46.349457  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:46.349478  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:46.349490  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:46.349496  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:46.349501  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:46 GMT
	I0916 10:58:46.349506  174032 round_trippers.go:580]     Audit-Id: bc9a0056-d17a-4800-9880-29d1d9ad80d7
	I0916 10:58:46.349509  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:46.349513  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:46.349677  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:46.844454  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:58:46.844482  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:46.844492  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:46.844497  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:46.846582  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:46.846608  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:46.846619  174032 round_trippers.go:580]     Audit-Id: 71785f7a-f5e9-4ba4-b48c-4f39d3829687
	I0916 10:58:46.846624  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:46.846627  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:46.846633  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:46.846638  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:46.846642  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:46 GMT
	I0916 10:58:46.846861  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"747","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8731 chars]
	I0916 10:58:46.847316  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:46.847330  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:46.847345  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:46.847351  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:46.849242  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:46.849260  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:46.849269  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:46 GMT
	I0916 10:58:46.849274  174032 round_trippers.go:580]     Audit-Id: 479fd7da-784b-4b90-81c9-839bea8ca5f3
	I0916 10:58:46.849279  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:46.849282  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:46.849285  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:46.849290  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:46.849404  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:46.849781  174032 pod_ready.go:93] pod "kube-apiserver-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:46.849803  174032 pod_ready.go:82] duration metric: took 15.006531378s for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:46.849822  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:46.849899  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-079070
	I0916 10:58:46.849910  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:46.849919  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:46.849929  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:46.851598  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:46.851613  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:46.851619  174032 round_trippers.go:580]     Audit-Id: bfbdda84-d380-4b4d-a310-180269726f6b
	I0916 10:58:46.851623  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:46.851627  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:46.851633  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:46.851640  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:46.851644  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:46 GMT
	I0916 10:58:46.851812  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-079070","namespace":"kube-system","uid":"44a2f17c-654a-4d64-b474-70ce475c3afe","resourceVersion":"649","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.mirror":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.seen":"2024-09-16T10:56:25.844752735Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8568 chars]
	I0916 10:58:46.852317  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:46.852331  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:46.852338  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:46.852342  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:46.854106  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:46.854123  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:46.854129  174032 round_trippers.go:580]     Audit-Id: edf7e8aa-ed43-4179-a160-9e27b7e9c3fa
	I0916 10:58:46.854132  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:46.854138  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:46.854143  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:46.854149  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:46.854153  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:46 GMT
	I0916 10:58:46.854246  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:47.350510  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-079070
	I0916 10:58:47.350533  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:47.350540  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.350545  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.352834  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:47.352857  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:47.352865  174032 round_trippers.go:580]     Audit-Id: 60896ef5-bab6-49c9-9d08-60815d69de00
	I0916 10:58:47.352871  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.352875  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.352879  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:47.352883  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:47.352888  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.353074  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-079070","namespace":"kube-system","uid":"44a2f17c-654a-4d64-b474-70ce475c3afe","resourceVersion":"751","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.mirror":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.seen":"2024-09-16T10:56:25.844752735Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8306 chars]
	I0916 10:58:47.353648  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:47.353665  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:47.353673  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.353679  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.355421  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:47.355441  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:47.355450  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.355456  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:47.355461  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:47.355469  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.355476  174032 round_trippers.go:580]     Audit-Id: 12c80a33-4318-49f7-8f8a-70cf1d12fd57
	I0916 10:58:47.355480  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.355624  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:47.355943  174032 pod_ready.go:93] pod "kube-controller-manager-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:47.355960  174032 pod_ready.go:82] duration metric: took 506.128248ms for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:47.355972  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:47.356021  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vhmt
	I0916 10:58:47.356029  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:47.356035  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.356040  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.357804  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:47.357824  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:47.357833  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.357840  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:47.357845  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:47.357850  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.357858  174032 round_trippers.go:580]     Audit-Id: b49954a3-01ab-496d-b9cc-6411590965d5
	I0916 10:58:47.357863  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.357973  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2vhmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"6f3faf85-04e9-4840-855d-dd1ef9d4e463","resourceVersion":"664","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6388 chars]
	I0916 10:58:47.358384  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:47.358396  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:47.358403  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.358406  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.360048  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:47.360068  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:47.360077  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:47.360085  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:47.360090  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.360096  174032 round_trippers.go:580]     Audit-Id: e309741e-8411-4bef-8393-10d01dc884d8
	I0916 10:58:47.360104  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.360107  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.360229  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:47.360523  174032 pod_ready.go:93] pod "kube-proxy-2vhmt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:47.360538  174032 pod_ready.go:82] duration metric: took 4.560964ms for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:47.360547  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9z4qh" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:47.360613  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 10:58:47.360625  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:47.360631  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.360635  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.362272  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:47.362289  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:47.362302  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.362307  174032 round_trippers.go:580]     Audit-Id: c77bd686-5129-495c-8fe4-5ed826f912e2
	I0916 10:58:47.362311  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.362317  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.362321  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:47.362327  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:47.362472  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9z4qh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d","resourceVersion":"580","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6183 chars]
	I0916 10:58:47.362866  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:58:47.362882  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:47.362891  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.362896  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.364488  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:47.364500  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:47.364510  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.364515  174032 round_trippers.go:580]     Audit-Id: eee8e036-1dea-4fd2-8408-0cefc37940a4
	I0916 10:58:47.364519  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.364523  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.364526  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:47.364530  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:47.364621  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m03","uid":"6e80e382-2eee-493d-a8de-e048ca27cfc5","resourceVersion":"606","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_57_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fiel
dsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4841 chars]
	I0916 10:58:47.364891  174032 pod_ready.go:93] pod "kube-proxy-9z4qh" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:47.364906  174032 pod_ready.go:82] duration metric: took 4.351869ms for pod "kube-proxy-9z4qh" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:47.364919  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xkr65" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:47.364972  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:58:47.364981  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:47.364990  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.364997  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.366535  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:47.366553  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:47.366562  174032 round_trippers.go:580]     Audit-Id: dd7a5ff7-2d68-4641-a943-8d18d8c5f9b0
	I0916 10:58:47.366567  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.366572  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.366578  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:47.366582  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:47.366585  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.366717  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"473","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6183 chars]
	I0916 10:58:47.367100  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:58:47.367116  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:47.367125  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.367129  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.368571  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:47.368591  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:47.368599  174032 round_trippers.go:580]     Audit-Id: 81f16ec3-9763-4ab5-ad42-2cdb5767e20f
	I0916 10:58:47.368604  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.368608  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.368611  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:47.368619  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:47.368622  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.368741  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"536","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fiel
dsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4991 chars]
	I0916 10:58:47.369040  174032 pod_ready.go:93] pod "kube-proxy-xkr65" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:47.369056  174032 pod_ready.go:82] duration metric: took 4.130492ms for pod "kube-proxy-xkr65" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:47.369067  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:47.445393  174032 request.go:632] Waited for 76.245386ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 10:58:47.445470  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 10:58:47.445482  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:47.445494  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.445505  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.447963  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:47.447982  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:47.447988  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.447991  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:47.447994  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:47.447997  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.448001  174032 round_trippers.go:580]     Audit-Id: ab28a302-1e9f-4c9c-8d0c-4952e6ab183b
	I0916 10:58:47.448004  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.448223  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"740","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5188 chars]
	I0916 10:58:47.644913  174032 request.go:632] Waited for 196.280965ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:47.644970  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:47.644975  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:47.644982  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.644985  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.646989  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:47.647010  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:47.647019  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.647025  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.647028  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:47.647032  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:47.647035  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.647039  174032 round_trippers.go:580]     Audit-Id: 333b4a2f-65ab-419a-b7b6-e4de1388699c
	I0916 10:58:47.647209  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:47.647511  174032 pod_ready.go:93] pod "kube-scheduler-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:58:47.647530  174032 pod_ready.go:82] duration metric: took 278.452028ms for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:47.647544  174032 pod_ready.go:39] duration metric: took 16.007018965s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
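The loop above is the readiness gate: for each system-critical pod it pairs a GET on the pod with a GET on its node, and moves on once the pod's PodReady condition reports True. A minimal client-go sketch of that same test, assuming a kubeconfig at the default path (an illustration of the check, not minikube's own pod_ready.go):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True -- the same
    // test the pod_ready.go lines in the log apply to each pod.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// Assumes ~/.kube/config; minikube builds its config from the profile.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"kube-apiserver-multinode-079070", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // the log polls at roughly this cadence
    	}
    }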
	I0916 10:58:47.647570  174032 api_server.go:52] waiting for apiserver process to appear ...
	I0916 10:58:47.647639  174032 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:58:47.657971  174032 command_runner.go:130] > 1103
	I0916 10:58:47.658847  174032 api_server.go:72] duration metric: took 19.527709436s to wait for apiserver process to appear ...
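The process check is a single pgrep: -f matches against the full command line, -x requires the whole line to match the pattern, and -n selects the newest matching pid (1103 above). A local equivalent of what the log runs on the node through ssh_runner with sudo:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same pattern the log ships over SSH; run locally here for the sketch.
    	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		fmt.Println("no matching apiserver process:", err)
    		return
    	}
    	fmt.Println("apiserver pid:", strings.TrimSpace(string(out))) // "1103" in the log
    }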
	I0916 10:58:47.658869  174032 api_server.go:88] waiting for apiserver healthz status ...
	I0916 10:58:47.658889  174032 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0916 10:58:47.662482  174032 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
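The healthz probe is a plain HTTPS GET that passes on a 200 response with body "ok". A standalone sketch; TLS verification is skipped here only to keep the example short, whereas minikube authenticates with the cluster CA from its kubeconfig:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Endpoint taken from the log; adjust host/port for a real cluster.
    	url := "https://192.168.67.2:8443/healthz"
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Verification skipped for the sketch only; a real check should
    		// load the cluster CA from the kubeconfig instead.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		fmt.Println("healthz unreachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body) // expect "ok"
    }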
	I0916 10:58:47.662566  174032 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0916 10:58:47.662577  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:47.662589  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.662599  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.663440  174032 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0916 10:58:47.663459  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:47.663468  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:47.663474  174032 round_trippers.go:580]     Content-Length: 263
	I0916 10:58:47.663478  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.663484  174032 round_trippers.go:580]     Audit-Id: 5793a115-0501-437e-a094-3db76aa0cbae
	I0916 10:58:47.663488  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.663494  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.663497  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:47.663524  174032 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 10:58:47.663613  174032 api_server.go:141] control plane version: v1.31.1
	I0916 10:58:47.663631  174032 api_server.go:131] duration metric: took 4.754892ms to wait for apiserver health ...
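The /version body decodes into a small struct; only a few of its fields feed the "control plane version" line. A sketch using the payload logged above:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // versionInfo mirrors a subset of the /version payload shown in the log.
    type versionInfo struct {
    	Major      string `json:"major"`
    	Minor      string `json:"minor"`
    	GitVersion string `json:"gitVersion"`
    	Platform   string `json:"platform"`
    }

    func main() {
    	// Body copied from the log; in practice this comes from GET /version.
    	raw := []byte(`{"major":"1","minor":"31","gitVersion":"v1.31.1","platform":"linux/amd64"}`)
    	var v versionInfo
    	if err := json.Unmarshal(raw, &v); err != nil {
    		panic(err)
    	}
    	fmt.Printf("control plane version: %s (%s.%s)\n", v.GitVersion, v.Major, v.Minor)
    }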
	I0916 10:58:47.663641  174032 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 10:58:47.845087  174032 request.go:632] Waited for 181.351589ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:58:47.845160  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:58:47.845167  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:47.845177  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:47.845186  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:47.848022  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:47.848048  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:47.848062  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:47.848067  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:47 GMT
	I0916 10:58:47.848074  174032 round_trippers.go:580]     Audit-Id: 036ca4f1-e601-441d-9b3e-25adbc860e20
	I0916 10:58:47.848078  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:47.848083  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:47.848086  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:47.848720  174032 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"751"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 90107 chars]
	I0916 10:58:47.852797  174032 system_pods.go:59] 12 kube-system pods found
	I0916 10:58:47.852832  174032 system_pods.go:61] "coredns-7c65d6cfc9-ft9gh" [8052b6a1-7257-44d4-a318-740afd039d2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:58:47.852841  174032 system_pods.go:61] "etcd-multinode-079070" [fdcbee48-0d3b-47d7-acd2-298979345c7a] Running
	I0916 10:58:47.852845  174032 system_pods.go:61] "kindnet-flmdv" [91449e63-0ca3-4dc6-92ef-e3c5ab102dae] Running
	I0916 10:58:47.852849  174032 system_pods.go:61] "kindnet-fs5x4" [3c4eb83d-3eba-427a-ac72-d8967f67abc1] Running
	I0916 10:58:47.852857  174032 system_pods.go:61] "kindnet-kxnzq" [bdf63c4c-0d22-4d74-b604-df3131d86f07] Running
	I0916 10:58:47.852864  174032 system_pods.go:61] "kube-apiserver-multinode-079070" [72784d35-4a94-476c-a4e9-dc3c6c3a8c46] Running
	I0916 10:58:47.852874  174032 system_pods.go:61] "kube-controller-manager-multinode-079070" [44a2f17c-654a-4d64-b474-70ce475c3afe] Running
	I0916 10:58:47.852879  174032 system_pods.go:61] "kube-proxy-2vhmt" [6f3faf85-04e9-4840-855d-dd1ef9d4e463] Running
	I0916 10:58:47.852884  174032 system_pods.go:61] "kube-proxy-9z4qh" [7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d] Running
	I0916 10:58:47.852888  174032 system_pods.go:61] "kube-proxy-xkr65" [b8d1009a-f71f-4cb1-a2f0-510a2894874f] Running
	I0916 10:58:47.852891  174032 system_pods.go:61] "kube-scheduler-multinode-079070" [cc5dd2a3-a136-42b4-a6f9-11c733cb38c4] Running
	I0916 10:58:47.852894  174032 system_pods.go:61] "storage-provisioner" [43862f2e-c773-468d-ab03-8b0bc0633ad4] Running
	I0916 10:58:47.852901  174032 system_pods.go:74] duration metric: took 189.253398ms to wait for pod list to return data ...
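The recurring "Waited ... due to client-side throttling" messages come from client-go's client-side rate limiter (defaults of 5 QPS with a burst of 10), not from server-side priority and fairness. A sketch that performs the same kube-system pod list with a loosened limiter; the QPS and Burst values are illustrative, not what minikube configures:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	// client-go defaults to QPS 5 / Burst 10; raising them avoids the
    	// client-side waits seen in the log. Values here are illustrative.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    }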
	I0916 10:58:47.852910  174032 default_sa.go:34] waiting for default service account to be created ...
	I0916 10:58:48.045324  174032 request.go:632] Waited for 192.33468ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:58:48.045378  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 10:58:48.045383  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:48.045390  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:48.045395  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:48.048253  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:48.048271  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:48.048278  174032 round_trippers.go:580]     Audit-Id: a30cf111-7b16-4332-8d18-0c40e64b9201
	I0916 10:58:48.048282  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:48.048285  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:48.048287  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:48.048291  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:48.048301  174032 round_trippers.go:580]     Content-Length: 261
	I0916 10:58:48.048303  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:48 GMT
	I0916 10:58:48.048324  174032 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"751"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4622bf83-82d0-4a2c-a46c-d6dbfa5ce9ea","resourceVersion":"300","creationTimestamp":"2024-09-16T10:56:30Z"}}]}
	I0916 10:58:48.048507  174032 default_sa.go:45] found service account: "default"
	I0916 10:58:48.048522  174032 default_sa.go:55] duration metric: took 195.605577ms for default service account to be created ...
	I0916 10:58:48.048530  174032 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 10:58:48.244777  174032 request.go:632] Waited for 196.175046ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:58:48.244866  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:58:48.244878  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:48.244889  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:48.244897  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:48.248065  174032 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:58:48.248091  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:48.248107  174032 round_trippers.go:580]     Audit-Id: a8fb2565-dd52-4543-8dcb-aec3467d918b
	I0916 10:58:48.248111  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:48.248115  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:48.248117  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:48.248121  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:48.248124  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:48 GMT
	I0916 10:58:48.248816  174032 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"751"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 90107 chars]
	I0916 10:58:48.251523  174032 system_pods.go:86] 12 kube-system pods found
	I0916 10:58:48.251557  174032 system_pods.go:89] "coredns-7c65d6cfc9-ft9gh" [8052b6a1-7257-44d4-a318-740afd039d2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 10:58:48.251566  174032 system_pods.go:89] "etcd-multinode-079070" [fdcbee48-0d3b-47d7-acd2-298979345c7a] Running
	I0916 10:58:48.251573  174032 system_pods.go:89] "kindnet-flmdv" [91449e63-0ca3-4dc6-92ef-e3c5ab102dae] Running
	I0916 10:58:48.251578  174032 system_pods.go:89] "kindnet-fs5x4" [3c4eb83d-3eba-427a-ac72-d8967f67abc1] Running
	I0916 10:58:48.251588  174032 system_pods.go:89] "kindnet-kxnzq" [bdf63c4c-0d22-4d74-b604-df3131d86f07] Running
	I0916 10:58:48.251595  174032 system_pods.go:89] "kube-apiserver-multinode-079070" [72784d35-4a94-476c-a4e9-dc3c6c3a8c46] Running
	I0916 10:58:48.251603  174032 system_pods.go:89] "kube-controller-manager-multinode-079070" [44a2f17c-654a-4d64-b474-70ce475c3afe] Running
	I0916 10:58:48.251612  174032 system_pods.go:89] "kube-proxy-2vhmt" [6f3faf85-04e9-4840-855d-dd1ef9d4e463] Running
	I0916 10:58:48.251618  174032 system_pods.go:89] "kube-proxy-9z4qh" [7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d] Running
	I0916 10:58:48.251624  174032 system_pods.go:89] "kube-proxy-xkr65" [b8d1009a-f71f-4cb1-a2f0-510a2894874f] Running
	I0916 10:58:48.251630  174032 system_pods.go:89] "kube-scheduler-multinode-079070" [cc5dd2a3-a136-42b4-a6f9-11c733cb38c4] Running
	I0916 10:58:48.251639  174032 system_pods.go:89] "storage-provisioner" [43862f2e-c773-468d-ab03-8b0bc0633ad4] Running
	I0916 10:58:48.251649  174032 system_pods.go:126] duration metric: took 203.110618ms to wait for k8s-apps to be running ...
	I0916 10:58:48.251662  174032 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:58:48.251711  174032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:58:48.263215  174032 system_svc.go:56] duration metric: took 11.543309ms WaitForService to wait for kubelet
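The kubelet check relies on exit codes alone: systemctl is-active --quiet prints nothing and exits 0 only when the unit is active. A local sketch (the log issues the command over SSH with sudo, with its own argument spelling):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// --quiet suppresses output; the exit code alone reports unit state.
    	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
    		fmt.Println("kubelet is not active:", err)
    		return
    	}
    	fmt.Println("kubelet is active")
    }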
	I0916 10:58:48.263245  174032 kubeadm.go:582] duration metric: took 20.132110234s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:58:48.263267  174032 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:58:48.444612  174032 request.go:632] Waited for 181.263347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0916 10:58:48.444689  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:58:48.444694  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:48.444701  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:48.444705  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:48.447312  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:48.447336  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:48.447345  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:48.447350  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:48.447355  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:48.447359  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:48 GMT
	I0916 10:58:48.447363  174032 round_trippers.go:580]     Audit-Id: 5a883412-a067-49d4-96ae-14ec2b1d15a1
	I0916 10:58:48.447368  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:48.447589  174032 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"751"},"items":[{"metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manag
edFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 17104 chars]
	I0916 10:58:48.448521  174032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:58:48.448550  174032 node_conditions.go:123] node cpu capacity is 8
	I0916 10:58:48.448565  174032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:58:48.448572  174032 node_conditions.go:123] node cpu capacity is 8
	I0916 10:58:48.448578  174032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:58:48.448583  174032 node_conditions.go:123] node cpu capacity is 8
	I0916 10:58:48.448591  174032 node_conditions.go:105] duration metric: took 185.318949ms to run NodePressure ...
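The NodePressure pass needs one node list; the pairs of figures logged per node are the ephemeral-storage and cpu capacities from each node's status. A client-go sketch of the same read (kubeconfig path again an assumption):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// Matches the two capacity figures printed per node in the log.
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    	}
    }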
	I0916 10:58:48.448604  174032 start.go:241] waiting for startup goroutines ...
	I0916 10:58:48.448618  174032 start.go:246] waiting for cluster config update ...
	I0916 10:58:48.448631  174032 start.go:255] writing updated cluster config ...
	I0916 10:58:48.451062  174032 out.go:201] 
	I0916 10:58:48.452645  174032 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:58:48.452770  174032 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 10:58:48.454480  174032 out.go:177] * Starting "multinode-079070-m02" worker node in "multinode-079070" cluster
	I0916 10:58:48.455796  174032 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:58:48.457280  174032 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:58:48.458700  174032 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:58:48.458729  174032 cache.go:56] Caching tarball of preloaded images
	I0916 10:58:48.458794  174032 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:58:48.458876  174032 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:58:48.458893  174032 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:58:48.459034  174032 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	W0916 10:58:48.479080  174032 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:58:48.479107  174032 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:58:48.479207  174032 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:58:48.479229  174032 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:58:48.479233  174032 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:58:48.479254  174032 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:58:48.479265  174032 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:58:48.480426  174032 image.go:273] response: 
	I0916 10:58:48.534968  174032 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:58:48.534997  174032 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:58:48.535037  174032 start.go:360] acquireMachinesLock for multinode-079070-m02: {Name:mk1713c8fba020df744918162d1a483c7b41a015 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:58:48.535132  174032 start.go:364] duration metric: took 72.61µs to acquireMachinesLock for "multinode-079070-m02"
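acquireMachinesLock serializes machine operations on a profile, which is why the uncontended acquire above completes in microseconds while the lock config carries a 10m0s timeout. A channel-based sketch of an acquire-with-timeout in the same shape; minikube's real lock is a named cross-process mutex, which this in-process version does not reproduce:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // timedLock is an in-process stand-in for the named machine lock in the log.
    type timedLock chan struct{}

    func newTimedLock() timedLock {
    	l := make(timedLock, 1)
    	l <- struct{}{} // token available: unlocked
    	return l
    }

    func (l timedLock) Acquire(timeout time.Duration) error {
    	select {
    	case <-l:
    		return nil
    	case <-time.After(timeout):
    		return errors.New("timed out acquiring lock")
    	}
    }

    func (l timedLock) Release() { l <- struct{}{} }

    func main() {
    	lock := newTimedLock()
    	start := time.Now()
    	if err := lock.Acquire(10 * time.Minute); err != nil {
    		panic(err)
    	}
    	defer lock.Release()
    	fmt.Printf("took %s to acquire lock\n", time.Since(start)) // ~instant when uncontended
    }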
	I0916 10:58:48.535153  174032 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:58:48.535161  174032 fix.go:54] fixHost starting: m02
	I0916 10:58:48.535402  174032 cli_runner.go:164] Run: docker container inspect multinode-079070-m02 --format={{.State.Status}}
	I0916 10:58:48.552056  174032 fix.go:112] recreateIfNeeded on multinode-079070-m02: state=Stopped err=<nil>
	W0916 10:58:48.552090  174032 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:58:48.554323  174032 out.go:177] * Restarting existing docker container for "multinode-079070-m02" ...
	I0916 10:58:48.555546  174032 cli_runner.go:164] Run: docker start multinode-079070-m02
	I0916 10:58:48.826050  174032 cli_runner.go:164] Run: docker container inspect multinode-079070-m02 --format={{.State.Status}}
	I0916 10:58:48.844998  174032 kic.go:430] container "multinode-079070-m02" state is running.
	I0916 10:58:48.845405  174032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m02
	I0916 10:58:48.863791  174032 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 10:58:48.864076  174032 machine.go:93] provisionDockerMachine start ...
	I0916 10:58:48.864148  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:58:48.882228  174032 main.go:141] libmachine: Using SSH client type: native
	I0916 10:58:48.882422  174032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32933 <nil> <nil>}
	I0916 10:58:48.882433  174032 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:58:48.883132  174032 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58622->127.0.0.1:32933: read: connection reset by peer
	I0916 10:58:52.023467  174032 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070-m02
	
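The "native" SSH client here is minikube's in-process Go SSH path (golang.org/x/crypto/ssh) rather than the system ssh binary; the first dial fails with a connection reset because the freshly restarted container is not yet accepting connections, and the runner retries. A minimal sketch of the same round trip, assuming the forwarded port (32933) and machine key path shown in this particular run — both change between runs:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Port and key path are copied from this run's log; both vary per run.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Acceptable for a throwaway local test container, never for real hosts.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32933", cfg)
        if err != nil {
            panic(err) // a dial this early can fail exactly as the log shows
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, _ := sess.Output("hostname")
        fmt.Printf("%s", out) // expect: multinode-079070-m02
    }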
	I0916 10:58:52.023502  174032 ubuntu.go:169] provisioning hostname "multinode-079070-m02"
	I0916 10:58:52.023552  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:58:52.040365  174032 main.go:141] libmachine: Using SSH client type: native
	I0916 10:58:52.040560  174032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32933 <nil> <nil>}
	I0916 10:58:52.040578  174032 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-079070-m02 && echo "multinode-079070-m02" | sudo tee /etc/hostname
	I0916 10:58:52.182805  174032 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070-m02
	
	I0916 10:58:52.182870  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:58:52.200017  174032 main.go:141] libmachine: Using SSH client type: native
	I0916 10:58:52.200184  174032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32933 <nil> <nil>}
	I0916 10:58:52.200200  174032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-079070-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-079070-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-079070-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:58:52.335589  174032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
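The script above is deliberately idempotent: it leaves /etc/hosts alone if any entry already ends in the node name, rewrites an existing 127.0.1.1 line in place, and otherwise appends one. A sketch of how such a per-node command string can be templated (an illustrative helper, not minikube's actual code):

    package main

    import "fmt"

    // hostsPatchCmd renders the idempotent /etc/hosts patch shown in the log
    // for an arbitrary node name. Illustrative only.
    func hostsPatchCmd(name string) string {
        return fmt.Sprintf(`
        if ! grep -xq '.*\s%[1]s' /etc/hosts; then
            if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
            else
                echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
            fi
        fi`, name)
    }

    func main() {
        fmt.Println(hostsPatchCmd("multinode-079070-m02"))
    }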
	I0916 10:58:52.335615  174032 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:58:52.335638  174032 ubuntu.go:177] setting up certificates
	I0916 10:58:52.335650  174032 provision.go:84] configureAuth start
	I0916 10:58:52.335710  174032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m02
	I0916 10:58:52.352263  174032 provision.go:143] copyHostCerts
	I0916 10:58:52.352297  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:58:52.352333  174032 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:58:52.352342  174032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:58:52.352405  174032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:58:52.352476  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:58:52.352493  174032 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:58:52.352499  174032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:58:52.352525  174032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:58:52.352567  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:58:52.352588  174032 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:58:52.352594  174032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:58:52.352617  174032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:58:52.352670  174032 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.multinode-079070-m02 san=[127.0.0.1 192.168.67.3 localhost minikube multinode-079070-m02]
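The server certificate generated here carries the SAN set listed in the log line: the loopback address, the node IP 192.168.67.3, and the usual DNS names. A self-contained crypto/x509 sketch of the same shape, using a throwaway in-memory CA instead of minikube's ca.pem/ca-key.pem (names and lifetimes are illustrative; error handling is elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway in-memory CA standing in for minikube's ca.pem/ca-key.pem.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        ca := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(1, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SAN set from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srv := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-079070-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.3")},
            DNSNames:     []string{"localhost", "minikube", "multinode-079070-m02"},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }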
	I0916 10:58:52.508719  174032 provision.go:177] copyRemoteCerts
	I0916 10:58:52.508775  174032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:58:52.508811  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:58:52.528870  174032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32933 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:58:52.624129  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:58:52.624198  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:58:52.645917  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:58:52.645988  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0916 10:58:52.667956  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:58:52.668028  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:58:52.689433  174032 provision.go:87] duration metric: took 353.769597ms to configureAuth
	I0916 10:58:52.689468  174032 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:58:52.689661  174032 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:58:52.689675  174032 machine.go:96] duration metric: took 3.825583728s to provisionDockerMachine
	I0916 10:58:52.689682  174032 start.go:293] postStartSetup for "multinode-079070-m02" (driver="docker")
	I0916 10:58:52.689694  174032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:58:52.689733  174032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:58:52.689768  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:58:52.706388  174032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32933 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:58:52.800307  174032 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:58:52.803203  174032 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:58:52.803222  174032 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:58:52.803228  174032 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:58:52.803233  174032 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:58:52.803238  174032 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:58:52.803242  174032 command_runner.go:130] > ID=ubuntu
	I0916 10:58:52.803246  174032 command_runner.go:130] > ID_LIKE=debian
	I0916 10:58:52.803250  174032 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:58:52.803254  174032 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:58:52.803265  174032 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:58:52.803279  174032 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:58:52.803291  174032 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:58:52.803347  174032 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:58:52.803370  174032 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:58:52.803379  174032 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:58:52.803387  174032 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:58:52.803395  174032 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:58:52.803445  174032 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:58:52.803511  174032 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:58:52.803520  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:58:52.803604  174032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:58:52.811175  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:58:52.832816  174032 start.go:296] duration metric: took 143.115935ms for postStartSetup
	I0916 10:58:52.832893  174032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:58:52.832942  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:58:52.850103  174032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32933 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:58:52.944321  174032 command_runner.go:130] > 32%
	I0916 10:58:52.944389  174032 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:58:52.948274  174032 command_runner.go:130] > 201G
	I0916 10:58:52.948522  174032 fix.go:56] duration metric: took 4.413358267s for fixHost
	I0916 10:58:52.948543  174032 start.go:83] releasing machines lock for "multinode-079070-m02", held for 4.41340044s
	I0916 10:58:52.948618  174032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m02
	I0916 10:58:52.967368  174032 out.go:177] * Found network options:
	I0916 10:58:52.968658  174032 out.go:177]   - NO_PROXY=192.168.67.2
	W0916 10:58:52.970509  174032 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:58:52.970557  174032 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:58:52.970630  174032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:58:52.970674  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:58:52.970718  174032 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:58:52.970779  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:58:52.988780  174032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32933 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:58:52.989557  174032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32933 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:58:53.154552  174032 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:58:53.154613  174032 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0916 10:58:53.154621  174032 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:58:53.154627  174032 command_runner.go:130] > Device: 100006h/1048582d	Inode: 821220      Links: 1
	I0916 10:58:53.154634  174032 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:58:53.154643  174032 command_runner.go:130] > Access: 2024-09-16 10:56:55.565386542 +0000
	I0916 10:58:53.154650  174032 command_runner.go:130] > Modify: 2024-09-16 10:56:55.537384074 +0000
	I0916 10:58:53.154655  174032 command_runner.go:130] > Change: 2024-09-16 10:56:55.537384074 +0000
	I0916 10:58:53.154662  174032 command_runner.go:130] >  Birth: 2024-09-16 10:56:55.537384074 +0000
	I0916 10:58:53.154717  174032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:58:53.172095  174032 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
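For reference, the patch above brings the stock loopback config up to CNI 1.0: judging by the find/sed command, the resulting /etc/cni/net.d/200-loopback.conf reads roughly {"cniVersion": "1.0.0", "name": "loopback", "type": "loopback"} — the "name" key is inserted only if missing, and the cniVersion value is pinned to 1.0.0.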
	I0916 10:58:53.172180  174032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:58:53.180224  174032 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:58:53.180250  174032 start.go:495] detecting cgroup driver to use...
	I0916 10:58:53.180281  174032 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:58:53.180328  174032 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:58:53.191091  174032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:58:53.201092  174032 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:58:53.201146  174032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:58:53.212550  174032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:58:53.222603  174032 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:58:53.304112  174032 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:58:53.380141  174032 docker.go:233] disabling docker service ...
	I0916 10:58:53.380207  174032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:58:53.391524  174032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:58:53.401677  174032 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:58:53.476102  174032 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:58:53.556064  174032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:58:53.566736  174032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:58:53.581747  174032 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0916 10:58:53.581826  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:58:53.590718  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:58:53.599648  174032 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:58:53.599720  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:58:53.609114  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:58:53.617896  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:58:53.626734  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:58:53.635569  174032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:58:53.644341  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:58:53.653117  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:58:53.662156  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
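The sed pipeline above rewrites a handful of CRI-plugin keys in /etc/containerd/config.toml to match the cgroupfs driver detected on the host. A Go sketch of the central SystemdCgroup edit, with the stanza it targets (containerd 1.7 CRI layout) noted in the comment; values mirror this run's choices:

    package main

    import (
        "os"
        "regexp"
    )

    // Target stanza after the edits (containerd 1.7 CRI plugin layout), roughly:
    //
    //   [plugins."io.containerd.grpc.v1.cri"]
    //     enable_unprivileged_ports = true
    //     sandbox_image = "registry.k8s.io/pause:3.10"
    //     [plugins."io.containerd.grpc.v1.cri".cni]
    //       conf_dir = "/etc/cni/net.d"
    //     [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    //       SystemdCgroup = false
    func main() {
        path := "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // Same substitution as the sed above: force the runc cgroup driver to cgroupfs.
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
    }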
	I0916 10:58:53.671277  174032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:58:53.679102  174032 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:58:53.679186  174032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:58:53.687376  174032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:58:53.766125  174032 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:58:53.875926  174032 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:58:53.875994  174032 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:58:53.879425  174032 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0916 10:58:53.879447  174032 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:58:53.879453  174032 command_runner.go:130] > Device: 10000fh/1048591d	Inode: 169         Links: 1
	I0916 10:58:53.879462  174032 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:58:53.879468  174032 command_runner.go:130] > Access: 2024-09-16 10:58:53.835810135 +0000
	I0916 10:58:53.879475  174032 command_runner.go:130] > Modify: 2024-09-16 10:58:53.835810135 +0000
	I0916 10:58:53.879480  174032 command_runner.go:130] > Change: 2024-09-16 10:58:53.835810135 +0000
	I0916 10:58:53.879484  174032 command_runner.go:130] >  Birth: -
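The 60-second wait is satisfied here by stat-ing the socket path (the mode string "srw-rw----" confirms a unix socket); a stricter variant polls until the socket actually accepts a connection. A small sketch of that loop:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // Poll the containerd socket until it accepts a connection or 60s elapse.
    func main() {
        deadline := time.Now().Add(60 * time.Second)
        for {
            conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("containerd socket is up")
                return
            }
            if time.Now().After(deadline) {
                panic(fmt.Sprintf("timed out waiting for socket: %v", err))
            }
            time.Sleep(500 * time.Millisecond)
        }
    }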
	I0916 10:58:53.879501  174032 start.go:563] Will wait 60s for crictl version
	I0916 10:58:53.879535  174032 ssh_runner.go:195] Run: which crictl
	I0916 10:58:53.882547  174032 command_runner.go:130] > /usr/bin/crictl
	I0916 10:58:53.882629  174032 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:58:53.914577  174032 command_runner.go:130] > Version:  0.1.0
	I0916 10:58:53.914601  174032 command_runner.go:130] > RuntimeName:  containerd
	I0916 10:58:53.914608  174032 command_runner.go:130] > RuntimeVersion:  1.7.22
	I0916 10:58:53.914612  174032 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:58:53.916585  174032 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:58:53.916641  174032 ssh_runner.go:195] Run: containerd --version
	I0916 10:58:53.937122  174032 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:58:53.938378  174032 ssh_runner.go:195] Run: containerd --version
	I0916 10:58:53.958496  174032 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:58:53.963073  174032 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:58:53.964480  174032 out.go:177]   - env NO_PROXY=192.168.67.2
	I0916 10:58:53.965727  174032 cli_runner.go:164] Run: docker network inspect multinode-079070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:58:53.982110  174032 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:58:53.985568  174032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:58:53.995751  174032 mustload.go:65] Loading cluster: multinode-079070
	I0916 10:58:53.995948  174032 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:58:53.996157  174032 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:58:54.012286  174032 host.go:66] Checking if "multinode-079070" exists ...
	I0916 10:58:54.012529  174032 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070 for IP: 192.168.67.3
	I0916 10:58:54.012540  174032 certs.go:194] generating shared ca certs ...
	I0916 10:58:54.012552  174032 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:58:54.012658  174032 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:58:54.012692  174032 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:58:54.012703  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:58:54.012720  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:58:54.012732  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:58:54.012744  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:58:54.012790  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:58:54.012816  174032 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:58:54.012826  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:58:54.012849  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:58:54.012883  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:58:54.012903  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:58:54.012939  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:58:54.012963  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:58:54.012976  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:54.012993  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:58:54.013013  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:58:54.034700  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:58:54.056438  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:58:54.079803  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:58:54.102375  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:58:54.124303  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:58:54.145833  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:58:54.168336  174032 ssh_runner.go:195] Run: openssl version
	I0916 10:58:54.173502  174032 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:58:54.173588  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:58:54.182279  174032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:58:54.185611  174032 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:58:54.185642  174032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:58:54.185682  174032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:58:54.191513  174032 command_runner.go:130] > 3ec20f2e
	I0916 10:58:54.191727  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 10:58:54.199681  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:58:54.207920  174032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:54.210888  174032 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:54.210934  174032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:54.210965  174032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:58:54.216845  174032 command_runner.go:130] > b5213941
	I0916 10:58:54.217064  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:58:54.224962  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:58:54.233539  174032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:58:54.236639  174032 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:58:54.236671  174032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:58:54.236712  174032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:58:54.242507  174032 command_runner.go:130] > 51391683
	I0916 10:58:54.242769  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 10:58:54.250530  174032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:58:54.253480  174032 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:58:54.253550  174032 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:58:54.253588  174032 kubeadm.go:934] updating node {m02 192.168.67.3 8443 v1.31.1 containerd false true} ...
	I0916 10:58:54.253679  174032 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-079070-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
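A note on the unit text above: in a systemd drop-in, the empty ExecStart= line is deliberate — it resets the ExecStart inherited from the base kubelet.service, so the following ExecStart line replaces the command instead of being rejected as a second one.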
	I0916 10:58:54.253732  174032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:58:54.260876  174032 command_runner.go:130] > kubeadm
	I0916 10:58:54.260900  174032 command_runner.go:130] > kubectl
	I0916 10:58:54.260906  174032 command_runner.go:130] > kubelet
	I0916 10:58:54.261622  174032 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:58:54.261673  174032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 10:58:54.269639  174032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0916 10:58:54.285931  174032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 10:58:54.301834  174032 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:58:54.304927  174032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:58:54.314637  174032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:58:54.393203  174032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:58:54.404037  174032 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0916 10:58:54.404362  174032 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:58:54.406286  174032 out.go:177] * Verifying Kubernetes components...
	I0916 10:58:54.407532  174032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:58:54.485143  174032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:58:54.496216  174032 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:58:54.496482  174032 kapi.go:59] client config for multinode-079070: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:58:54.496760  174032 node_ready.go:35] waiting up to 6m0s for node "multinode-079070-m02" to be "Ready" ...
	I0916 10:58:54.496844  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:58:54.496855  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:54.496863  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:54.496870  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:54.498869  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:54.498890  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:54.498900  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:54 GMT
	I0916 10:58:54.498906  174032 round_trippers.go:580]     Audit-Id: 2b4f9d60-ae9d-42b6-a59f-10947d9460ad
	I0916 10:58:54.498911  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:54.498917  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:54.498922  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:54.498929  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:54.499038  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"536","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4991 chars]
	I0916 10:58:54.499413  174032 node_ready.go:49] node "multinode-079070-m02" has status "Ready":"True"
	I0916 10:58:54.499433  174032 node_ready.go:38] duration metric: took 2.655483ms for node "multinode-079070-m02" to be "Ready" ...
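The readiness wait polls GET /api/v1/nodes/<name> through the kubeconfig loaded above. The equivalent check with client-go, assuming the kubeconfig path from this run:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path copied from this run's log; adjust as needed.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3687/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-079070-m02", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s\n", c.Status) // this run saw "True" immediately
            }
        }
    }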
	I0916 10:58:54.499446  174032 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:58:54.499522  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:58:54.499533  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:54.499543  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:54.499551  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:54.502363  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:54.502386  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:54.502393  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:54 GMT
	I0916 10:58:54.502397  174032 round_trippers.go:580]     Audit-Id: 5a0adfa8-9b57-402e-bd27-e3c0973defac
	I0916 10:58:54.502401  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:54.502404  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:54.502419  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:54.502423  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:54.504131  174032 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"755"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 90107 chars]
	I0916 10:58:54.507231  174032 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
	I0916 10:58:54.507310  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:58:54.507321  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:54.507327  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:54.507333  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:54.509267  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:54.509284  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:54.509292  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:54.509298  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:54.509302  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:54.509307  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:54.509311  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:54 GMT
	I0916 10:58:54.509318  174032 round_trippers.go:580]     Audit-Id: 4d703226-3c88-4da0-a4a2-9dfb29feeef7
	I0916 10:58:54.509428  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:58:54.509855  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:54.509869  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:54.509876  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:54.509879  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:54.511581  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:54.511596  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:54.511605  174032 round_trippers.go:580]     Audit-Id: 867bc651-c010-43f0-a2b2-348249e88481
	I0916 10:58:54.511612  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:54.511617  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:54.511623  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:54.511627  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:54.511633  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:54 GMT
	I0916 10:58:54.511799  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:55.008479  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:58:55.008504  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:55.008513  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:55.008517  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:55.010901  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:55.010936  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:55.010945  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:55.010952  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:55.010956  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:55.010961  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:55.010967  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:55 GMT
	I0916 10:58:55.010971  174032 round_trippers.go:580]     Audit-Id: 1c0c5186-db69-47b1-9c54-9e5800e238a0
	I0916 10:58:55.011136  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:58:55.011616  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:55.011628  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:55.011635  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:55.011640  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:55.013410  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:55.013431  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:55.013440  174032 round_trippers.go:580]     Audit-Id: 448b7ee3-b581-475e-8f47-2e1742a75aa1
	I0916 10:58:55.013446  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:55.013450  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:55.013453  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:55.013457  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:55.013461  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:55 GMT
	I0916 10:58:55.013557  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:55.508215  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:58:55.508239  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:55.508247  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:55.508251  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:55.510999  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:55.511057  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:55.511091  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:55.511099  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:55.511103  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:55.511106  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:55.511109  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:55 GMT
	I0916 10:58:55.511127  174032 round_trippers.go:580]     Audit-Id: 3581febc-00b0-4c48-991b-d4d68706f8a6
	I0916 10:58:55.511261  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:58:55.511838  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:58:55.511855  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:55.511862  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:55.511866  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:55.513771  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:58:55.513789  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:55.513796  174032 round_trippers.go:580]     Audit-Id: e9677950-8026-4a5a-9f78-b6d76e3ef814
	I0916 10:58:55.513800  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:55.513802  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:55.513806  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:55.513809  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:55.513812  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:55 GMT
	I0916 10:58:55.513980  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:58:56.007579  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:58:56.007601  174032 round_trippers.go:469] Request Headers:
	I0916 10:58:56.007609  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:58:56.007613  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:58:56.009942  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:58:56.010051  174032 round_trippers.go:577] Response Headers:
	I0916 10:58:56.010064  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:58:56.010071  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:58:56 GMT
	I0916 10:58:56.010077  174032 round_trippers.go:580]     Audit-Id: f88388ff-983d-4312-865a-f5b70b9e4e0d
	I0916 10:58:56.010084  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:58:56.010089  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:58:56.010094  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:58:56.010249  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	[... identical polling continues: GET /api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh followed by GET /api/v1/nodes/multinode-079070 at 10:58:56.0 and 10:58:56.5, every response 200 OK with unchanged bodies (pod resourceVersion "659", node resourceVersion "650") ...]
	I0916 10:58:56.513296  174032 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
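Throughout this stretch, pod_ready.go is simply re-fetching the CoreDNS pod every ~500 ms and checking its Ready condition, which stays False. Below is a sketch of an equivalent wait loop with client-go; the namespace, pod name, and poll interval are taken from the log above, while the kubeconfig path and the 4-minute timeout are assumptions for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// which is what pod_ready.go:103 is waiting for in the trace above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll every 500 ms, like the trace, until Ready or an assumed timeout.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-7c65d6cfc9-ft9gh", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}

From a shell, the same wait can be expressed as: kubectl -n kube-system wait --for=condition=Ready pod/coredns-7c65d6cfc9-ft9gh --timeout=4m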
	[... same GET pod / GET node pair repeated every ~500 ms from 10:58:57.0 through 10:58:58.5, all responses 200 OK, bodies unchanged ...]
	I0916 10:58:58.513365  174032 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
	[... same GET pod / GET node pair repeated every ~500 ms from 10:58:59.0 through 10:59:00.5, all responses 200 OK, bodies unchanged ...]
	I0916 10:59:00.514119  174032 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
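Each poll iteration also re-fetches the node object (multinode-079070). The log does not say why; a plausible reading, inferred from the request pattern alone, is that the readiness helper wants to notice the node itself going NotReady rather than polling out the full timeout. A sketch of that node-side check, reusing the imports and clientset from the previous snippet (so this fragment is illustrative, not standalone):

// isNodeReady reports whether the node's Ready condition is True.
func isNodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// Inside the poll loop above, one extra GET mirrors the trace:
//   node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "multinode-079070", metav1.GetOptions{})
//   if err == nil && !isNodeReady(node) {
//       // node went NotReady; give up early instead of polling to the timeout
//   }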
	[... same GET pod / GET node pair repeated every ~500 ms from 10:59:01.0 through 10:59:02.0, all responses 200 OK, bodies unchanged ...]
	I0916 10:59:02.508375  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:02.508401  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:02.508409  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:02.508414  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:02.510702  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:02.510724  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:02.510734  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:02 GMT
	I0916 10:59:02.510740  174032 round_trippers.go:580]     Audit-Id: 8e23b4f9-57ad-4af8-9fa6-6ed9791529dc
	I0916 10:59:02.510746  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:02.510751  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:02.510755  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:02.510760  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:02.510872  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:02.511528  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:02.511551  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:02.511560  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:02.511566  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:02.513266  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:02.513287  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:02.513296  174032 round_trippers.go:580]     Audit-Id: 0618fa7e-66cb-4e1d-bda3-80835eda3cfc
	I0916 10:59:02.513302  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:02.513307  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:02.513310  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:02.513313  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:02.513316  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:02 GMT
	I0916 10:59:02.513436  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:03.008165  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:03.008194  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:03.008203  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:03.008208  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:03.010485  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:03.010509  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:03.010517  174032 round_trippers.go:580]     Audit-Id: f7e3a63c-d576-4b65-a50f-b8d1040926fa
	I0916 10:59:03.010524  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:03.010534  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:03.010539  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:03.010544  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:03.010548  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:03 GMT
	I0916 10:59:03.010681  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:03.011149  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:03.011161  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:03.011168  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:03.011175  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:03.013023  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:03.013044  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:03.013053  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:03.013061  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:03.013065  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:03 GMT
	I0916 10:59:03.013069  174032 round_trippers.go:580]     Audit-Id: 044d1a7f-a4fe-49f7-9c43-1ecdd06b2ae5
	I0916 10:59:03.013074  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:03.013078  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:03.013217  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:03.013538  174032 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
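	(Editor's note on the loop above: this is minikube's pod-readiness wait — roughly every 500ms it GETs the coredns pod and the node, then the pod_ready line reports the pod's Ready condition, still "False" here. A minimal client-go sketch of the same polling pattern follows. It is an illustration under stated assumptions, not minikube's actual implementation; the kubeconfig path, pod name, and timeout are placeholders taken from or suggested by the log.)

	// Minimal sketch of a pod-readiness poll like the one logged above.
	// Assumptions: kubeconfig path is a placeholder; the pod name and the
	// 500ms/6m cadence mirror what the log timestamps show.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location (minikube manages its own context).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx := context.Background()
		// Poll every 500ms, give up after 6 minutes -- matching the
		// request cadence visible in the timestamps above.
		err = wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := clientset.CoreV1().Pods("kube-system").Get(
					ctx, "coredns-7c65d6cfc9-ft9gh", metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				// A pod is Ready when its Ready condition is True.
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			fmt.Println("pod never became Ready:", err)
			return
		}
		fmt.Println("pod is Ready")
	}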
	I0916 10:59:03.507829  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:03.507853  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:03.507863  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:03.507869  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:03.510214  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:03.510238  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:03.510248  174032 round_trippers.go:580]     Audit-Id: 669db378-6812-4fe0-a6a1-0cd94fb0179d
	I0916 10:59:03.510254  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:03.510259  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:03.510263  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:03.510266  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:03.510272  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:03 GMT
	I0916 10:59:03.510462  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:03.510958  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:03.510973  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:03.510980  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:03.510986  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:03.512948  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:03.512978  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:03.512991  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:03.513000  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:03.513005  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:03 GMT
	I0916 10:59:03.513011  174032 round_trippers.go:580]     Audit-Id: 5e37cb29-2df9-49ec-9de4-73830d86741d
	I0916 10:59:03.513015  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:03.513023  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:03.513158  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:04.007829  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:04.007855  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:04.007871  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:04.007876  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:04.010082  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:04.010100  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:04.010106  174032 round_trippers.go:580]     Audit-Id: f22da3ab-284b-4364-a6d0-8bc7f88cee18
	I0916 10:59:04.010110  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:04.010113  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:04.010116  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:04.010119  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:04.010122  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:04 GMT
	I0916 10:59:04.010318  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:04.010870  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:04.010896  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:04.010904  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:04.010908  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:04.012661  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:04.012689  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:04.012699  174032 round_trippers.go:580]     Audit-Id: 911fa2da-3458-43c5-965c-7e957d0e7d9a
	I0916 10:59:04.012703  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:04.012708  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:04.012712  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:04.012716  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:04.012726  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:04 GMT
	I0916 10:59:04.012878  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:04.507516  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:04.507546  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:04.507556  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:04.507562  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:04.509674  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:04.509738  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:04.509752  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:04.509758  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:04.509762  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:04.509766  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:04.509770  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:04 GMT
	I0916 10:59:04.509774  174032 round_trippers.go:580]     Audit-Id: 926a1957-5ced-49b8-a92c-acc48e435a8d
	I0916 10:59:04.509880  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:04.510383  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:04.510396  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:04.510402  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:04.510406  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:04.512257  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:04.512314  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:04.512329  174032 round_trippers.go:580]     Audit-Id: 4559b0e9-1d15-405c-ac11-44099c1fd0b5
	I0916 10:59:04.512337  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:04.512343  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:04.512348  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:04.512353  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:04.512357  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:04 GMT
	I0916 10:59:04.512463  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:05.008100  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:05.008128  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:05.008136  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:05.008141  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:05.010277  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:05.010332  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:05.010342  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:05.010350  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:05 GMT
	I0916 10:59:05.010357  174032 round_trippers.go:580]     Audit-Id: c448febc-65b1-41fb-bf62-83da4a31c540
	I0916 10:59:05.010362  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:05.010367  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:05.010378  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:05.010531  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:05.011065  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:05.011087  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:05.011097  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:05.011105  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:05.012931  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:05.012948  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:05.012958  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:05 GMT
	I0916 10:59:05.012963  174032 round_trippers.go:580]     Audit-Id: e1fecea2-f73a-49da-9546-d31436226953
	I0916 10:59:05.012967  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:05.012975  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:05.012980  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:05.012985  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:05.013082  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:05.507543  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:05.507573  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:05.507580  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:05.507584  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:05.510059  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:05.510083  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:05.510091  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:05.510097  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:05.510104  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:05.510110  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:05.510115  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:05 GMT
	I0916 10:59:05.510121  174032 round_trippers.go:580]     Audit-Id: ebe15480-a7fc-4ec9-b22d-803371136b5c
	I0916 10:59:05.510249  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:05.510753  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:05.510768  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:05.510775  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:05.510778  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:05.512779  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:05.512800  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:05.512810  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:05.512816  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:05.512825  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:05.512834  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:05 GMT
	I0916 10:59:05.512838  174032 round_trippers.go:580]     Audit-Id: 61ad546a-03f1-4fc1-b606-59c4d0cab54f
	I0916 10:59:05.512846  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:05.512956  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:05.513267  174032 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
	I0916 10:59:06.007470  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:06.007492  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:06.007503  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:06.007507  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:06.009933  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:06.009958  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:06.009967  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:06.009974  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:06.009979  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:06.009984  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:06.009988  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:06 GMT
	I0916 10:59:06.009994  174032 round_trippers.go:580]     Audit-Id: aeacaac6-f127-44d2-b9ab-19ac711e423e
	I0916 10:59:06.010109  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:06.010659  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:06.010674  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:06.010680  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:06.010684  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:06.012626  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:06.012647  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:06.012655  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:06.012659  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:06.012664  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:06.012667  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:06.012673  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:06 GMT
	I0916 10:59:06.012676  174032 round_trippers.go:580]     Audit-Id: 2e4f7037-87b6-48c8-96de-c22333bc1467
	I0916 10:59:06.012788  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:06.508470  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:06.508494  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:06.508502  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:06.508506  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:06.510853  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:06.510878  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:06.510888  174032 round_trippers.go:580]     Audit-Id: ae28996f-2598-4c6c-9fca-3e6e162c9519
	I0916 10:59:06.510894  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:06.510899  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:06.510904  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:06.510909  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:06.510915  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:06 GMT
	I0916 10:59:06.511022  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:06.511513  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:06.511531  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:06.511538  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:06.511540  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:06.513490  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:06.513510  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:06.513519  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:06.513524  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:06.513530  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:06 GMT
	I0916 10:59:06.513533  174032 round_trippers.go:580]     Audit-Id: da009747-790a-4a7d-8750-b6e253d583f0
	I0916 10:59:06.513537  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:06.513546  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:06.513660  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:07.008299  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:07.008324  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:07.008333  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:07.008352  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:07.010653  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:07.010684  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:07.010694  174032 round_trippers.go:580]     Audit-Id: f10c76ab-37e1-45b8-bd5d-31a22ffd6990
	I0916 10:59:07.010700  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:07.010706  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:07.010710  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:07.010715  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:07.010720  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:07 GMT
	I0916 10:59:07.010863  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:07.011353  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:07.011371  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:07.011378  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:07.011382  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:07.013468  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:07.013506  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:07.013514  174032 round_trippers.go:580]     Audit-Id: 53a41ce1-2ffc-47a9-a43e-60313f1f3865
	I0916 10:59:07.013520  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:07.013525  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:07.013532  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:07.013537  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:07.013540  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:07 GMT
	I0916 10:59:07.013640  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:07.508316  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:07.508342  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:07.508353  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:07.508359  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:07.510914  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:07.510939  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:07.510948  174032 round_trippers.go:580]     Audit-Id: e7f6eb82-e755-41f4-bf2c-0fdc85fc7c59
	I0916 10:59:07.510953  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:07.510957  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:07.510960  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:07.510965  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:07.510969  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:07 GMT
	I0916 10:59:07.511158  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:07.511659  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:07.511674  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:07.511681  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:07.511685  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:07.513598  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:07.513619  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:07.513625  174032 round_trippers.go:580]     Audit-Id: 3055dd82-99f2-4bfd-9451-fb518fa2255a
	I0916 10:59:07.513629  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:07.513632  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:07.513635  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:07.513639  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:07.513641  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:07 GMT
	I0916 10:59:07.513783  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:07.514087  174032 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
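	(Editor's note: the Ready verdict on the pod_ready lines is derived from the conditions array inside the Pod response bodies logged, truncated, above. A small sketch of that check follows; the embedded JSON is a hypothetical minimal stand-in, not the full API response.)

	// Sketch: extract the Ready condition from a Pod JSON body shaped like
	// the (truncated) response bodies above. The sample JSON is illustrative.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type podStatus struct {
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	}

	func main() {
		body := []byte(`{"status":{"conditions":[` +
			`{"type":"PodScheduled","status":"True"},` +
			`{"type":"Ready","status":"False"}]}}`)
		var p podStatus
		if err := json.Unmarshal(body, &p); err != nil {
			panic(err)
		}
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		// Matches the pod_ready.go:103 lines above, which report Ready=False.
		fmt.Printf("pod Ready=%v\n", ready)
	}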
	I0916 10:59:08.007443  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:08.007464  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:08.007472  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:08.007478  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:08.009869  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:08.009895  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:08.009905  174032 round_trippers.go:580]     Audit-Id: c27c0bdd-bcde-4f37-967a-5e6112a6b25e
	I0916 10:59:08.009913  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:08.009920  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:08.009926  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:08.009930  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:08.009934  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:08 GMT
	I0916 10:59:08.010060  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:08.010554  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:08.010570  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:08.010577  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:08.010583  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:08.012421  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:08.012440  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:08.012446  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:08.012451  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:08.012456  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:08 GMT
	I0916 10:59:08.012458  174032 round_trippers.go:580]     Audit-Id: 5c101987-769b-45dd-99b1-c4e685c274e2
	I0916 10:59:08.012461  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:08.012464  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:08.012582  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:08.507509  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:08.507535  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:08.507545  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:08.507550  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:08.509885  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:08.509905  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:08.509911  174032 round_trippers.go:580]     Audit-Id: 498c9167-7fb6-4f38-bac3-612e228099b2
	I0916 10:59:08.509915  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:08.509919  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:08.509921  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:08.509923  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:08.509926  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:08 GMT
	I0916 10:59:08.510028  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:08.510566  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:08.510583  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:08.510591  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:08.510596  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:08.512660  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:08.512686  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:08.512697  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:08 GMT
	I0916 10:59:08.512704  174032 round_trippers.go:580]     Audit-Id: 4e5dadc5-c2c1-46d1-abbd-4b37afa5eb15
	I0916 10:59:08.512709  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:08.512714  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:08.512718  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:08.512723  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:08.512927  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:09.007501  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:09.007527  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:09.007537  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:09.007543  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:09.009946  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:09.009974  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:09.009983  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:09.009987  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:09.009991  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:09.009995  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:09.009999  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:09 GMT
	I0916 10:59:09.010003  174032 round_trippers.go:580]     Audit-Id: f312ab5d-0c49-4563-b79e-35c124ec6dec
	I0916 10:59:09.010184  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:09.010775  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:09.010795  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:09.010802  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:09.010806  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:09.012817  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:09.012837  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:09.012844  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:09.012849  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:09 GMT
	I0916 10:59:09.012852  174032 round_trippers.go:580]     Audit-Id: dcf295e0-d36c-4d7f-af15-fafadb8b9013
	I0916 10:59:09.012857  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:09.012863  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:09.012867  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:09.012996  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:09.507641  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:09.507676  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:09.507686  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:09.507691  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:09.510030  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:09.510059  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:09.510068  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:09 GMT
	I0916 10:59:09.510075  174032 round_trippers.go:580]     Audit-Id: ff370e1f-0360-42fb-92f1-97d24edc6690
	I0916 10:59:09.510082  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:09.510087  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:09.510091  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:09.510097  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:09.510220  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:09.510815  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:09.510835  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:09.510845  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:09.510854  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:09.512800  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:09.512835  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:09.512846  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:09 GMT
	I0916 10:59:09.512852  174032 round_trippers.go:580]     Audit-Id: 9d840e44-591f-472e-be3a-fd1e61976c55
	I0916 10:59:09.512855  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:09.512858  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:09.512861  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:09.512863  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:09.512984  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:10.007542  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:10.007568  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:10.007576  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:10.007580  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:10.009891  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:10.009911  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:10.009917  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:10.009923  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:10 GMT
	I0916 10:59:10.009928  174032 round_trippers.go:580]     Audit-Id: 9aebbd21-6561-4b0a-932f-f793618a122c
	I0916 10:59:10.009933  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:10.009936  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:10.009940  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:10.010058  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:10.010523  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:10.010537  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:10.010544  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:10.010548  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:10.012610  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:10.012633  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:10.012643  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:10.012649  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:10.012655  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:10 GMT
	I0916 10:59:10.012662  174032 round_trippers.go:580]     Audit-Id: 0310f495-545e-47c7-b168-ea189a8cae74
	I0916 10:59:10.012667  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:10.012673  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:10.012801  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:10.013155  174032 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
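
The timestamps show the iterations ticking at roughly 500ms intervals against the 6m0s budget. Below is a sketch of that cadence using the upstream wait helper, assuming the podAndNodeReady helper from the sketch above; a production poller would also tolerate transient API errors rather than aborting on the first one:

// (same package as the previous sketch)
import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady polls until the pod (and its node) report Ready, using the
// half-second interval and six-minute timeout visible in the log.
func waitForPodReady(cs kubernetes.Interface, ns, pod, node string) error {
	return wait.PollUntilContextTimeout(context.Background(),
		500*time.Millisecond, // matches the request spacing above
		6*time.Minute,        // matches "waiting up to 6m0s"
		true,                 // run the check immediately, before the first sleep
		func(ctx context.Context) (bool, error) {
			return podAndNodeReady(ctx, cs, ns, pod, node)
		})
}
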
	I0916 10:59:10.507441  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:10.507465  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:10.507473  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:10.507478  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:10.510154  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:10.510175  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:10.510181  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:10.510186  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:10.510189  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:10.510192  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:10.510196  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:10 GMT
	I0916 10:59:10.510202  174032 round_trippers.go:580]     Audit-Id: bf3fc740-aebe-47ea-9e19-4875c4195150
	I0916 10:59:10.510370  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:10.510872  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:10.510889  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:10.510899  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:10.510904  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:10.512704  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:10.512720  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:10.512725  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:10.512728  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:10.512733  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:10.512739  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:10.512745  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:10 GMT
	I0916 10:59:10.512748  174032 round_trippers.go:580]     Audit-Id: 4631b03b-9bea-4da6-aebc-17e357252b0d
	I0916 10:59:10.512877  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:11.007539  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:11.007568  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:11.007578  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:11.007597  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:11.010241  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:11.010260  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:11.010266  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:11.010269  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:11.010273  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:11 GMT
	I0916 10:59:11.010276  174032 round_trippers.go:580]     Audit-Id: 46b79169-2d04-401b-b8b4-cfb70f3b7c9f
	I0916 10:59:11.010279  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:11.010283  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:11.010424  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:11.010909  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:11.010925  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:11.010932  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:11.010936  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:11.012867  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:11.012886  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:11.012892  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:11.012897  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:11.012900  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:11 GMT
	I0916 10:59:11.012904  174032 round_trippers.go:580]     Audit-Id: f3f9c5f3-e89f-4d08-8118-08f4020e3530
	I0916 10:59:11.012907  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:11.012911  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:11.013015  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:11.507648  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:11.507681  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:11.507691  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:11.507815  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:11.510050  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:11.510072  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:11.510081  174032 round_trippers.go:580]     Audit-Id: b82d5358-5aa2-4e08-af05-ee2c49b2a65f
	I0916 10:59:11.510088  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:11.510092  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:11.510096  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:11.510101  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:11.510106  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:11 GMT
	I0916 10:59:11.510206  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"659","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 10:59:11.510723  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:11.510740  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:11.510749  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:11.510755  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:11.512821  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:11.512848  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:11.512857  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:11 GMT
	I0916 10:59:11.512863  174032 round_trippers.go:580]     Audit-Id: 0d844f16-96bb-4088-8dfe-cd4825070415
	I0916 10:59:11.512868  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:11.512872  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:11.512876  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:11.512883  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:11.512992  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:12.008008  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:12.008031  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:12.008039  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:12.008043  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:12.010165  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:12.010191  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:12.010202  174032 round_trippers.go:580]     Audit-Id: b283ddb0-b16c-4d85-85e9-451b9f1fbee5
	I0916 10:59:12.010208  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:12.010214  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:12.010219  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:12.010224  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:12.010228  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:12 GMT
	I0916 10:59:12.010346  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"816","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6693 chars]
	I0916 10:59:12.010820  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:12.010835  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:12.010842  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:12.010848  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:12.012726  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:12.012746  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:12.012752  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:12.012757  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:12.012760  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:12 GMT
	I0916 10:59:12.012763  174032 round_trippers.go:580]     Audit-Id: 683fcac2-3b1b-4701-9998-b4984227cbce
	I0916 10:59:12.012767  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:12.012770  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:12.012917  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:12.013238  174032 pod_ready.go:93] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:12.013254  174032 pod_ready.go:82] duration metric: took 17.506001921s for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
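
The "duration metric: took …" lines here and the back-to-back "waiting up to 6m0s" entries that follow show the control-plane pods being checked strictly in sequence, each timed individually. A short sketch of that pattern, with pod names copied from the log and the helper from the sketches above:

import (
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
)

// waitForControlPlane waits for each pod in turn and prints a per-pod timing,
// analogous to the pod_ready.go duration metrics in the log.
func waitForControlPlane(cs kubernetes.Interface) error {
	pods := []string{
		"coredns-7c65d6cfc9-ft9gh",
		"etcd-multinode-079070",
		"kube-apiserver-multinode-079070",
		"kube-controller-manager-multinode-079070",
	}
	for _, name := range pods {
		start := time.Now()
		if err := waitForPodReady(cs, "kube-system", name, "multinode-079070"); err != nil {
			return err
		}
		fmt.Printf("duration metric: took %s for pod %q\n", time.Since(start), name)
	}
	return nil
}
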
	I0916 10:59:12.013272  174032 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:12.013342  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-079070
	I0916 10:59:12.013351  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:12.013365  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:12.013375  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:12.015345  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:12.015367  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:12.015377  174032 round_trippers.go:580]     Audit-Id: ceda42a3-6fd2-466e-8624-f1f2c572b623
	I0916 10:59:12.015383  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:12.015387  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:12.015393  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:12.015398  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:12.015405  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:12 GMT
	I0916 10:59:12.015544  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-079070","namespace":"kube-system","uid":"fdcbee48-0d3b-47d7-acd2-298979345c7a","resourceVersion":"749","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.mirror":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.seen":"2024-09-16T10:56:25.844758522Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6653 chars]
	I0916 10:59:12.015963  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:12.015975  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:12.015983  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:12.016019  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:12.017963  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:12.017978  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:12.017985  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:12.017988  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:12 GMT
	I0916 10:59:12.017991  174032 round_trippers.go:580]     Audit-Id: f7621a46-e441-4460-b31b-a63b4261fdd1
	I0916 10:59:12.017994  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:12.017997  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:12.018001  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:12.018194  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:12.018572  174032 pod_ready.go:93] pod "etcd-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:12.018592  174032 pod_ready.go:82] duration metric: took 5.309138ms for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:12.018616  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:12.018684  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:59:12.018694  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:12.018704  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:12.018712  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:12.020587  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:12.020609  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:12.020619  174032 round_trippers.go:580]     Audit-Id: 03401f5f-019e-40bb-bc0e-49ad241f40ef
	I0916 10:59:12.020625  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:12.020631  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:12.020641  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:12.020644  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:12.020648  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:12 GMT
	I0916 10:59:12.020791  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"747","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8731 chars]
	I0916 10:59:12.021197  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:12.021209  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:12.021215  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:12.021219  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:12.024664  174032 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:59:12.024690  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:12.024699  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:12.024704  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:12.024710  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:12.024716  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:12 GMT
	I0916 10:59:12.024720  174032 round_trippers.go:580]     Audit-Id: 501672ec-b687-49f1-9b15-d43317ddf14f
	I0916 10:59:12.024725  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:12.024819  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:12.025125  174032 pod_ready.go:93] pod "kube-apiserver-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:12.025140  174032 pod_ready.go:82] duration metric: took 6.514049ms for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:12.025150  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:12.025209  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-079070
	I0916 10:59:12.025217  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:12.025223  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:12.025227  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:12.026885  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:12.026899  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:12.026904  174032 round_trippers.go:580]     Audit-Id: e285d752-357e-4a93-958b-3ec978a6d662
	I0916 10:59:12.026908  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:12.026911  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:12.026914  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:12.026916  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:12.026919  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:12 GMT
	I0916 10:59:12.027110  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-079070","namespace":"kube-system","uid":"44a2f17c-654a-4d64-b474-70ce475c3afe","resourceVersion":"751","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.mirror":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.seen":"2024-09-16T10:56:25.844752735Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8306 chars]
	I0916 10:59:12.027549  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:12.027562  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:12.027570  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:12.027573  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:12.029091  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:12.029125  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:12.029133  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:12 GMT
	I0916 10:59:12.029138  174032 round_trippers.go:580]     Audit-Id: 4bb3b19b-88d5-459e-80cf-bfdd0c5dbc9b
	I0916 10:59:12.029142  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:12.029146  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:12.029154  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:12.029162  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:12.029282  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:12.029647  174032 pod_ready.go:93] pod "kube-controller-manager-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:12.029670  174032 pod_ready.go:82] duration metric: took 4.511279ms for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:12.029681  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:12.029730  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vhmt
	I0916 10:59:12.029738  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:12.029744  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:12.029747  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:12.031401  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:12.031422  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:12.031432  174032 round_trippers.go:580]     Audit-Id: 7c357af2-d617-4e02-923d-f8040d764133
	I0916 10:59:12.031437  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:12.031440  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:12.031443  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:12.031446  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:12.031448  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:12 GMT
	I0916 10:59:12.031571  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2vhmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"6f3faf85-04e9-4840-855d-dd1ef9d4e463","resourceVersion":"664","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6388 chars]
	I0916 10:59:12.031995  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:12.032008  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:12.032015  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:12.032019  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:12.033559  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:12.033575  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:12.033583  174032 round_trippers.go:580]     Audit-Id: 13086788-d03b-4849-a143-9d805ea10670
	I0916 10:59:12.033590  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:12.033594  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:12.033598  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:12.033603  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:12.033608  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:12 GMT
	I0916 10:59:12.033692  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:12.033969  174032 pod_ready.go:93] pod "kube-proxy-2vhmt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:12.033984  174032 pod_ready.go:82] duration metric: took 4.297762ms for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:12.033993  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9z4qh" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:12.208387  174032 request.go:632] Waited for 174.334525ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 10:59:12.208482  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 10:59:12.208494  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:12.208501  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:12.208509  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:12.210925  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:12.210945  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:12.210953  174032 round_trippers.go:580]     Audit-Id: abac385b-bc72-460e-ae9e-616cbfb0054c
	I0916 10:59:12.210958  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:12.210966  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:12.210972  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:12.210977  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:12.210981  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:12 GMT
	I0916 10:59:12.211093  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9z4qh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d","resourceVersion":"580","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6183 chars]
	I0916 10:59:12.408928  174032 request.go:632] Waited for 197.297452ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:12.409016  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:12.409024  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:12.409034  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:12.409045  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:12.411222  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:12.411243  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:12.411249  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:12.411252  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:12.411256  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:12.411261  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:12 GMT
	I0916 10:59:12.411265  174032 round_trippers.go:580]     Audit-Id: b015dcf7-e0cd-43ce-97a1-e633e1a99cd0
	I0916 10:59:12.411269  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:12.411392  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m03","uid":"6e80e382-2eee-493d-a8de-e048ca27cfc5","resourceVersion":"606","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_57_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fiel
dsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 4841 chars]
	I0916 10:59:12.411888  174032 pod_ready.go:93] pod "kube-proxy-9z4qh" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:12.411917  174032 pod_ready.go:82] duration metric: took 377.916778ms for pod "kube-proxy-9z4qh" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:12.411932  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xkr65" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:12.608785  174032 request.go:632] Waited for 196.773542ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:59:12.608845  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:59:12.608851  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:12.608859  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:12.608872  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:12.611267  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:12.611295  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:12.611303  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:12.611315  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:12.611320  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:12 GMT
	I0916 10:59:12.611324  174032 round_trippers.go:580]     Audit-Id: ca63bbfe-4d1b-4ed2-a703-9a23caba9781
	I0916 10:59:12.611328  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:12.611332  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:12.611502  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"768","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6396 chars]
	I0916 10:59:12.808356  174032 request.go:632] Waited for 196.362809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:59:12.808440  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:59:12.808446  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:12.808455  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:12.808468  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:12.810777  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:12.810801  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:12.810808  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:12.810814  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:12 GMT
	I0916 10:59:12.810820  174032 round_trippers.go:580]     Audit-Id: 0d65e8f7-030a-4c9c-9f9c-a5db5a14c38d
	I0916 10:59:12.810824  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:12.810830  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:12.810835  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:12.810953  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"757","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fiel
dsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 5023 chars]
	I0916 10:59:12.811273  174032 pod_ready.go:93] pod "kube-proxy-xkr65" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:12.811290  174032 pod_ready.go:82] duration metric: took 399.351319ms for pod "kube-proxy-xkr65" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:12.811300  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:13.008305  174032 request.go:632] Waited for 196.925331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 10:59:13.008382  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 10:59:13.008409  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:13.008420  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:13.008424  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:13.010817  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:13.010839  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:13.010845  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:13.010850  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:13.010854  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:13.010856  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:13 GMT
	I0916 10:59:13.010858  174032 round_trippers.go:580]     Audit-Id: 36b1aa46-e074-4aa6-9e1b-4a387d9b1dfe
	I0916 10:59:13.010863  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:13.011009  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"740","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5188 chars]
	I0916 10:59:13.208868  174032 request.go:632] Waited for 197.428482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:13.208968  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:13.208986  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:13.209001  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:13.209012  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:13.211272  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:13.211293  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:13.211302  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:13.211321  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:13 GMT
	I0916 10:59:13.211327  174032 round_trippers.go:580]     Audit-Id: 70baf94b-372b-4e90-9919-5f0f44163fc5
	I0916 10:59:13.211334  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:13.211342  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:13.211347  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:13.211465  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:13.211889  174032 pod_ready.go:93] pod "kube-scheduler-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:13.211910  174032 pod_ready.go:82] duration metric: took 400.601809ms for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:13.211923  174032 pod_ready.go:39] duration metric: took 18.712462933s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
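The pod_ready.go entries above record a poll loop: each system-critical pod is fetched, its PodReady condition is inspected, and the pod's node is fetched before the pod is declared "Ready". A minimal client-go sketch of that style of check follows; the helper names, the 2s poll interval, and the hard-coded pod name are illustrative, not minikube's actual pod_ready.go.

```go
// Hypothetical sketch of a pod-readiness wait like the one logged above,
// using client-go. Assumes a reachable cluster via the default kubeconfig.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until Ready, with the same 6m0s ceiling the log mentions.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-2vhmt", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			return isPodReady(pod), nil
		})
	fmt.Println("ready:", err == nil)
}
```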
	I0916 10:59:13.211947  174032 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:59:13.212005  174032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:59:13.223013  174032 system_svc.go:56] duration metric: took 11.056214ms WaitForService to wait for kubelet
	I0916 10:59:13.223044  174032 kubeadm.go:582] duration metric: took 18.818970589s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:59:13.223064  174032 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:59:13.408514  174032 request.go:632] Waited for 185.356385ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0916 10:59:13.408600  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:59:13.408606  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:13.408613  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:13.408617  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:13.410885  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:13.410909  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:13.410919  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:13.410925  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:13.410928  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:13.410932  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:13 GMT
	I0916 10:59:13.410937  174032 round_trippers.go:580]     Audit-Id: 57943b82-16d0-4c85-a659-c3da2049992b
	I0916 10:59:13.410941  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:13.411224  174032 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"823"},"items":[{"metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"manag
edFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 17136 chars]
	I0916 10:59:13.411922  174032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:59:13.411943  174032 node_conditions.go:123] node cpu capacity is 8
	I0916 10:59:13.411957  174032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:59:13.411963  174032 node_conditions.go:123] node cpu capacity is 8
	I0916 10:59:13.411968  174032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:59:13.411974  174032 node_conditions.go:123] node cpu capacity is 8
	I0916 10:59:13.411980  174032 node_conditions.go:105] duration metric: took 188.910094ms to run NodePressure ...
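The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above are emitted by client-go's token-bucket rate limiter, not by the API server: once the client outruns its configured QPS, requests queue locally before being sent. A fragment showing where those knobs live (illustrative values; reuses the imports from the readiness sketch above):

```go
// Client-side throttling is governed by rest.Config. When QPS and Burst are
// left at zero, client-go falls back to its defaults (5 QPS, burst 10); at
// 5 QPS, back-to-back requests wait roughly 200ms each, which matches the
// ~174-197ms delays logged above.
cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
if err != nil {
	panic(err)
}
cfg.QPS = 50    // illustrative: raise the token-bucket refill rate
cfg.Burst = 100 // illustrative: allow short bursts above QPS
clientset, err := kubernetes.NewForConfig(cfg)
if err != nil {
	panic(err)
}
_ = clientset
```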
	I0916 10:59:13.411994  174032 start.go:241] waiting for startup goroutines ...
	I0916 10:59:13.412023  174032 start.go:255] writing updated cluster config ...
	I0916 10:59:13.414049  174032 out.go:201] 
	I0916 10:59:13.415518  174032 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:59:13.415629  174032 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 10:59:13.417097  174032 out.go:177] * Starting "multinode-079070-m03" worker node in "multinode-079070" cluster
	I0916 10:59:13.418378  174032 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:59:13.419796  174032 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:59:13.421526  174032 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:59:13.421566  174032 cache.go:56] Caching tarball of preloaded images
	I0916 10:59:13.421566  174032 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:59:13.421682  174032 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:59:13.421700  174032 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:59:13.421843  174032 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	W0916 10:59:13.441182  174032 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:59:13.441201  174032 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:59:13.441291  174032 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:59:13.441325  174032 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:59:13.441330  174032 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:59:13.441337  174032 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:59:13.441342  174032 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:59:13.442419  174032 image.go:273] response: 
	I0916 10:59:13.503895  174032 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:59:13.503933  174032 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:59:13.503971  174032 start.go:360] acquireMachinesLock for multinode-079070-m03: {Name:mkcbeccd1e9d15374a4cd20dc5b03524fb0afaa2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:59:13.504042  174032 start.go:364] duration metric: took 46.332µs to acquireMachinesLock for "multinode-079070-m03"
	I0916 10:59:13.504067  174032 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:59:13.504077  174032 fix.go:54] fixHost starting: m03
	I0916 10:59:13.504323  174032 cli_runner.go:164] Run: docker container inspect multinode-079070-m03 --format={{.State.Status}}
	I0916 10:59:13.521518  174032 fix.go:112] recreateIfNeeded on multinode-079070-m03: state=Stopped err=<nil>
	W0916 10:59:13.521548  174032 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:59:13.523674  174032 out.go:177] * Restarting existing docker container for "multinode-079070-m03" ...
	I0916 10:59:13.524969  174032 cli_runner.go:164] Run: docker start multinode-079070-m03
	I0916 10:59:13.819597  174032 cli_runner.go:164] Run: docker container inspect multinode-079070-m03 --format={{.State.Status}}
	I0916 10:59:13.838548  174032 kic.go:430] container "multinode-079070-m03" state is running.
	I0916 10:59:13.838987  174032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m03
	I0916 10:59:13.858696  174032 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 10:59:13.859016  174032 machine.go:93] provisionDockerMachine start ...
	I0916 10:59:13.859084  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m03
	I0916 10:59:13.877877  174032 main.go:141] libmachine: Using SSH client type: native
	I0916 10:59:13.878129  174032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32938 <nil> <nil>}
	I0916 10:59:13.878153  174032 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:59:13.878837  174032 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54588->127.0.0.1:32938: read: connection reset by peer
	I0916 10:59:17.011670  174032 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070-m03
	
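The failed handshake at 10:59:13 ("connection reset by peer") followed by a clean result four seconds later is the usual dial-until-sshd-answers pattern after `docker start`: the mapped host port exists before the SSH daemon inside the container is accepting connections. A sketch of such a retry loop with golang.org/x/crypto/ssh; dialWithRetry is a hypothetical helper, not the libmachine code doing the work here.

```go
import (
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps dialing until sshd inside the freshly started
// container answers; early attempts may fail with "connection reset by peer".
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err
		time.Sleep(time.Second)
	}
	return nil, lastErr
}
```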
	I0916 10:59:17.011702  174032 ubuntu.go:169] provisioning hostname "multinode-079070-m03"
	I0916 10:59:17.011860  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m03
	I0916 10:59:17.029485  174032 main.go:141] libmachine: Using SSH client type: native
	I0916 10:59:17.029708  174032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32938 <nil> <nil>}
	I0916 10:59:17.029727  174032 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-079070-m03 && echo "multinode-079070-m03" | sudo tee /etc/hostname
	I0916 10:59:17.175492  174032 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070-m03
	
	I0916 10:59:17.175586  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m03
	I0916 10:59:17.193070  174032 main.go:141] libmachine: Using SSH client type: native
	I0916 10:59:17.193250  174032 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32938 <nil> <nil>}
	I0916 10:59:17.193271  174032 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-079070-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-079070-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-079070-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 10:59:17.328139  174032 main.go:141] libmachine: SSH cmd err, output: <nil>: 
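The guarded script above is idempotent: if /etc/hosts already maps the hostname it does nothing; otherwise it rewrites the Debian/Ubuntu-style `127.0.1.1` self-entry in place, or appends one if none exists, so the machine resolves its own name without DNS. A hypothetical Go rendering of the same template for an arbitrary hostname:

```go
import "fmt"

// hostsPatch renders the idempotent /etc/hosts fixup seen in the log for an
// arbitrary hostname (illustrative helper, not minikube's template).
func hostsPatch(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}
```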
	I0916 10:59:17.328176  174032 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 10:59:17.328195  174032 ubuntu.go:177] setting up certificates
	I0916 10:59:17.328207  174032 provision.go:84] configureAuth start
	I0916 10:59:17.328255  174032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m03
	I0916 10:59:17.345525  174032 provision.go:143] copyHostCerts
	I0916 10:59:17.345560  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:59:17.345605  174032 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 10:59:17.345614  174032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 10:59:17.345675  174032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 10:59:17.345756  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:59:17.345774  174032 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 10:59:17.345781  174032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 10:59:17.345806  174032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 10:59:17.345847  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:59:17.345864  174032 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 10:59:17.345869  174032 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 10:59:17.345891  174032 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 10:59:17.345938  174032 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.multinode-079070-m03 san=[127.0.0.1 192.168.67.4 localhost minikube multinode-079070-m03]
	I0916 10:59:17.483670  174032 provision.go:177] copyRemoteCerts
	I0916 10:59:17.483749  174032 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 10:59:17.483797  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m03
	I0916 10:59:17.501969  174032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m03/id_rsa Username:docker}
	I0916 10:59:17.596593  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 10:59:17.596663  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 10:59:17.619563  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 10:59:17.619641  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0916 10:59:17.644666  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 10:59:17.644752  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 10:59:17.669276  174032 provision.go:87] duration metric: took 341.055611ms to configureAuth
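The ssh_runner.go:362 "scp" lines above copy each PEM to the machine over the SSH connection established earlier. One common way to implement such a push, sketched with golang.org/x/crypto/ssh (pushFile is a hypothetical helper, not necessarily how ssh_runner does it):

```go
import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushFile streams a local file into `sudo tee` on the remote host over an
// existing SSH client, roughly what an scp-style cert copy needs.
func pushFile(client *ssh.Client, local, remote string) error {
	data, err := os.ReadFile(local)
	if err != nil {
		return err
	}
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %q >/dev/null", remote))
}
```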
	I0916 10:59:17.669308  174032 ubuntu.go:193] setting minikube options for container-runtime
	I0916 10:59:17.669546  174032 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:59:17.669560  174032 machine.go:96] duration metric: took 3.8105274s to provisionDockerMachine
	I0916 10:59:17.669568  174032 start.go:293] postStartSetup for "multinode-079070-m03" (driver="docker")
	I0916 10:59:17.669582  174032 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 10:59:17.669746  174032 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 10:59:17.669809  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m03
	I0916 10:59:17.687419  174032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m03/id_rsa Username:docker}
	I0916 10:59:17.785260  174032 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 10:59:17.788676  174032 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 10:59:17.788698  174032 command_runner.go:130] > NAME="Ubuntu"
	I0916 10:59:17.788704  174032 command_runner.go:130] > VERSION_ID="22.04"
	I0916 10:59:17.788717  174032 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 10:59:17.788726  174032 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 10:59:17.788731  174032 command_runner.go:130] > ID=ubuntu
	I0916 10:59:17.788737  174032 command_runner.go:130] > ID_LIKE=debian
	I0916 10:59:17.788743  174032 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 10:59:17.788751  174032 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 10:59:17.788763  174032 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 10:59:17.788771  174032 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 10:59:17.788777  174032 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 10:59:17.788875  174032 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 10:59:17.788911  174032 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 10:59:17.788927  174032 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 10:59:17.788936  174032 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 10:59:17.788951  174032 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 10:59:17.789016  174032 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 10:59:17.789119  174032 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 10:59:17.789131  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 10:59:17.789226  174032 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 10:59:17.797673  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:59:17.820821  174032 start.go:296] duration metric: took 151.234077ms for postStartSetup
	I0916 10:59:17.820910  174032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:59:17.820958  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m03
	I0916 10:59:17.841132  174032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m03/id_rsa Username:docker}
	I0916 10:59:17.932545  174032 command_runner.go:130] > 32%
	I0916 10:59:17.932617  174032 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 10:59:17.937059  174032 command_runner.go:130] > 201G
	I0916 10:59:17.937090  174032 fix.go:56] duration metric: took 4.433012932s for fixHost
	I0916 10:59:17.937100  174032 start.go:83] releasing machines lock for "multinode-079070-m03", held for 4.433046922s
	I0916 10:59:17.937157  174032 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m03
	I0916 10:59:17.956769  174032 out.go:177] * Found network options:
	I0916 10:59:17.958285  174032 out.go:177]   - NO_PROXY=192.168.67.2,192.168.67.3
	W0916 10:59:17.959617  174032 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:59:17.959643  174032 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:59:17.959670  174032 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 10:59:17.959681  174032 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 10:59:17.959768  174032 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 10:59:17.959810  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m03
	I0916 10:59:17.959853  174032 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 10:59:17.959915  174032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m03
	I0916 10:59:17.978461  174032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m03/id_rsa Username:docker}
	I0916 10:59:17.979720  174032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32938 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m03/id_rsa Username:docker}
	I0916 10:59:18.068164  174032 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0916 10:59:18.068191  174032 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0916 10:59:18.068200  174032 command_runner.go:130] > Device: ech/236d	Inode: 830497      Links: 1
	I0916 10:59:18.068211  174032 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:59:18.068222  174032 command_runner.go:130] > Access: 2024-09-16 10:59:14.249609249 +0000
	I0916 10:59:18.068229  174032 command_runner.go:130] > Modify: 2024-09-16 10:57:50.930266128 +0000
	I0916 10:59:18.068236  174032 command_runner.go:130] > Change: 2024-09-16 10:57:50.930266128 +0000
	I0916 10:59:18.068244  174032 command_runner.go:130] >  Birth: 2024-09-16 10:57:50.930266128 +0000
	I0916 10:59:18.068486  174032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 10:59:18.147496  174032 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 10:59:18.147562  174032 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 10:59:18.147632  174032 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 10:59:18.156094  174032 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 10:59:18.156119  174032 start.go:495] detecting cgroup driver to use...
	I0916 10:59:18.156158  174032 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 10:59:18.156205  174032 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 10:59:18.168131  174032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 10:59:18.178700  174032 docker.go:217] disabling cri-docker service (if available) ...
	I0916 10:59:18.178762  174032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 10:59:18.190353  174032 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 10:59:18.200952  174032 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 10:59:18.268414  174032 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 10:59:18.342456  174032 docker.go:233] disabling docker service ...
	I0916 10:59:18.342526  174032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 10:59:18.354246  174032 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 10:59:18.365076  174032 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 10:59:18.444442  174032 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 10:59:18.527398  174032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 10:59:18.538160  174032 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 10:59:18.553485  174032 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0916 10:59:18.553584  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 10:59:18.563221  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 10:59:18.572687  174032 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 10:59:18.572759  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 10:59:18.582592  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:59:18.592081  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 10:59:18.601258  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 10:59:18.611246  174032 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 10:59:18.620399  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 10:59:18.629799  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 10:59:18.640437  174032 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
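The sed pipeline above rewrites /etc/containerd/config.toml in place: it pins the sandbox image, forces the runc v2 runtime, disables the systemd cgroup driver (matching the "cgroupfs" driver detected on the host), points the CNI conf_dir at /etc/cni/net.d, and re-enables unprivileged ports. Reconstructed from those substitutions (not captured from the machine), the CRI plugin section should end up roughly like this:

```toml
[plugins."io.containerd.grpc.v1.cri"]
  enable_unprivileged_ports = true
  sandbox_image = "registry.k8s.io/pause:3.10"
  restrict_oom_score_adj = false
  [plugins."io.containerd.grpc.v1.cri".cni]
    conf_dir = "/etc/cni/net.d"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false
```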
	I0916 10:59:18.650121  174032 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 10:59:18.657307  174032 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 10:59:18.657999  174032 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 10:59:18.666929  174032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:59:18.745099  174032 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 10:59:18.842820  174032 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 10:59:18.842882  174032 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 10:59:18.846455  174032 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0916 10:59:18.846475  174032 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 10:59:18.846481  174032 command_runner.go:130] > Device: 100054h/1048660d	Inode: 169         Links: 1
	I0916 10:59:18.846488  174032 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 10:59:18.846494  174032 command_runner.go:130] > Access: 2024-09-16 10:59:18.810011167 +0000
	I0916 10:59:18.846499  174032 command_runner.go:130] > Modify: 2024-09-16 10:59:18.810011167 +0000
	I0916 10:59:18.846503  174032 command_runner.go:130] > Change: 2024-09-16 10:59:18.810011167 +0000
	I0916 10:59:18.846507  174032 command_runner.go:130] >  Birth: -
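"Will wait 60s for socket path" plus the stat call above amount to a poll-until-the-socket-exists step, needed because containerd was just restarted. An illustrative fragment (assumes the os and log packages and the same wait helper as the readiness sketch above):

```go
// Poll up to 60s for the containerd socket to reappear after the restart.
err := wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 60*time.Second, true,
	func(ctx context.Context) (bool, error) {
		_, statErr := os.Stat("/run/containerd/containerd.sock")
		return statErr == nil, nil
	})
if err != nil {
	log.Fatalf("containerd socket never appeared: %v", err)
}
```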
	I0916 10:59:18.846523  174032 start.go:563] Will wait 60s for crictl version
	I0916 10:59:18.846565  174032 ssh_runner.go:195] Run: which crictl
	I0916 10:59:18.849700  174032 command_runner.go:130] > /usr/bin/crictl
	I0916 10:59:18.849775  174032 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 10:59:18.879971  174032 command_runner.go:130] > Version:  0.1.0
	I0916 10:59:18.879991  174032 command_runner.go:130] > RuntimeName:  containerd
	I0916 10:59:18.879998  174032 command_runner.go:130] > RuntimeVersion:  1.7.22
	I0916 10:59:18.880002  174032 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 10:59:18.882055  174032 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 10:59:18.882105  174032 ssh_runner.go:195] Run: containerd --version
	I0916 10:59:18.903868  174032 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:59:18.903951  174032 ssh_runner.go:195] Run: containerd --version
	I0916 10:59:18.924859  174032 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 10:59:18.928378  174032 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 10:59:18.929610  174032 out.go:177]   - env NO_PROXY=192.168.67.2
	I0916 10:59:18.930895  174032 out.go:177]   - env NO_PROXY=192.168.67.2,192.168.67.3
	I0916 10:59:18.932188  174032 cli_runner.go:164] Run: docker network inspect multinode-079070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 10:59:18.949408  174032 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 10:59:18.953854  174032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 10:59:18.964562  174032 mustload.go:65] Loading cluster: multinode-079070
	I0916 10:59:18.964762  174032 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:59:18.964954  174032 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:59:18.982965  174032 host.go:66] Checking if "multinode-079070" exists ...
	I0916 10:59:18.983235  174032 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070 for IP: 192.168.67.4
	I0916 10:59:18.983246  174032 certs.go:194] generating shared ca certs ...
	I0916 10:59:18.983260  174032 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:59:18.983369  174032 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 10:59:18.983405  174032 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 10:59:18.983415  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 10:59:18.983427  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 10:59:18.983439  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 10:59:18.983451  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 10:59:18.983497  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 10:59:18.983523  174032 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 10:59:18.983533  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 10:59:18.983554  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 10:59:18.983593  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 10:59:18.983616  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 10:59:18.983656  174032 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 10:59:18.983684  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:59:18.983697  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 10:59:18.983710  174032 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 10:59:18.983727  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 10:59:19.008089  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 10:59:19.030985  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 10:59:19.053526  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 10:59:19.076838  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 10:59:19.100068  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 10:59:19.123592  174032 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 10:59:19.146345  174032 ssh_runner.go:195] Run: openssl version
	I0916 10:59:19.151206  174032 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 10:59:19.151443  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 10:59:19.161000  174032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:59:19.164797  174032 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:59:19.164846  174032 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:59:19.164883  174032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 10:59:19.171772  174032 command_runner.go:130] > b5213941
	I0916 10:59:19.171883  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 10:59:19.180560  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 10:59:19.189975  174032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 10:59:19.193423  174032 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:59:19.193459  174032 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 10:59:19.193505  174032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 10:59:19.199851  174032 command_runner.go:130] > 51391683
	I0916 10:59:19.200057  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 10:59:19.208698  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 10:59:19.218097  174032 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 10:59:19.221515  174032 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:59:19.221580  174032 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 10:59:19.221624  174032 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 10:59:19.227937  174032 command_runner.go:130] > 3ec20f2e
	I0916 10:59:19.228079  174032 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
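The three openssl invocations above implement OpenSSL's hashed-directory trust layout: "openssl x509 -hash -noout" prints the certificate's subject-name hash (b5213941, 51391683, 3ec20f2e here), and a <hash>.0 symlink under /etc/ssl/certs is what lets verifiers locate the CA. A minimal Go sketch of that step, shelling out to openssl exactly as the log does (illustrative only; minikube's real version lives in certs.go):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into /etc/ssl/certs under its OpenSSL
// subject-hash name (e.g. b5213941.0), the lookup scheme OpenSSL
// uses for its default verify directory.
func installCA(certPath string) error {
	// openssl x509 -hash -noout prints the subject-name hash,
	// matching the commands in the log above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // mirror ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}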
	I0916 10:59:19.236881  174032 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 10:59:19.240412  174032 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:59:19.240449  174032 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 10:59:19.240493  174032 kubeadm.go:934] updating node {m03 192.168.67.4 0 v1.31.1  false true} ...
	I0916 10:59:19.240592  174032 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-079070-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 10:59:19.240644  174032 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 10:59:19.248909  174032 command_runner.go:130] > kubeadm
	I0916 10:59:19.248925  174032 command_runner.go:130] > kubectl
	I0916 10:59:19.248929  174032 command_runner.go:130] > kubelet
	I0916 10:59:19.248947  174032 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 10:59:19.248999  174032 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 10:59:19.257352  174032 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0916 10:59:19.275039  174032 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
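The kubelet unit logged above is rendered per node before being scp'd into place: --hostname-override and --node-ip are the only flags that differ across the cluster's members. A text/template sketch of that rendering (the struct and its field names are hypothetical, not minikube's actual types):

package main

import (
	"os"
	"text/template"
)

// dropIn mirrors the [Service] section shown in the log; only the
// hostname and node IP vary between nodes of the same cluster.
var dropIn = template.Must(template.New("kubelet").Parse(
	"[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet" +
		" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf" +
		" --config=/var/lib/kubelet/config.yaml" +
		" --hostname-override={{.Hostname}}" +
		" --kubeconfig=/etc/kubernetes/kubelet.conf" +
		" --node-ip={{.NodeIP}}\n"))

func main() {
	// Illustrative values taken from the log entry for multinode-079070-m03.
	_ = dropIn.Execute(os.Stdout, struct{ Version, Hostname, NodeIP string }{
		"v1.31.1", "multinode-079070-m03", "192.168.67.4",
	})
}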
	I0916 10:59:19.293404  174032 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 10:59:19.296848  174032 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
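The bash pipeline above makes the control-plane.minikube.internal entry idempotent: any existing line for that name is filtered out of /etc/hosts before the fresh IP mapping is appended. A rough Go equivalent of the rewrite (a sketch assuming local file access; minikube runs the command over SSH as shown):

package main

import (
	"os"
	"strings"
)

// pinHost drops any existing "<ip>\t<host>" line and appends a fresh
// one, matching the grep -v / echo pipeline in the log above.
func pinHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.67.2", "control-plane.minikube.internal"); err != nil {
		os.Exit(1)
	}
}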
	I0916 10:59:19.307536  174032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:59:19.388547  174032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:59:19.399978  174032 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.67.4 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0916 10:59:19.400264  174032 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:59:19.402321  174032 out.go:177] * Verifying Kubernetes components...
	I0916 10:59:19.403637  174032 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 10:59:19.475771  174032 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 10:59:19.486903  174032 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:59:19.487108  174032 kapi.go:59] client config for multinode-079070: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 10:59:19.487346  174032 node_ready.go:35] waiting up to 6m0s for node "multinode-079070-m03" to be "Ready" ...
	I0916 10:59:19.487427  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:19.487437  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:19.487444  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:19.487449  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:19.489754  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:19.489776  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:19.489783  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:19 GMT
	I0916 10:59:19.489786  174032 round_trippers.go:580]     Audit-Id: 3bf28568-f348-4e0a-a315-3dd3252270c0
	I0916 10:59:19.489788  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:19.489792  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:19.489795  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:19.489797  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:19.489964  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m03","uid":"6e80e382-2eee-493d-a8de-e048ca27cfc5","resourceVersion":"827","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_57_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} [truncated 5500 chars]
	I0916 10:59:19.987591  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:19.987616  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:19.987629  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:19.987634  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:19.989876  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:19.989897  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:19.989906  174032 round_trippers.go:580]     Audit-Id: a6792f44-0e3c-471d-96c7-885e3a8b69bd
	I0916 10:59:19.989911  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:19.989916  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:19.989920  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:19.989924  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:19.989928  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:19 GMT
	I0916 10:59:19.990105  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m03","uid":"6e80e382-2eee-493d-a8de-e048ca27cfc5","resourceVersion":"827","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_57_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} [truncated 5500 chars]
	I0916 10:59:20.487787  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:20.487812  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:20.487819  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:20.487822  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:20.489999  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:20.490020  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:20.490027  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:20 GMT
	I0916 10:59:20.490033  174032 round_trippers.go:580]     Audit-Id: da5b5d78-4961-4c38-829a-4e1525e09c94
	I0916 10:59:20.490039  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:20.490044  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:20.490049  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:20.490055  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:20.490244  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m03","uid":"6e80e382-2eee-493d-a8de-e048ca27cfc5","resourceVersion":"839","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_57_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} [truncated 5597 chars]
	I0916 10:59:20.987905  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:20.987933  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:20.987940  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:20.987944  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:20.990012  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:20.990031  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:20.990040  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:20.990044  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:20.990048  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:20.990052  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:20 GMT
	I0916 10:59:20.990054  174032 round_trippers.go:580]     Audit-Id: d0bfc9b6-ef16-49f1-9d56-a0653bded70e
	I0916 10:59:20.990058  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:20.990201  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m03","uid":"6e80e382-2eee-493d-a8de-e048ca27cfc5","resourceVersion":"839","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_57_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} [truncated 5597 chars]
	I0916 10:59:21.487795  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:21.487822  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:21.487830  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:21.487836  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:21.489981  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:21.490005  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:21.490015  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:21.490021  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:21.490026  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:21.490031  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:21 GMT
	I0916 10:59:21.490036  174032 round_trippers.go:580]     Audit-Id: 9cf1192b-8f9b-42a1-9345-348a35565f3a
	I0916 10:59:21.490042  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:21.490158  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m03","uid":"6e80e382-2eee-493d-a8de-e048ca27cfc5","resourceVersion":"867","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_57_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} [truncated 5170 chars]
	I0916 10:59:21.490489  174032 node_ready.go:49] node "multinode-079070-m03" has status "Ready":"True"
	I0916 10:59:21.490505  174032 node_ready.go:38] duration metric: took 2.003146257s for node "multinode-079070-m03" to be "Ready" ...
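The repeated GETs above are the node_ready.go poll loop: fetch the node object roughly every 500ms and stop once its Ready condition reports True (which here took just over 2s). A condensed client-go sketch of the same wait (assumes an already-built clientset; not minikube's literal code):

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitNodeReady polls the named node every 500ms until its Ready
// condition is True or the timeout expires, like the loop above.
func WaitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API error: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}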
	I0916 10:59:21.490514  174032 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:59:21.490575  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 10:59:21.490583  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:21.490590  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:21.490593  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:21.493377  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:21.493398  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:21.493408  174032 round_trippers.go:580]     Audit-Id: 84ac7c39-b81f-4954-bc20-5265775d8ff7
	I0916 10:59:21.493414  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:21.493419  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:21.493447  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:21.493457  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:21.493462  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:21 GMT
	I0916 10:59:21.494087  174032 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"867"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"816","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 90750 chars]
	I0916 10:59:21.496712  174032 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:21.496783  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 10:59:21.496791  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:21.496798  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:21.496802  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:21.498471  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:21.498490  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:21.498497  174032 round_trippers.go:580]     Audit-Id: 8793dda9-1500-42a4-abfc-770119ca4ab2
	I0916 10:59:21.498500  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:21.498503  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:21.498506  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:21.498509  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:21.498512  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:21 GMT
	I0916 10:59:21.498662  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"816","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6693 chars]
	I0916 10:59:21.499069  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:21.499082  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:21.499088  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:21.499093  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:21.500620  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:21.500635  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:21.500640  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:21.500645  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:21.500648  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:21.500651  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:21.500654  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:21 GMT
	I0916 10:59:21.500656  174032 round_trippers.go:580]     Audit-Id: 741dbd60-b8c9-40a1-ba9c-2d6686566127
	I0916 10:59:21.500817  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:21.501069  174032 pod_ready.go:93] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:21.501082  174032 pod_ready.go:82] duration metric: took 4.350501ms for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
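Each pod_ready check above is two requests: the pod itself, then the node it is scheduled on. The pod-side test is simply the PodReady condition; a sketch of it (hedged; pod_ready.go also cross-checks the node, as the paired GETs show):

package podready

import corev1 "k8s.io/api/core/v1"

// IsPodReady reports whether the pod's Ready condition is True, the
// test applied to coredns, etcd, and the other system pods above.
func IsPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}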
	I0916 10:59:21.501089  174032 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:21.501137  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-079070
	I0916 10:59:21.501144  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:21.501151  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:21.501154  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:21.502637  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:21.502653  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:21.502659  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:21.502663  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:21.502667  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:21 GMT
	I0916 10:59:21.502670  174032 round_trippers.go:580]     Audit-Id: ad2df49a-d09f-456c-b044-c91f35eeef0c
	I0916 10:59:21.502672  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:21.502675  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:21.502775  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-079070","namespace":"kube-system","uid":"fdcbee48-0d3b-47d7-acd2-298979345c7a","resourceVersion":"749","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.mirror":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.seen":"2024-09-16T10:56:25.844758522Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6653 chars]
	I0916 10:59:21.503111  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:21.503122  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:21.503129  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:21.503132  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:21.504545  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:21.504561  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:21.504567  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:21 GMT
	I0916 10:59:21.504574  174032 round_trippers.go:580]     Audit-Id: 801d1f0c-e565-4199-9550-8102c4f1f402
	I0916 10:59:21.504578  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:21.504581  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:21.504586  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:21.504590  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:21.504739  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:21.505035  174032 pod_ready.go:93] pod "etcd-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:21.505049  174032 pod_ready.go:82] duration metric: took 3.954218ms for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:21.505068  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:21.505130  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 10:59:21.505139  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:21.505147  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:21.505154  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:21.506693  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:21.506706  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:21.506722  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:21.506731  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:21.506736  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:21 GMT
	I0916 10:59:21.506741  174032 round_trippers.go:580]     Audit-Id: adb4bc83-6ca6-40f0-be17-5c6a508cc5e0
	I0916 10:59:21.506746  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:21.506751  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:21.506884  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"747","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8731 chars]
	I0916 10:59:21.507253  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:21.507274  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:21.507281  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:21.507288  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:21.508681  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:21.508701  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:21.508709  174032 round_trippers.go:580]     Audit-Id: 0550c89d-d032-434c-855d-50f159816cc1
	I0916 10:59:21.508716  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:21.508720  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:21.508725  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:21.508729  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:21.508734  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:21 GMT
	I0916 10:59:21.508881  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:21.509137  174032 pod_ready.go:93] pod "kube-apiserver-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:21.509149  174032 pod_ready.go:82] duration metric: took 4.070879ms for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:21.509158  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:21.509201  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-079070
	I0916 10:59:21.509209  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:21.509215  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:21.509218  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:21.510870  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:21.510898  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:21.510907  174032 round_trippers.go:580]     Audit-Id: ebd95e09-03cc-446d-be76-1b9a039425cc
	I0916 10:59:21.510911  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:21.510917  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:21.510920  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:21.510924  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:21.510928  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:21 GMT
	I0916 10:59:21.511163  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-079070","namespace":"kube-system","uid":"44a2f17c-654a-4d64-b474-70ce475c3afe","resourceVersion":"751","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.mirror":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.seen":"2024-09-16T10:56:25.844752735Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8306 chars]
	I0916 10:59:21.511538  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:21.511550  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:21.511557  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:21.511560  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:21.513013  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:21.513030  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:21.513038  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:21.513043  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:21 GMT
	I0916 10:59:21.513047  174032 round_trippers.go:580]     Audit-Id: 8d1a4bb9-089c-44d3-bf2e-4520118a2883
	I0916 10:59:21.513051  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:21.513056  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:21.513060  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:21.513164  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:21.513416  174032 pod_ready.go:93] pod "kube-controller-manager-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:21.513429  174032 pod_ready.go:82] duration metric: took 4.265896ms for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:21.513437  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:21.688810  174032 request.go:632] Waited for 175.320995ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vhmt
	I0916 10:59:21.688860  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vhmt
	I0916 10:59:21.688865  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:21.688873  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:21.688877  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:21.691250  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:21.691274  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:21.691283  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:21.691290  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:21.691296  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:21 GMT
	I0916 10:59:21.691302  174032 round_trippers.go:580]     Audit-Id: e8874a88-58c7-4208-ba74-7206235deffc
	I0916 10:59:21.691308  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:21.691314  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:21.691426  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2vhmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"6f3faf85-04e9-4840-855d-dd1ef9d4e463","resourceVersion":"664","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6388 chars]
	I0916 10:59:21.888311  174032 request.go:632] Waited for 196.354044ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:21.888391  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:21.888397  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:21.888407  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:21.888411  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:21.890535  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:21.890553  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:21.890561  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:21.890573  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:21 GMT
	I0916 10:59:21.890578  174032 round_trippers.go:580]     Audit-Id: 82afa85f-ddd8-4148-9209-d6812150eb26
	I0916 10:59:21.890582  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:21.890586  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:21.890590  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:21.890694  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:21.891005  174032 pod_ready.go:93] pod "kube-proxy-2vhmt" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:21.891023  174032 pod_ready.go:82] duration metric: took 377.578698ms for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
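The "Waited for ... due to client-side throttling" lines come from client-go's own rate limiter, not API priority and fairness: the rest.Config dumped earlier carries QPS:0 and Burst:0, which client-go treats as its defaults of 5 requests/s with a burst of 10, so the paired pod/node GETs get spaced out. Raising the limits before building the clientset would avoid the delay; a sketch under that assumption (not minikube's code):

package fastclient

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// NewFastClient builds a clientset whose rate limiter will not
// throttle the kind of poll traffic seen in the log above.
func NewFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // client-go default is 5 when left at 0
	cfg.Burst = 100 // default burst is 10 when left at 0
	return kubernetes.NewForConfig(cfg)
}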
	I0916 10:59:21.891040  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9z4qh" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:22.088124  174032 request.go:632] Waited for 197.012336ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 10:59:22.088199  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 10:59:22.088205  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:22.088211  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:22.088215  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:22.090513  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:22.090537  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:22.090545  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:22.090549  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:22 GMT
	I0916 10:59:22.090553  174032 round_trippers.go:580]     Audit-Id: aa47f626-a9fd-4f3f-aac1-80cae66814e1
	I0916 10:59:22.090558  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:22.090562  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:22.090568  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:22.090742  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9z4qh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d","resourceVersion":"826","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6408 chars]
	I0916 10:59:22.288620  174032 request.go:632] Waited for 197.392053ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:22.288675  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:22.288681  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:22.288688  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:22.288692  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:22.290846  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:22.290869  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:22.290879  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:22.290883  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:22.290889  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:22.290898  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:22.290903  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:22 GMT
	I0916 10:59:22.290907  174032 round_trippers.go:580]     Audit-Id: 0e8681ac-04fa-4ec5-8a6a-3d31af628b72
	I0916 10:59:22.291109  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m03","uid":"6e80e382-2eee-493d-a8de-e048ca27cfc5","resourceVersion":"867","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_57_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} [truncated 5170 chars]
	I0916 10:59:22.488541  174032 request.go:632] Waited for 97.275971ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 10:59:22.488604  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 10:59:22.488610  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:22.488617  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:22.488622  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:22.491266  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:22.491285  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:22.491291  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:22.491296  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:22 GMT
	I0916 10:59:22.491305  174032 round_trippers.go:580]     Audit-Id: e97d520f-1cbc-4b55-b307-fd11fd2152dc
	I0916 10:59:22.491310  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:22.491314  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:22.491319  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:22.491462  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9z4qh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d","resourceVersion":"826","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6408 chars]
	I0916 10:59:22.688348  174032 request.go:632] Waited for 196.300297ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:22.688429  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:22.688440  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:22.688449  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:22.688457  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:22.690465  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:22.690483  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:22.690489  174032 round_trippers.go:580]     Audit-Id: b466677e-1cda-407c-9f6c-dd3df358882f
	I0916 10:59:22.690494  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:22.690498  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:22.690503  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:22.690505  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:22.690509  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:22 GMT
	I0916 10:59:22.690622  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m03","uid":"6e80e382-2eee-493d-a8de-e048ca27cfc5","resourceVersion":"867","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_57_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} [truncated 5170 chars]
	I0916 10:59:22.892018  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 10:59:22.892047  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:22.892054  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:22.892058  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:22.894434  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:22.894460  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:22.894469  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:22.894475  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:22.894479  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:22.894486  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:22.894491  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:22 GMT
	I0916 10:59:22.894495  174032 round_trippers.go:580]     Audit-Id: 5ba12026-0238-4a53-b616-bc449f888609
	I0916 10:59:22.894657  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9z4qh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d","resourceVersion":"826","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6408 chars]
	I0916 10:59:23.088463  174032 request.go:632] Waited for 193.373477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:23.088523  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:23.088530  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:23.088540  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:23.088549  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:23.090878  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:23.090908  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:23.090918  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:23.090925  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:23.090933  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:23.090937  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:23 GMT
	I0916 10:59:23.090942  174032 round_trippers.go:580]     Audit-Id: ffeabd62-cce4-4dc8-a750-7342ff09f604
	I0916 10:59:23.090946  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:23.091075  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m03","uid":"6e80e382-2eee-493d-a8de-e048ca27cfc5","resourceVersion":"867","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_57_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} [truncated 5170 chars]
	I0916 10:59:23.391423  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 10:59:23.391454  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:23.391485  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:23.391492  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:23.393677  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:23.393697  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:23.393705  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:23 GMT
	I0916 10:59:23.393710  174032 round_trippers.go:580]     Audit-Id: 11ee141d-5902-494e-afd8-2082c645c523
	I0916 10:59:23.393714  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:23.393717  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:23.393722  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:23.393726  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:23.393867  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9z4qh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d","resourceVersion":"826","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6408 chars]
	I0916 10:59:23.488580  174032 request.go:632] Waited for 94.273332ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:23.488678  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:23.488689  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:23.488701  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:23.488707  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:23.490993  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:23.491029  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:23.491037  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:23.491043  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:23 GMT
	I0916 10:59:23.491046  174032 round_trippers.go:580]     Audit-Id: 0a31d506-65c8-4c04-af6e-f7640792e91b
	I0916 10:59:23.491051  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:23.491057  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:23.491061  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:23.491244  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m03","uid":"6e80e382-2eee-493d-a8de-e048ca27cfc5","resourceVersion":"867","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_57_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} [truncated 5170 chars]
	I0916 10:59:23.891853  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 10:59:23.891878  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:23.891886  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:23.891891  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:23.894142  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:23.894166  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:23.894175  174032 round_trippers.go:580]     Audit-Id: de95d077-d4ba-45f1-803d-8856e13f14dd
	I0916 10:59:23.894181  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:23.894186  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:23.894191  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:23.894198  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:23.894202  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:23 GMT
	I0916 10:59:23.894306  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9z4qh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d","resourceVersion":"826","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6408 chars]
	I0916 10:59:23.894779  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:23.894794  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:23.894801  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:23.894805  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:23.896594  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:23.896617  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:23.896626  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:23.896633  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:23.896638  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:23.896643  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:23.896649  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:23 GMT
	I0916 10:59:23.896654  174032 round_trippers.go:580]     Audit-Id: 03957404-ce42-4a4e-a9cf-13e511fcce31
	I0916 10:59:23.896780  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m03","uid":"6e80e382-2eee-493d-a8de-e048ca27cfc5","resourceVersion":"867","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_57_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} [truncated 5170 chars]
	I0916 10:59:23.897115  174032 pod_ready.go:103] pod "kube-proxy-9z4qh" in "kube-system" namespace has status "Ready":"False"
	I0916 10:59:24.391321  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 10:59:24.391348  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:24.391359  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:24.391364  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:24.394478  174032 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 10:59:24.394508  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:24.394518  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:24 GMT
	I0916 10:59:24.394523  174032 round_trippers.go:580]     Audit-Id: 854f59b2-d16b-4f96-ba05-8935b0fbea13
	I0916 10:59:24.394527  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:24.394531  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:24.394535  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:24.394541  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:24.394655  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9z4qh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d","resourceVersion":"872","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6811 chars]
	I0916 10:59:24.395332  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:24.395362  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:24.395374  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:24.395382  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:24.397449  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:24.397471  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:24.397481  174032 round_trippers.go:580]     Audit-Id: bd0f4c59-e112-4d7a-8694-b402479b3110
	I0916 10:59:24.397488  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:24.397492  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:24.397496  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:24.397501  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:24.397506  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:24 GMT
	I0916 10:59:24.397616  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m03","uid":"6e80e382-2eee-493d-a8de-e048ca27cfc5","resourceVersion":"867","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_57_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} [truncated 5170 chars]
	I0916 10:59:24.891340  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 10:59:24.891362  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:24.891370  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:24.891374  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:24.893887  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:24.893914  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:24.893923  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:24 GMT
	I0916 10:59:24.893931  174032 round_trippers.go:580]     Audit-Id: bbe46e5e-8b2c-4921-8d0a-1ebf0a0e79ce
	I0916 10:59:24.893938  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:24.893942  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:24.893945  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:24.893951  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:24.894111  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9z4qh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d","resourceVersion":"872","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6811 chars]
	I0916 10:59:24.894657  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:24.894673  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:24.894683  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:24.894688  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:24.896727  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:24.896746  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:24.896755  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:24.896761  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:24.896767  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:24.896774  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:24.896778  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:24 GMT
	I0916 10:59:24.896783  174032 round_trippers.go:580]     Audit-Id: 6ee12fc0-41c5-44af-a9c9-7ba7d76fb4ad
	I0916 10:59:24.896907  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m03","uid":"6e80e382-2eee-493d-a8de-e048ca27cfc5","resourceVersion":"867","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_57_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} [truncated 5170 chars]
	I0916 10:59:25.391317  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 10:59:25.391345  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:25.391356  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:25.391365  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:25.393656  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:25.393675  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:25.393681  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:25.393686  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:25 GMT
	I0916 10:59:25.393689  174032 round_trippers.go:580]     Audit-Id: dc1295a9-21fe-4bd2-a514-441c349c41b2
	I0916 10:59:25.393692  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:25.393694  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:25.393697  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:25.393855  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9z4qh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d","resourceVersion":"882","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6396 chars]
	I0916 10:59:25.394335  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 10:59:25.394362  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:25.394372  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:25.394381  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:25.396191  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:25.396206  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:25.396212  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:25 GMT
	I0916 10:59:25.396216  174032 round_trippers.go:580]     Audit-Id: ed410179-2138-4c6a-8991-3075cca9d857
	I0916 10:59:25.396221  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:25.396224  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:25.396227  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:25.396250  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:25.396365  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m03","uid":"6e80e382-2eee-493d-a8de-e048ca27cfc5","resourceVersion":"886","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_57_29_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}} [truncated 5048 chars]
	I0916 10:59:25.396665  174032 pod_ready.go:93] pod "kube-proxy-9z4qh" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:25.396681  174032 pod_ready.go:82] duration metric: took 3.505634583s for pod "kube-proxy-9z4qh" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:25.396693  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xkr65" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:25.396750  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 10:59:25.396757  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:25.396764  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:25.396770  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:25.398574  174032 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 10:59:25.398592  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:25.398603  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:25 GMT
	I0916 10:59:25.398607  174032 round_trippers.go:580]     Audit-Id: f4e55977-b989-46e2-9178-faa9d2f554ab
	I0916 10:59:25.398611  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:25.398615  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:25.398619  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:25.398625  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:25.398798  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"768","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6396 chars]
	I0916 10:59:25.488530  174032 request.go:632] Waited for 89.272478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:59:25.488635  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 10:59:25.488646  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:25.488658  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:25.488670  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:25.491050  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:25.491073  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:25.491080  174032 round_trippers.go:580]     Audit-Id: a51d5503-983f-4a94-9879-ba69f1c9e08a
	I0916 10:59:25.491085  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:25.491088  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:25.491091  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:25.491097  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:25.491103  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:25 GMT
	I0916 10:59:25.491297  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"757","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 5023 chars]
	I0916 10:59:25.491635  174032 pod_ready.go:93] pod "kube-proxy-xkr65" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:25.491653  174032 pod_ready.go:82] duration metric: took 94.952149ms for pod "kube-proxy-xkr65" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:25.491663  174032 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:25.688015  174032 request.go:632] Waited for 196.281862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 10:59:25.688110  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 10:59:25.688121  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:25.688133  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:25.688145  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:25.690361  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:25.690382  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:25.690388  174032 round_trippers.go:580]     Audit-Id: 3eb481b0-6de9-4d45-ab51-23f41d07aecd
	I0916 10:59:25.690393  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:25.690396  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:25.690399  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:25.690402  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:25.690406  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:25 GMT
	I0916 10:59:25.690482  174032 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"740","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5188 chars]
	I0916 10:59:25.888234  174032 request.go:632] Waited for 197.386807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:25.888314  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 10:59:25.888322  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:25.888329  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:25.888333  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:25.890546  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:25.890574  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:25.890585  174032 round_trippers.go:580]     Audit-Id: 29b48c5c-8c46-4919-b92c-db235c1e5e7c
	I0916 10:59:25.890592  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:25.890602  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:25.890608  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:25.890613  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:25.890616  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:25 GMT
	I0916 10:59:25.890758  174032 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 10:59:25.891187  174032 pod_ready.go:93] pod "kube-scheduler-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 10:59:25.891215  174032 pod_ready.go:82] duration metric: took 399.543046ms for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 10:59:25.891228  174032 pod_ready.go:39] duration metric: took 4.400701752s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 10:59:25.891261  174032 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 10:59:25.891316  174032 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:59:25.902227  174032 system_svc.go:56] duration metric: took 10.97099ms WaitForService to wait for kubelet
	I0916 10:59:25.902257  174032 kubeadm.go:582] duration metric: took 6.502231373s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:59:25.902273  174032 node_conditions.go:102] verifying NodePressure condition ...
	I0916 10:59:26.088670  174032 request.go:632] Waited for 186.318774ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0916 10:59:26.088738  174032 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 10:59:26.088743  174032 round_trippers.go:469] Request Headers:
	I0916 10:59:26.088750  174032 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 10:59:26.088754  174032 round_trippers.go:473]     Accept: application/json, */*
	I0916 10:59:26.090941  174032 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 10:59:26.090959  174032 round_trippers.go:577] Response Headers:
	I0916 10:59:26.090965  174032 round_trippers.go:580]     Content-Type: application/json
	I0916 10:59:26.090969  174032 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 10:59:26.090978  174032 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 10:59:26.090981  174032 round_trippers.go:580]     Date: Mon, 16 Sep 2024 10:59:26 GMT
	I0916 10:59:26.090984  174032 round_trippers.go:580]     Audit-Id: 42c990e0-82a9-4923-827c-909d41cdde53
	I0916 10:59:26.090987  174032 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 10:59:26.091232  174032 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"886"},"items":[{"metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1", [truncated 17343 chars]
	I0916 10:59:26.091904  174032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:59:26.091922  174032 node_conditions.go:123] node cpu capacity is 8
	I0916 10:59:26.091930  174032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:59:26.091934  174032 node_conditions.go:123] node cpu capacity is 8
	I0916 10:59:26.091937  174032 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 10:59:26.091940  174032 node_conditions.go:123] node cpu capacity is 8
	I0916 10:59:26.091944  174032 node_conditions.go:105] duration metric: took 189.667256ms to run NodePressure ...
	I0916 10:59:26.091957  174032 start.go:241] waiting for startup goroutines ...
	I0916 10:59:26.091978  174032 start.go:255] writing updated cluster config ...
	I0916 10:59:26.092239  174032 ssh_runner.go:195] Run: rm -f paused
	I0916 10:59:26.098860  174032 out.go:177] * Done! kubectl is now configured to use "multinode-079070" cluster and "default" namespace by default
	E0916 10:59:26.099909  174032 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
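
The trace above is minikube's readiness wait loop (pod_ready.go): it alternates GETs against each system pod and its node until the pod reports condition "Ready":"True", and the request.go:632 lines show client-go's client-side rate limiter spacing those polls out. Below is a minimal sketch of an equivalent wait loop written against client-go; the function name waitPodReady, the 500ms poll interval, and the kubeconfig loading are illustrative assumptions, not minikube's actual implementation.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls a pod until its PodReady condition is True, the same
    // check the pod_ready.go trace above performs against /api/v1/.../pods/<name>.
    // Interval and timeout are illustrative, not minikube's values.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, err // a real caller may want to tolerate transient errors
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil // Ready condition not reported yet
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-9z4qh", 6*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("pod is Ready")
    }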
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b73ca772183b5       6e38f40d628db       12 seconds ago       Running             storage-provisioner       2                   d32ce1cb88c20       storage-provisioner
	e7dd060f7494b       12968670680f4       58 seconds ago       Running             kindnet-cni               1                   60b7e4d184cb1       kindnet-flmdv
	f11253e8ef61a       60c005f310ff3       58 seconds ago       Running             kube-proxy                1                   bd07015878a2b       kube-proxy-2vhmt
	b1134a94f20ca       6e38f40d628db       58 seconds ago       Exited              storage-provisioner       1                   d32ce1cb88c20       storage-provisioner
	9f936546ae131       c69fa2e9cbf5f       58 seconds ago       Running             coredns                   1                   9f22be20239e6       coredns-7c65d6cfc9-ft9gh
	fb80a77bac6e7       8c811b4aec35f       59 seconds ago       Running             busybox                   1                   8cc3146f6064d       busybox-7dff88458-pjlvx
	ca0cc800d9c78       9aa1fad941575       About a minute ago   Running             kube-scheduler            1                   cfba435487c50       kube-scheduler-multinode-079070
	50645a9df44a5       2e96e5913fc06       About a minute ago   Running             etcd                      1                   f06f43a302aa5       etcd-multinode-079070
	f8c9dd99b83da       6bab7719df100       About a minute ago   Running             kube-apiserver            1                   b27c67f5a330d       kube-apiserver-multinode-079070
	224f3c76893fd       175ffd71cce3d       About a minute ago   Running             kube-controller-manager   1                   20b671abc1444       kube-controller-manager-multinode-079070
	8414e0e62b35b       8c811b4aec35f       2 minutes ago        Exited              busybox                   0                   10183dc0f9d0a       busybox-7dff88458-pjlvx
	8954864d99d22       c69fa2e9cbf5f       2 minutes ago        Exited              coredns                   0                   fa69986f2f5d5       coredns-7c65d6cfc9-ft9gh
	de61885ae0251       12968670680f4       3 minutes ago        Exited              kindnet-cni               0                   a9b3bc3ef2872       kindnet-flmdv
	809210a041e03       60c005f310ff3       3 minutes ago        Exited              kube-proxy                0                   d6e6b6a3008e8       kube-proxy-2vhmt
	941f1dc8e3837       175ffd71cce3d       3 minutes ago        Exited              kube-controller-manager   0                   84635e5713cec       kube-controller-manager-multinode-079070
	0bc7fe20ff6ae       2e96e5913fc06       3 minutes ago        Exited              etcd                      0                   a53811583dd27       etcd-multinode-079070
	5d29b7e4482f8       9aa1fad941575       3 minutes ago        Exited              kube-scheduler            0                   b33679bbe5cbf       kube-scheduler-multinode-079070
	411c657184dfd       6bab7719df100       3 minutes ago        Exited              kube-apiserver            0                   c43b3a5fe0f9f       kube-apiserver-multinode-079070
	
	
	==> containerd <==
	Sep 16 10:58:33 multinode-079070 containerd[596]: time="2024-09-16T10:58:33.921005597Z" level=info msg="StartContainer for \"f11253e8ef61a01c3740b24f1b74855531922a5c71ae0705b35472b9baa28a46\""
	Sep 16 10:58:33 multinode-079070 containerd[596]: time="2024-09-16T10:58:33.946813879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-flmdv,Uid:91449e63-0ca3-4dc6-92ef-e3c5ab102dae,Namespace:kube-system,Attempt:1,} returns sandbox id \"60b7e4d184cb169985671537c0bc01666b359f85cd365056590debfe2f39aab0\""
	Sep 16 10:58:33 multinode-079070 containerd[596]: time="2024-09-16T10:58:33.952529761Z" level=info msg="CreateContainer within sandbox \"60b7e4d184cb169985671537c0bc01666b359f85cd365056590debfe2f39aab0\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:1,}"
	Sep 16 10:58:34 multinode-079070 containerd[596]: time="2024-09-16T10:58:34.035767710Z" level=info msg="StartContainer for \"b1134a94f20ca5932c1e053b207a4740304820db6ac8adbb7c2968f5a686c406\" returns successfully"
	Sep 16 10:58:34 multinode-079070 containerd[596]: time="2024-09-16T10:58:34.036243129Z" level=info msg="CreateContainer within sandbox \"60b7e4d184cb169985671537c0bc01666b359f85cd365056590debfe2f39aab0\" for &ContainerMetadata{Name:kindnet-cni,Attempt:1,} returns container id \"e7dd060f7494bc9b42225cbca571b99a4eff363411d2e3c5d94b7fe635b2c5fc\""
	Sep 16 10:58:34 multinode-079070 containerd[596]: time="2024-09-16T10:58:34.036820007Z" level=info msg="StartContainer for \"e7dd060f7494bc9b42225cbca571b99a4eff363411d2e3c5d94b7fe635b2c5fc\""
	Sep 16 10:58:34 multinode-079070 containerd[596]: time="2024-09-16T10:58:34.124152508Z" level=info msg="StartContainer for \"f11253e8ef61a01c3740b24f1b74855531922a5c71ae0705b35472b9baa28a46\" returns successfully"
	Sep 16 10:58:34 multinode-079070 containerd[596]: time="2024-09-16T10:58:34.143898459Z" level=info msg="StartContainer for \"e7dd060f7494bc9b42225cbca571b99a4eff363411d2e3c5d94b7fe635b2c5fc\" returns successfully"
	Sep 16 10:59:04 multinode-079070 containerd[596]: time="2024-09-16T10:59:04.072652360Z" level=info msg="shim disconnected" id=b1134a94f20ca5932c1e053b207a4740304820db6ac8adbb7c2968f5a686c406 namespace=k8s.io
	Sep 16 10:59:04 multinode-079070 containerd[596]: time="2024-09-16T10:59:04.072722993Z" level=warning msg="cleaning up after shim disconnected" id=b1134a94f20ca5932c1e053b207a4740304820db6ac8adbb7c2968f5a686c406 namespace=k8s.io
	Sep 16 10:59:04 multinode-079070 containerd[596]: time="2024-09-16T10:59:04.072756012Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 10:59:05 multinode-079070 containerd[596]: time="2024-09-16T10:59:05.001186020Z" level=info msg="RemoveContainer for \"269042fd7e0657021f86f96623c9937f1e0659eae415545c3508c149871ca048\""
	Sep 16 10:59:05 multinode-079070 containerd[596]: time="2024-09-16T10:59:05.007315812Z" level=info msg="RemoveContainer for \"269042fd7e0657021f86f96623c9937f1e0659eae415545c3508c149871ca048\" returns successfully"
	Sep 16 10:59:19 multinode-079070 containerd[596]: time="2024-09-16T10:59:19.730275550Z" level=info msg="CreateContainer within sandbox \"d32ce1cb88c20ac8ca6149fde456d7e025bf77ddc2ac01480186d3e26b399b4a\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
	Sep 16 10:59:19 multinode-079070 containerd[596]: time="2024-09-16T10:59:19.743514184Z" level=info msg="CreateContainer within sandbox \"d32ce1cb88c20ac8ca6149fde456d7e025bf77ddc2ac01480186d3e26b399b4a\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"b73ca772183b59baa5be8014c698ba6eb41bdef3f84e5083cb7a0313a0fb938d\""
	Sep 16 10:59:19 multinode-079070 containerd[596]: time="2024-09-16T10:59:19.744116888Z" level=info msg="StartContainer for \"b73ca772183b59baa5be8014c698ba6eb41bdef3f84e5083cb7a0313a0fb938d\""
	Sep 16 10:59:19 multinode-079070 containerd[596]: time="2024-09-16T10:59:19.790386662Z" level=info msg="StartContainer for \"b73ca772183b59baa5be8014c698ba6eb41bdef3f84e5083cb7a0313a0fb938d\" returns successfully"
	Sep 16 10:59:27 multinode-079070 containerd[596]: time="2024-09-16T10:59:27.674718080Z" level=info msg="StopPodSandbox for \"097580079dfa797965f8dcd252f3f8ef6da8dfed59ac02ea630737566fc8330d\""
	Sep 16 10:59:27 multinode-079070 containerd[596]: time="2024-09-16T10:59:27.674849486Z" level=info msg="TearDown network for sandbox \"097580079dfa797965f8dcd252f3f8ef6da8dfed59ac02ea630737566fc8330d\" successfully"
	Sep 16 10:59:27 multinode-079070 containerd[596]: time="2024-09-16T10:59:27.674861732Z" level=info msg="StopPodSandbox for \"097580079dfa797965f8dcd252f3f8ef6da8dfed59ac02ea630737566fc8330d\" returns successfully"
	Sep 16 10:59:27 multinode-079070 containerd[596]: time="2024-09-16T10:59:27.675352592Z" level=info msg="RemovePodSandbox for \"097580079dfa797965f8dcd252f3f8ef6da8dfed59ac02ea630737566fc8330d\""
	Sep 16 10:59:27 multinode-079070 containerd[596]: time="2024-09-16T10:59:27.675386162Z" level=info msg="Forcibly stopping sandbox \"097580079dfa797965f8dcd252f3f8ef6da8dfed59ac02ea630737566fc8330d\""
	Sep 16 10:59:27 multinode-079070 containerd[596]: time="2024-09-16T10:59:27.675478777Z" level=info msg="TearDown network for sandbox \"097580079dfa797965f8dcd252f3f8ef6da8dfed59ac02ea630737566fc8330d\" successfully"
	Sep 16 10:59:27 multinode-079070 containerd[596]: time="2024-09-16T10:59:27.681084377Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"097580079dfa797965f8dcd252f3f8ef6da8dfed59ac02ea630737566fc8330d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 16 10:59:27 multinode-079070 containerd[596]: time="2024-09-16T10:59:27.681180978Z" level=info msg="RemovePodSandbox \"097580079dfa797965f8dcd252f3f8ef6da8dfed59ac02ea630737566fc8330d\" returns successfully"
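
The StopPodSandbox/RemovePodSandbox lines above are containerd's CRI plugin servicing sandbox teardown: the sandbox network is torn down first, then the sandbox is removed, and the "an error occurred when try to find sandbox: not found" warning is benign once removal has already completed. A minimal sketch of issuing those same two CRI calls via k8s.io/cri-api follows; the socket path comes from the kubeadm.alpha.kubernetes.io/cri-socket annotation in the node objects earlier in this log, while the dial options and error handling are illustrative assumptions.

    package main

    import (
    	"context"
    	"fmt"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Dial the CRI endpoint recorded in the node annotations above.
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx := context.Background()
    	id := "097580079dfa797965f8dcd252f3f8ef6da8dfed59ac02ea630737566fc8330d" // sandbox ID from the log above

    	// Stop tears down the sandbox network; Remove deletes the sandbox.
    	// containerd logs both steps, as seen in the entries above.
    	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
    		panic(err)
    	}
    	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
    		panic(err)
    	}
    	fmt.Println("sandbox stopped and removed")
    }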
	
	
	==> coredns [8954864d99d2239b93c763ac312c7353b37bfba4eb693480619025ea3402616f] <==
	[INFO] 10.244.0.3:51056 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102475s
	[INFO] 10.244.1.2:41548 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178899s
	[INFO] 10.244.1.2:39453 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001782363s
	[INFO] 10.244.1.2:56115 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130511s
	[INFO] 10.244.1.2:37210 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000101251s
	[INFO] 10.244.1.2:55581 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001236938s
	[INFO] 10.244.1.2:35975 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081083s
	[INFO] 10.244.1.2:42877 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073809s
	[INFO] 10.244.1.2:41783 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084902s
	[INFO] 10.244.0.3:55155 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116031s
	[INFO] 10.244.0.3:59444 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115061s
	[INFO] 10.244.0.3:34308 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000088507s
	[INFO] 10.244.0.3:40765 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088438s
	[INFO] 10.244.1.2:59446 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204406s
	[INFO] 10.244.1.2:52620 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000138315s
	[INFO] 10.244.1.2:51972 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105158s
	[INFO] 10.244.1.2:47877 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000087457s
	[INFO] 10.244.0.3:45741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000142885s
	[INFO] 10.244.0.3:32935 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000169213s
	[INFO] 10.244.0.3:49721 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000165206s
	[INFO] 10.244.0.3:45554 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000109895s
	[INFO] 10.244.1.2:44123 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168559s
	[INFO] 10.244.1.2:55322 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000107325s
	[INFO] 10.244.1.2:36098 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000102498s
	[INFO] 10.244.1.2:57704 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000095141s
	
	
	==> coredns [9f936546ae13163e90e47cc8dcec45a4a44eb6f873708c6deb509ebe216c4213] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:39434 - 23847 "HINFO IN 6529897643441096498.450809085404921830. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010818363s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1119269091]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:58:33.843) (total time: 30001ms):
	Trace[1119269091]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:59:03.845)
	Trace[1119269091]: [30.001853356s] [30.001853356s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1128850326]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:58:33.843) (total time: 30001ms):
	Trace[1128850326]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:59:03.845)
	Trace[1128850326]: [30.001976921s] [30.001976921s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[595338145]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:58:33.843) (total time: 30002ms):
	Trace[595338145]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:59:03.845)
	Trace[595338145]: [30.002100587s] [30.002100587s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
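	
	Note: the repeated "dial tcp 10.96.0.1:443: i/o timeout" entries above show this CoreDNS instance's kubernetes plugin unable to reach the apiserver ClusterIP for about 30s after the node restart. A minimal Go sketch of the same connectivity probe, assuming it is run from inside a cluster pod (the address and 5s timeout are illustrative, not taken from the report):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// Dial the default kubernetes Service ClusterIP, as CoreDNS's
		// client-go reflector does on list/watch; a timeout here
		// reproduces the "i/o timeout" errors logged above.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("apiserver ClusterIP unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver ClusterIP reachable")
	}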
	
	
	==> describe nodes <==
	Name:               multinode-079070
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-079070
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-079070
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_56_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:56:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-079070
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:59:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:58:31 +0000   Mon, 16 Sep 2024 10:56:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:58:31 +0000   Mon, 16 Sep 2024 10:56:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:58:31 +0000   Mon, 16 Sep 2024 10:56:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:58:31 +0000   Mon, 16 Sep 2024 10:56:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-079070
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 15ad3dbf907a4c7d94e3f5f54f517fad
	  System UUID:                aacf5fc8-9d89-4df8-b6e3-7265bb86b554
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pjlvx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 coredns-7c65d6cfc9-ft9gh                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m1s
	  kube-system                 etcd-multinode-079070                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m6s
	  kube-system                 kindnet-flmdv                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m1s
	  kube-system                 kube-apiserver-multinode-079070             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 kube-controller-manager-multinode-079070    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 kube-proxy-2vhmt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 kube-scheduler-multinode-079070             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 3m                 kube-proxy       
	  Normal   Starting                 58s                kube-proxy       
	  Normal   Starting                 3m7s               kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m7s               kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  3m6s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m6s               kubelet          Node multinode-079070 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m6s               kubelet          Node multinode-079070 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m6s               kubelet          Node multinode-079070 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m2s               node-controller  Node multinode-079070 event: Registered Node multinode-079070 in Controller
	  Normal   Starting                 65s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node multinode-079070 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x7 over 65s)  kubelet          Node multinode-079070 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x7 over 65s)  kubelet          Node multinode-079070 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  65s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           57s                node-controller  Node multinode-079070 event: Registered Node multinode-079070 in Controller
	
	
	Name:               multinode-079070-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-079070-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-079070
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_56_58_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:56:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-079070-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 10:59:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 10:58:56 +0000   Mon, 16 Sep 2024 10:56:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 10:58:56 +0000   Mon, 16 Sep 2024 10:56:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 10:58:56 +0000   Mon, 16 Sep 2024 10:56:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 10:58:56 +0000   Mon, 16 Sep 2024 10:56:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.3
	  Hostname:    multinode-079070-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 31234acbee2046a4936672de45454374
	  System UUID:                230f6bd5-a1b9-46e1-be41-9ec64c608739
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x6h7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kindnet-fs5x4              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m35s
	  kube-system                 kube-proxy-xkr65           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m32s                  kube-proxy       
	  Normal   Starting                 32s                    kube-proxy       
	  Normal   NodeHasNoDiskPressure    2m35s (x2 over 2m35s)  kubelet          Node multinode-079070-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x2 over 2m35s)  kubelet          Node multinode-079070-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m35s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 2m35s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  2m35s (x2 over 2m35s)  kubelet          Node multinode-079070-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                2m34s                  kubelet          Node multinode-079070-m02 status is now: NodeReady
	  Normal   RegisteredNode           2m32s                  node-controller  Node multinode-079070-m02 event: Registered Node multinode-079070-m02 in Controller
	  Normal   RegisteredNode           57s                    node-controller  Node multinode-079070-m02 event: Registered Node multinode-079070-m02 in Controller
	  Normal   Starting                 42s                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  42s                    kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 42s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  36s (x7 over 42s)      kubelet          Node multinode-079070-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s (x7 over 42s)      kubelet          Node multinode-079070-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     36s (x7 over 42s)      kubelet          Node multinode-079070-m02 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.000003] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +1.019209] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000002] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000001] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +2.015822] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000001] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000001] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +4.155596] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000023] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000002] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000001] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +8.191238] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000005] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000001] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000001] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
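	
	Note: the "martian source 10.96.0.1 from 10.244.0.3" entries above record the kernel's reverse-path filter flagging packets whose source address (the kubernetes Service ClusterIP) arrived on an interface it did not expect (the Docker bridge br-49585fce923a), which is consistent with the CoreDNS i/o timeouts earlier in this report. A hedged Go sketch for inspecting the sysctls that govern this behavior on a Linux host (paths assume a standard /proc layout):
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		// rp_filter controls whether such packets are dropped;
		// log_martians controls whether they are logged as above.
		for _, key := range []string{
			"/proc/sys/net/ipv4/conf/all/rp_filter",
			"/proc/sys/net/ipv4/conf/all/log_martians",
		} {
			b, err := os.ReadFile(key)
			if err != nil {
				fmt.Println(key, "->", err)
				continue
			}
			fmt.Println(key, "->", strings.TrimSpace(string(b)))
		}
	}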
	
	
	==> etcd [0bc7fe20ff6ae92cd3f996cddadca6ddb2788e2f661cd3c4b2f9fb33045bed71] <==
	{"level":"info","ts":"2024-09-16T10:56:21.548252Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:56:21.548288Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:56:21.548321Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T10:56:21.548342Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T10:56:22.035573Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T10:56:22.035616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T10:56:22.035652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2024-09-16T10:56:22.035677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:56:22.035691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-09-16T10:56:22.035701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T10:56:22.035712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-09-16T10:56:22.036773Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-079070 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:56:22.036773Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:56:22.036802Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:56:22.036801Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:56:22.037065Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:56:22.037130Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:56:22.037464Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:56:22.037668Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:56:22.037772Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:56:22.037989Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:56:22.037989Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:56:22.038884Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-09-16T10:56:22.038985Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:56:48.788053Z","caller":"traceutil/trace.go:171","msg":"trace[1037408987] transaction","detail":"{read_only:false; response_revision:421; number_of_response:1; }","duration":"200.765868ms","start":"2024-09-16T10:56:48.587270Z","end":"2024-09-16T10:56:48.788036Z","steps":["trace[1037408987] 'process raft request'  (duration: 200.648474ms)"],"step_count":1}
	
	
	==> etcd [50645a9df44a5be5ef6705e3c8cc321dc230a8a742eff68356246f7fd9869b85] <==
	{"level":"info","ts":"2024-09-16T10:58:28.752744Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:58:28.752781Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:58:28.752793Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:58:28.753007Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:58:28.753018Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:58:28.754378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2024-09-16T10:58:28.754448Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:58:28.754544Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:58:28.754573Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:58:30.638607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:58:30.638667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:58:30.638691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-09-16T10:58:30.638704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:58:30.638709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-09-16T10:58:30.638718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:58:30.638725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-09-16T10:58:30.641363Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-079070 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:58:30.641415Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:58:30.641435Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:58:30.641822Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:58:30.641894Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:58:30.642596Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:58:30.642630Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:58:30.643443Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:58:30.643446Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	
	
	==> kernel <==
	 10:59:32 up 41 min,  0 users,  load average: 0.91, 1.15, 1.05
	Linux multinode-079070 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [de61885ae02518041c7aa7ce71f66fe6f83e66c09666b89a7765dd6c5955ef2e] <==
	I0916 10:57:32.820303       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:57:32.820363       1 main.go:299] handling current node
	I0916 10:57:32.820385       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:57:32.820394       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:57:32.820565       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:57:32.820582       1 main.go:322] Node multinode-079070-m03 has CIDR [10.244.2.0/24] 
	I0916 10:57:32.820644       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.67.4 Flags: [] Table: 0} 
	I0916 10:57:42.827816       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:57:42.827849       1 main.go:299] handling current node
	I0916 10:57:42.827865       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:57:42.827871       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:57:42.827984       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:57:42.827997       1 main.go:322] Node multinode-079070-m03 has CIDR [10.244.2.0/24] 
	I0916 10:57:52.828123       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:57:52.828162       1 main.go:299] handling current node
	I0916 10:57:52.828178       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:57:52.828183       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:57:52.828306       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:57:52.828313       1 main.go:322] Node multinode-079070-m03 has CIDR [10.244.2.0/24] 
	I0916 10:58:02.821669       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:58:02.821717       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:58:02.821855       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:58:02.821865       1 main.go:322] Node multinode-079070-m03 has CIDR [10.244.2.0/24] 
	I0916 10:58:02.821902       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:58:02.821909       1 main.go:299] handling current node
	
	
	==> kindnet [e7dd060f7494bc9b42225cbca571b99a4eff363411d2e3c5d94b7fe635b2c5fc] <==
	I0916 10:58:44.725586       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.67.3 Flags: [] Table: 0} 
	I0916 10:58:54.720325       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:58:54.720363       1 main.go:299] handling current node
	I0916 10:58:54.720378       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:58:54.720382       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:58:54.720496       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:58:54.720504       1 main.go:322] Node multinode-079070-m03 has CIDR [10.244.2.0/24] 
	I0916 10:59:04.724555       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:59:04.724597       1 main.go:299] handling current node
	I0916 10:59:04.724612       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:59:04.724617       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:59:04.724755       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:59:04.724767       1 main.go:322] Node multinode-079070-m03 has CIDR [10.244.2.0/24] 
	I0916 10:59:14.722919       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:59:14.722968       1 main.go:299] handling current node
	I0916 10:59:14.722983       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:59:14.722988       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:59:14.723127       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:59:14.723143       1 main.go:322] Node multinode-079070-m03 has CIDR [10.244.2.0/24] 
	I0916 10:59:24.721022       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:59:24.721063       1 main.go:299] handling current node
	I0916 10:59:24.721079       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:59:24.721084       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:59:24.721270       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:59:24.721281       1 main.go:322] Node multinode-079070-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [411c657184dfd15c5a637bda842998291203948392b41c07d2e8b35719214e87] <==
	I0916 10:56:24.478924       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 10:56:24.483098       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 10:56:24.483123       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:56:24.887180       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 10:56:24.923351       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 10:56:25.030521       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 10:56:25.037379       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0916 10:56:25.038608       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:56:25.042579       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 10:56:25.548706       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 10:56:25.953503       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 10:56:25.964413       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 10:56:25.974975       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 10:56:31.130667       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 10:56:31.150004       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0916 10:57:18.122976       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35682: use of closed network connection
	E0916 10:57:18.268644       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35704: use of closed network connection
	E0916 10:57:18.422165       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35718: use of closed network connection
	E0916 10:57:18.568802       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35742: use of closed network connection
	E0916 10:57:18.713040       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35752: use of closed network connection
	E0916 10:57:18.854979       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35772: use of closed network connection
	E0916 10:57:19.111050       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35800: use of closed network connection
	E0916 10:57:19.253105       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35822: use of closed network connection
	E0916 10:57:19.403005       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35840: use of closed network connection
	E0916 10:57:19.547708       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:35870: use of closed network connection
	
	
	==> kube-apiserver [f8c9dd99b83dacf4270ec16fb010b101dbdc6c7542deaf690a717fb265515d4a] <==
	I0916 10:58:31.622575       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0916 10:58:31.622741       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0916 10:58:31.621445       1 cluster_authentication_trust_controller.go:443] Starting cluster_authentication_trust_controller controller
	I0916 10:58:31.622951       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0916 10:58:31.724432       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:58:31.725774       1 policy_source.go:224] refreshing policies
	I0916 10:58:31.726045       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:58:31.724631       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:58:31.724652       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:58:31.727817       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:58:31.727964       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:58:31.728042       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:58:31.728080       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:58:31.728136       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:58:31.738090       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:58:31.740343       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:58:31.820145       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:58:31.823882       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:58:31.823988       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:58:31.824807       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:58:31.824824       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0916 10:58:31.844633       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:58:32.624625       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:58:35.346502       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:58:35.394006       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [224f3c76893fd9b065b89216b3facf9e0652faec36d68b791b48068b9f5cef50] <==
	I0916 10:58:35.295610       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:58:35.301765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="250.90877ms"
	I0916 10:58:35.301873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.447µs"
	I0916 10:58:35.710816       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:58:35.727216       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:58:35.727248       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:58:56.459155       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m02"
	I0916 10:58:59.100413       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.583041ms"
	I0916 10:58:59.100515       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.596µs"
	I0916 10:59:00.143017       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.083825ms"
	I0916 10:59:00.143132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="65.535µs"
	I0916 10:59:11.803604       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.310689ms"
	I0916 10:59:11.803761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="76.134µs"
	I0916 10:59:15.305747       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:15.305805       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-079070-m02"
	I0916 10:59:15.316502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:20.349665       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:21.457175       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:21.457182       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-079070-m03"
	I0916 10:59:21.465445       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:25.333709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:26.632146       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:26.641172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:27.126032       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-079070-m02"
	I0916 10:59:27.126069       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	
	
	==> kube-controller-manager [941f1dc8e383770d56fc04131cd6e118a0b22f2035d16d7cd123273e0f80863c] <==
	I0916 10:57:00.349486       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-079070-m02"
	I0916 10:57:02.363803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="22.564249ms"
	I0916 10:57:02.368707       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.838562ms"
	I0916 10:57:02.368809       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.079µs"
	I0916 10:57:02.373194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="61.291µs"
	I0916 10:57:02.377414       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="48.53µs"
	I0916 10:57:05.024697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.41303ms"
	I0916 10:57:05.024803       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="59.231µs"
	I0916 10:57:17.736156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="4.686451ms"
	I0916 10:57:17.736242       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="46.644µs"
	I0916 10:57:27.562728       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070"
	I0916 10:57:28.354952       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m02"
	I0916 10:57:29.133862       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-079070-m03\" does not exist"
	I0916 10:57:29.133865       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-079070-m02"
	I0916 10:57:29.139577       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-079070-m03" podCIDRs=["10.244.2.0/24"]
	I0916 10:57:29.139620       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:29.139698       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:29.145420       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:29.203923       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:29.443782       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:30.057860       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:30.057909       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-079070-m02"
	I0916 10:57:30.065546       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:57:30.353600       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-079070-m03"
	I0916 10:57:54.399879       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	
	
	==> kube-proxy [809210a041e030e61062aa021eb36041df90e322c3257f94c546c420614699bc] <==
	I0916 10:56:32.029982       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:56:32.179672       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	E0916 10:56:32.179750       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:56:32.234955       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:56:32.235009       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:56:32.237569       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:56:32.237995       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:56:32.238032       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:56:32.239678       1 config.go:199] "Starting service config controller"
	I0916 10:56:32.239727       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:56:32.239777       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:56:32.239783       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:56:32.240007       1 config.go:328] "Starting node config controller"
	I0916 10:56:32.240016       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:56:32.340062       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:56:32.340082       1 shared_informer.go:320] Caches are synced for service config
	I0916 10:56:32.340144       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [f11253e8ef61a01c3740b24f1b74855531922a5c71ae0705b35472b9baa28a46] <==
	I0916 10:58:34.155147       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:58:34.267226       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	E0916 10:58:34.267320       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:58:34.285784       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:58:34.285854       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:58:34.287753       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:58:34.288177       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:58:34.288207       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:58:34.289596       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:58:34.289816       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:58:34.289652       1 config.go:199] "Starting service config controller"
	I0916 10:58:34.289903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:58:34.289712       1 config.go:328] "Starting node config controller"
	I0916 10:58:34.289977       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:58:34.390019       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:58:34.390050       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:58:34.390025       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [5d29b7e4482f874fecde10cfcd42e99ca36d060f25d2e8e7a8110ea495ea8583] <==
	W0916 10:56:23.626494       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 10:56:23.626538       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:23.626572       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:56:23.626619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:23.626626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 10:56:23.626673       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:23.626691       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:56:23.626732       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:23.626711       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 10:56:23.626782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.460004       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 10:56:24.460050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.468721       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 10:56:24.468769       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.515374       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 10:56:24.515416       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.539117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 10:56:24.539157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.708195       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 10:56:24.708249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.711434       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 10:56:24.711474       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 10:56:24.728071       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 10:56:24.728136       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 10:56:25.122409       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ca0cc800d9c7855a343484f3d2f0ffc35459a84c699a3c4d1a4f9fc511b1b850] <==
	I0916 10:58:29.733121       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:58:31.727169       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:58:31.727213       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:58:31.727225       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:58:31.727234       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:58:31.822085       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:58:31.822120       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:58:31.825101       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:58:31.825732       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:58:31.829644       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:58:31.829688       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:58:31.930692       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 10:58:32 multinode-079070 kubelet[724]: E0916 10:58:32.845742     724 projected.go:194] Error preparing data for projected volume kube-api-access-49dwx for pod kube-system/kindnet-flmdv: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:58:32 multinode-079070 kubelet[724]: E0916 10:58:32.845763     724 projected.go:194] Error preparing data for projected volume kube-api-access-5nnpd for pod kube-system/kube-proxy-2vhmt: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:58:32 multinode-079070 kubelet[724]: E0916 10:58:32.845719     724 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:58:32 multinode-079070 kubelet[724]: E0916 10:58:32.845839     724 projected.go:194] Error preparing data for projected volume kube-api-access-nnfv2 for pod kube-system/coredns-7c65d6cfc9-ft9gh: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:58:32 multinode-079070 kubelet[724]: E0916 10:58:32.845839     724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91449e63-0ca3-4dc6-92ef-e3c5ab102dae-kube-api-access-49dwx podName:91449e63-0ca3-4dc6-92ef-e3c5ab102dae nodeName:}" failed. No retries permitted until 2024-09-16 10:58:33.345814023 +0000 UTC m=+5.737904302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-49dwx" (UniqueName: "kubernetes.io/projected/91449e63-0ca3-4dc6-92ef-e3c5ab102dae-kube-api-access-49dwx") pod "kindnet-flmdv" (UID: "91449e63-0ca3-4dc6-92ef-e3c5ab102dae") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:58:32 multinode-079070 kubelet[724]: E0916 10:58:32.845901     724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6f3faf85-04e9-4840-855d-dd1ef9d4e463-kube-api-access-5nnpd podName:6f3faf85-04e9-4840-855d-dd1ef9d4e463 nodeName:}" failed. No retries permitted until 2024-09-16 10:58:33.345880663 +0000 UTC m=+5.737970929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5nnpd" (UniqueName: "kubernetes.io/projected/6f3faf85-04e9-4840-855d-dd1ef9d4e463-kube-api-access-5nnpd") pod "kube-proxy-2vhmt" (UID: "6f3faf85-04e9-4840-855d-dd1ef9d4e463") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:58:32 multinode-079070 kubelet[724]: E0916 10:58:32.845914     724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8052b6a1-7257-44d4-a318-740afd039d2c-kube-api-access-nnfv2 podName:8052b6a1-7257-44d4-a318-740afd039d2c nodeName:}" failed. No retries permitted until 2024-09-16 10:58:33.345907132 +0000 UTC m=+5.737997386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nnfv2" (UniqueName: "kubernetes.io/projected/8052b6a1-7257-44d4-a318-740afd039d2c-kube-api-access-nnfv2") pod "coredns-7c65d6cfc9-ft9gh" (UID: "8052b6a1-7257-44d4-a318-740afd039d2c") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:58:32 multinode-079070 kubelet[724]: E0916 10:58:32.845952     724 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:58:32 multinode-079070 kubelet[724]: E0916 10:58:32.845974     724 projected.go:194] Error preparing data for projected volume kube-api-access-8vbbr for pod kube-system/storage-provisioner: failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:58:32 multinode-079070 kubelet[724]: E0916 10:58:32.846031     724 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/43862f2e-c773-468d-ab03-8b0bc0633ad4-kube-api-access-8vbbr podName:43862f2e-c773-468d-ab03-8b0bc0633ad4 nodeName:}" failed. No retries permitted until 2024-09-16 10:58:33.345999266 +0000 UTC m=+5.738089533 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8vbbr" (UniqueName: "kubernetes.io/projected/43862f2e-c773-468d-ab03-8b0bc0633ad4-kube-api-access-8vbbr") pod "storage-provisioner" (UID: "43862f2e-c773-468d-ab03-8b0bc0633ad4") : failed to sync configmap cache: timed out waiting for the condition
	Sep 16 10:58:37 multinode-079070 kubelet[724]: E0916 10:58:37.770417     724 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 10:58:37 multinode-079070 kubelet[724]: E0916 10:58:37.770463     724 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 10:58:41 multinode-079070 kubelet[724]: I0916 10:58:41.783751     724 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 10:58:47 multinode-079070 kubelet[724]: E0916 10:58:47.788445     724 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 10:58:47 multinode-079070 kubelet[724]: E0916 10:58:47.788483     724 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 10:58:57 multinode-079070 kubelet[724]: E0916 10:58:57.806693     724 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 10:58:57 multinode-079070 kubelet[724]: E0916 10:58:57.806730     724 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 10:59:04 multinode-079070 kubelet[724]: I0916 10:59:04.999764     724 scope.go:117] "RemoveContainer" containerID="269042fd7e0657021f86f96623c9937f1e0659eae415545c3508c149871ca048"
	Sep 16 10:59:05 multinode-079070 kubelet[724]: I0916 10:59:05.000182     724 scope.go:117] "RemoveContainer" containerID="b1134a94f20ca5932c1e053b207a4740304820db6ac8adbb7c2968f5a686c406"
	Sep 16 10:59:05 multinode-079070 kubelet[724]: E0916 10:59:05.000395     724 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(43862f2e-c773-468d-ab03-8b0bc0633ad4)\"" pod="kube-system/storage-provisioner" podUID="43862f2e-c773-468d-ab03-8b0bc0633ad4"
	Sep 16 10:59:07 multinode-079070 kubelet[724]: E0916 10:59:07.821529     724 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 10:59:07 multinode-079070 kubelet[724]: E0916 10:59:07.821582     724 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 10:59:17 multinode-079070 kubelet[724]: E0916 10:59:17.839730     724 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 10:59:17 multinode-079070 kubelet[724]: E0916 10:59:17.839825     724 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 10:59:19 multinode-079070 kubelet[724]: I0916 10:59:19.727867     724 scope.go:117] "RemoveContainer" containerID="b1134a94f20ca5932c1e053b207a4740304820db6ac8adbb7c2968f5a686c406"
	

                                                
                                                
-- /stdout --
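A note on the kube-scheduler output in the dump above: the burst of "forbidden" list/watch errors from system:kube-scheduler is typical startup noise after a restart. The scheduler's informers begin listing resources before the apiserver has finished serving the built-in RBAC bindings, and the errors stop once "Caches are synced" is logged. A minimal way to confirm the permissions did converge, assuming a working kubectl pointed at this profile (the verb/resource pairs below mirror two of the errors in the log):

	# Impersonate the scheduler and probe resources it failed to list during startup.
	# "yes" means the RBAC bindings are in place and the errors were transient.
	$ kubectl --context multinode-079070 auth can-i list persistentvolumeclaims --as=system:kube-scheduler
	$ kubectl --context multinode-079070 auth can-i watch storageclasses.storage.k8s.io --as=system:kube-scheduler
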
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-079070 -n multinode-079070
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-079070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context multinode-079070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (504.191µs)
helpers_test.go:263: kubectl --context multinode-079070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiNode/serial/DeleteNode (7.78s)
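
The failure above is a host-environment problem, not a cluster problem: every kubectl invocation dies with "fork/exec /usr/local/bin/kubectl: exec format error", which almost always means the binary was built for a different architecture than the host (or the download was truncated). A minimal triage sketch, assuming a linux/amd64 agent like ubuntu-20-agent-8; the dl.k8s.io URL is the documented kubectl download path:

	# Compare the binary's architecture against the host's.
	$ file /usr/local/bin/kubectl
	$ uname -m

	# If they disagree, or the file is empty or corrupt, re-fetch a matching build.
	$ curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
	$ sudo install -m 0755 kubectl /usr/local/bin/kubectl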

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-079070 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-079070 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (50.872630145s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:396: (dbg) Non-zero exit: kubectl get nodes: fork/exec /usr/local/bin/kubectl: exec format error (564.586µs)
multinode_test.go:398: failed to run kubectl get nodes. args "kubectl get nodes" : fork/exec /usr/local/bin/kubectl: exec format error
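This is the same broken host kubectl seen in the DeleteNode failure above; the cluster itself restarted cleanly. One way to sidestep a suspect host binary entirely is minikube's bundled kubectl passthrough, which downloads and runs a kubectl matching the cluster's Kubernetes version (a sketch, using the same minikube binary and profile as the test):

	$ out/minikube-linux-amd64 -p multinode-079070 kubectl -- get nodes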
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-079070
helpers_test.go:235: (dbg) docker inspect multinode-079070:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2",
	        "Created": "2024-09-16T10:56:12.200290899Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 182529,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T10:59:58.110364567Z",
	            "FinishedAt": "2024-09-16T10:59:57.260932583Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/hosts",
	        "LogPath": "/var/lib/docker/containers/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2-json.log",
	        "Name": "/multinode-079070",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-079070:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-079070",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e5e11918fc04456f16b46e3608935d02c293129393f819ada075321189caa2ad/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-079070",
	                "Source": "/var/lib/docker/volumes/multinode-079070/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-079070",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-079070",
	                "name.minikube.sigs.k8s.io": "multinode-079070",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1405ec96afa1abbf18dca3bc3d650690f2a4c2e9285710e896acb4dab590e888",
	            "SandboxKey": "/var/run/docker/netns/1405ec96afa1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32943"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32944"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32947"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32945"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32946"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-079070": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null,
	                    "NetworkID": "49585fce923a48b44636990469ad4decadcc5b1b88fcdd63ced7ebb1e3971b52",
	                    "EndpointID": "570aa40b6ef6602e2ea81c3559051c3e0e11083633c2c1736753bfdb0ba7cec9",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "multinode-079070",
	                        "1f3af6522540"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-079070 -n multinode-079070
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-079070 logs -n 25: (1.517772682s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-079070 cp multinode-079070-m02:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070:/home/docker/cp-test_multinode-079070-m02_multinode-079070.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n multinode-079070 sudo cat                                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /home/docker/cp-test_multinode-079070-m02_multinode-079070.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp multinode-079070-m02:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03:/home/docker/cp-test_multinode-079070-m02_multinode-079070-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n multinode-079070-m03 sudo cat                                   | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /home/docker/cp-test_multinode-079070-m02_multinode-079070-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp testdata/cp-test.txt                                                | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp multinode-079070-m03:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile475858721/001/cp-test_multinode-079070-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp multinode-079070-m03:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070:/home/docker/cp-test_multinode-079070-m03_multinode-079070.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n multinode-079070 sudo cat                                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /home/docker/cp-test_multinode-079070-m03_multinode-079070.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-079070 cp multinode-079070-m03:/home/docker/cp-test.txt                       | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m02:/home/docker/cp-test_multinode-079070-m03_multinode-079070-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n                                                                 | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | multinode-079070-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-079070 ssh -n multinode-079070-m02 sudo cat                                   | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | /home/docker/cp-test_multinode-079070-m03_multinode-079070-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-079070 node stop m03                                                          | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	| node    | multinode-079070 node start                                                             | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:57 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-079070                                                                | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC |                     |
	| stop    | -p multinode-079070                                                                     | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:57 UTC | 16 Sep 24 10:58 UTC |
	| start   | -p multinode-079070                                                                     | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:58 UTC | 16 Sep 24 10:59 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-079070                                                                | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:59 UTC |                     |
	| node    | multinode-079070 node delete                                                            | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:59 UTC | 16 Sep 24 10:59 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-079070 stop                                                                   | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:59 UTC | 16 Sep 24 10:59 UTC |
	| start   | -p multinode-079070                                                                     | multinode-079070 | jenkins | v1.34.0 | 16 Sep 24 10:59 UTC | 16 Sep 24 11:00 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	|         | --driver=docker                                                                         |                  |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:59:57
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:59:57.756320  182212 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:59:57.756587  182212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:59:57.756596  182212 out.go:358] Setting ErrFile to fd 2...
	I0916 10:59:57.756601  182212 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:59:57.756806  182212 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:59:57.757382  182212 out.go:352] Setting JSON to false
	I0916 10:59:57.758402  182212 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2542,"bootTime":1726481856,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:59:57.758518  182212 start.go:139] virtualization: kvm guest
	I0916 10:59:57.761345  182212 out.go:177] * [multinode-079070] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:59:57.763092  182212 notify.go:220] Checking for updates...
	I0916 10:59:57.763104  182212 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:59:57.764696  182212 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:59:57.766295  182212 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:59:57.767779  182212 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:59:57.769415  182212 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:59:57.771005  182212 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:59:57.772711  182212 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:59:57.773149  182212 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:59:57.795707  182212 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:59:57.795820  182212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:59:57.843408  182212 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:59:57.833932703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:59:57.843515  182212 docker.go:318] overlay module found
	I0916 10:59:57.845709  182212 out.go:177] * Using the docker driver based on existing profile
	I0916 10:59:57.846865  182212 start.go:297] selected driver: docker
	I0916 10:59:57.846877  182212 start.go:901] validating driver "docker" against &{Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:59:57.847021  182212 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:59:57.847093  182212 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:59:57.894910  182212 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:42 SystemTime:2024-09-16 10:59:57.88470679 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:59:57.895833  182212 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 10:59:57.895876  182212 cni.go:84] Creating CNI manager for ""
	I0916 10:59:57.895923  182212 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0916 10:59:57.896003  182212 start.go:340] cluster config:
	{Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:59:57.898738  182212 out.go:177] * Starting "multinode-079070" primary control-plane node in "multinode-079070" cluster
	I0916 10:59:57.899961  182212 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:59:57.901300  182212 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:59:57.902602  182212 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:59:57.902644  182212 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:59:57.902658  182212 cache.go:56] Caching tarball of preloaded images
	I0916 10:59:57.902690  182212 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:59:57.902747  182212 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 10:59:57.902761  182212 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 10:59:57.902922  182212 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	W0916 10:59:57.922058  182212 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 10:59:57.922078  182212 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:59:57.922174  182212 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:59:57.922194  182212 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:59:57.922202  182212 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:59:57.922213  182212 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:59:57.922223  182212 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 10:59:57.923408  182212 image.go:273] response: 
	I0916 10:59:57.980388  182212 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 10:59:57.980431  182212 cache.go:194] Successfully downloaded all kic artifacts
	I0916 10:59:57.980477  182212 start.go:360] acquireMachinesLock for multinode-079070: {Name:mka8d048a8e19e1d22189c5e81470c7f2336c084 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 10:59:57.980558  182212 start.go:364] duration metric: took 55.09µs to acquireMachinesLock for "multinode-079070"
	I0916 10:59:57.980579  182212 start.go:96] Skipping create...Using existing machine configuration
	I0916 10:59:57.980585  182212 fix.go:54] fixHost starting: 
	I0916 10:59:57.980876  182212 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:59:57.998318  182212 fix.go:112] recreateIfNeeded on multinode-079070: state=Stopped err=<nil>
	W0916 10:59:57.998345  182212 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 10:59:58.000579  182212 out.go:177] * Restarting existing docker container for "multinode-079070" ...
	I0916 10:59:58.001956  182212 cli_runner.go:164] Run: docker start multinode-079070
	I0916 10:59:58.270697  182212 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:59:58.290431  182212 kic.go:430] container "multinode-079070" state is running.
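
The restart path above polls the container with `docker container inspect --format={{.State.Status}}`. A small self-contained Go sketch of the same probe (command and format string taken from the log; the containerState helper name is an assumption):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerState shells out to the docker CLI the same way the log above
    // does and returns the container's state string ("running", "exited", ...).
    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		name, "--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return "", fmt.Errorf("inspect %s: %w", name, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	state, err := containerState("multinode-079070")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("state:", state) // e.g. "running" after `docker start`
    }
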
	I0916 10:59:58.290838  182212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070
	I0916 10:59:58.308621  182212 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 10:59:58.308834  182212 machine.go:93] provisionDockerMachine start ...
	I0916 10:59:58.308895  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:59:58.327001  182212 main.go:141] libmachine: Using SSH client type: native
	I0916 10:59:58.327268  182212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32943 <nil> <nil>}
	I0916 10:59:58.327287  182212 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 10:59:58.328068  182212 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54056->127.0.0.1:32943: read: connection reset by peer
	I0916 11:00:01.459211  182212 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070
	
	I0916 11:00:01.459240  182212 ubuntu.go:169] provisioning hostname "multinode-079070"
	I0916 11:00:01.459326  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 11:00:01.477157  182212 main.go:141] libmachine: Using SSH client type: native
	I0916 11:00:01.477377  182212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32943 <nil> <nil>}
	I0916 11:00:01.477396  182212 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-079070 && echo "multinode-079070" | sudo tee /etc/hostname
	I0916 11:00:01.622787  182212 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070
	
	I0916 11:00:01.622847  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 11:00:01.639682  182212 main.go:141] libmachine: Using SSH client type: native
	I0916 11:00:01.639890  182212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32943 <nil> <nil>}
	I0916 11:00:01.639908  182212 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-079070' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-079070/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-079070' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:00:01.771807  182212 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:00:01.771837  182212 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:00:01.771863  182212 ubuntu.go:177] setting up certificates
	I0916 11:00:01.771874  182212 provision.go:84] configureAuth start
	I0916 11:00:01.771930  182212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070
	I0916 11:00:01.788805  182212 provision.go:143] copyHostCerts
	I0916 11:00:01.788842  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:00:01.788871  182212 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:00:01.788877  182212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:00:01.788937  182212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:00:01.789029  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:00:01.789047  182212 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:00:01.789052  182212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:00:01.789077  182212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:00:01.789143  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:00:01.789160  182212 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:00:01.789166  182212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:00:01.789191  182212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:00:01.789238  182212 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.multinode-079070 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-079070]
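
The server-cert generation above issues a certificate whose SANs cover the node's IPs and DNS names. A sketch of issuing such a cert with crypto/x509, assuming the SAN list and 26280h expiry from the log; a real signer would chain to the CA key pair (certs/ca.pem), whereas this example self-signs purely to stay self-contained:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-079070"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		// SANs copied from the san=[...] list in the log above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.2")},
    		DNSNames:    []string{"localhost", "minikube", "multinode-079070"},
    		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed here (template doubles as parent); minikube signs with its CA.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("issued server cert, %d bytes DER\n", len(der))
    }
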
	I0916 11:00:02.179698  182212 provision.go:177] copyRemoteCerts
	I0916 11:00:02.179796  182212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:00:02.179833  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 11:00:02.196671  182212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 11:00:02.292089  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 11:00:02.292180  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:00:02.313736  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 11:00:02.313803  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0916 11:00:02.334367  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 11:00:02.334433  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:00:02.354751  182212 provision.go:87] duration metric: took 582.861128ms to configureAuth
	I0916 11:00:02.354781  182212 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:00:02.354966  182212 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:00:02.354977  182212 machine.go:96] duration metric: took 4.046131915s to provisionDockerMachine
	I0916 11:00:02.354984  182212 start.go:293] postStartSetup for "multinode-079070" (driver="docker")
	I0916 11:00:02.354993  182212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:00:02.355039  182212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:00:02.355073  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 11:00:02.372055  182212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 11:00:02.468473  182212 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:00:02.471313  182212 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 11:00:02.471336  182212 command_runner.go:130] > NAME="Ubuntu"
	I0916 11:00:02.471345  182212 command_runner.go:130] > VERSION_ID="22.04"
	I0916 11:00:02.471353  182212 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 11:00:02.471361  182212 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 11:00:02.471365  182212 command_runner.go:130] > ID=ubuntu
	I0916 11:00:02.471369  182212 command_runner.go:130] > ID_LIKE=debian
	I0916 11:00:02.471373  182212 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 11:00:02.471377  182212 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 11:00:02.471385  182212 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 11:00:02.471393  182212 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 11:00:02.471397  182212 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 11:00:02.471468  182212 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:00:02.471504  182212 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:00:02.471520  182212 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:00:02.471530  182212 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:00:02.471542  182212 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:00:02.471595  182212 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:00:02.471668  182212 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:00:02.471680  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 11:00:02.471773  182212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:00:02.479468  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:00:02.501105  182212 start.go:296] duration metric: took 146.10752ms for postStartSetup
	I0916 11:00:02.501175  182212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:00:02.501213  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 11:00:02.517926  182212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 11:00:02.612242  182212 command_runner.go:130] > 31%
	I0916 11:00:02.612326  182212 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:00:02.616011  182212 command_runner.go:130] > 202G
	I0916 11:00:02.616270  182212 fix.go:56] duration metric: took 4.635681926s for fixHost
	I0916 11:00:02.616294  182212 start.go:83] releasing machines lock for "multinode-079070", held for 4.635722976s
	I0916 11:00:02.616375  182212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070
	I0916 11:00:02.632823  182212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:00:02.632850  182212 ssh_runner.go:195] Run: cat /version.json
	I0916 11:00:02.632895  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 11:00:02.632913  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 11:00:02.650434  182212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 11:00:02.651126  182212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32943 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 11:00:02.813861  182212 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 11:00:02.813973  182212 command_runner.go:130] > {"iso_version": "v1.34.0-1726281733-19643", "kicbase_version": "v0.0.45-1726358845-19644", "minikube_version": "v1.34.0", "commit": "f890713149c79cf50e25c13e6a5c0470aa0f0450"}
	I0916 11:00:02.814078  182212 ssh_runner.go:195] Run: systemctl --version
	I0916 11:00:02.818071  182212 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0916 11:00:02.818111  182212 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0916 11:00:02.818180  182212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:00:02.821850  182212 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0916 11:00:02.821868  182212 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0916 11:00:02.821874  182212 command_runner.go:130] > Device: 35h/53d	Inode: 809402      Links: 1
	I0916 11:00:02.821881  182212 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:00:02.821886  182212 command_runner.go:130] > Access: 2024-09-16 10:59:58.649522308 +0000
	I0916 11:00:02.821891  182212 command_runner.go:130] > Modify: 2024-09-16 10:58:26.533403911 +0000
	I0916 11:00:02.821895  182212 command_runner.go:130] > Change: 2024-09-16 10:58:26.533403911 +0000
	I0916 11:00:02.821900  182212 command_runner.go:130] >  Birth: 2024-09-16 10:58:26.533403911 +0000
	I0916 11:00:02.822134  182212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:00:02.838621  182212 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:00:02.838685  182212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:00:02.847335  182212 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 11:00:02.847359  182212 start.go:495] detecting cgroup driver to use...
	I0916 11:00:02.847388  182212 detect.go:187] detected "cgroupfs" cgroup driver on host os
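
The log does not show how detect.go decides on "cgroupfs" here; one plausible probe, offered only as a sketch, is to ask the host docker daemon, which reports either "cgroupfs" or "systemd":

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// docker info exposes the daemon's cgroup driver via its format template.
    	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
    	if err != nil {
    		fmt.Println("docker info:", err)
    		return
    	}
    	fmt.Println("host cgroup driver:", strings.TrimSpace(string(out)))
    }
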
	I0916 11:00:02.847431  182212 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:00:02.859839  182212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:00:02.870811  182212 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:00:02.870863  182212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:00:02.882629  182212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:00:02.892981  182212 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:00:02.975431  182212 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:00:03.046955  182212 docker.go:233] disabling docker service ...
	I0916 11:00:03.047010  182212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:00:03.058191  182212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:00:03.068779  182212 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:00:03.139312  182212 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:00:03.219125  182212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:00:03.229434  182212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:00:03.243512  182212 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0916 11:00:03.243596  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:00:03.252118  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:00:03.260886  182212 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:00:03.260954  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:00:03.270123  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:00:03.278978  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:00:03.287687  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:00:03.296384  182212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:00:03.304496  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:00:03.313216  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:00:03.321732  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 11:00:03.330451  182212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:00:03.337100  182212 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 11:00:03.337731  182212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:00:03.344989  182212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:00:03.415234  182212 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 11:00:03.517190  182212 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:00:03.517263  182212 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:00:03.520796  182212 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0916 11:00:03.520817  182212 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 11:00:03.520834  182212 command_runner.go:130] > Device: 40h/64d	Inode: 160         Links: 1
	I0916 11:00:03.520844  182212 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:00:03.520850  182212 command_runner.go:130] > Access: 2024-09-16 11:00:03.473947495 +0000
	I0916 11:00:03.520857  182212 command_runner.go:130] > Modify: 2024-09-16 11:00:03.473947495 +0000
	I0916 11:00:03.520862  182212 command_runner.go:130] > Change: 2024-09-16 11:00:03.473947495 +0000
	I0916 11:00:03.520866  182212 command_runner.go:130] >  Birth: -
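
"Will wait 60s for socket path" above is a poll-with-deadline on the containerd socket. A minimal sketch of such a wait loop (the 500ms interval and the waitForSocket name are assumptions, not minikube's exact code):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the path exists and is a unix socket,
    // or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("containerd socket is up")
    }
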
	I0916 11:00:03.520883  182212 start.go:563] Will wait 60s for crictl version
	I0916 11:00:03.520925  182212 ssh_runner.go:195] Run: which crictl
	I0916 11:00:03.523946  182212 command_runner.go:130] > /usr/bin/crictl
	I0916 11:00:03.524029  182212 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:00:03.556862  182212 command_runner.go:130] > Version:  0.1.0
	I0916 11:00:03.556888  182212 command_runner.go:130] > RuntimeName:  containerd
	I0916 11:00:03.556897  182212 command_runner.go:130] > RuntimeVersion:  1.7.22
	I0916 11:00:03.556914  182212 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 11:00:03.556932  182212 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:00:03.556975  182212 ssh_runner.go:195] Run: containerd --version
	I0916 11:00:03.579185  182212 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 11:00:03.579273  182212 ssh_runner.go:195] Run: containerd --version
	I0916 11:00:03.599012  182212 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 11:00:03.602628  182212 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:00:03.603954  182212 cli_runner.go:164] Run: docker network inspect multinode-079070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:00:03.620982  182212 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 11:00:03.624439  182212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:00:03.634610  182212 kubeadm.go:883] updating cluster {Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:00:03.634780  182212 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:00:03.634831  182212 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:00:03.664310  182212 command_runner.go:130] > {
	I0916 11:00:03.664332  182212 command_runner.go:130] >   "images": [
	I0916 11:00:03.664336  182212 command_runner.go:130] >     {
	I0916 11:00:03.664345  182212 command_runner.go:130] >       "id": "sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 11:00:03.664353  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.664366  182212 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 11:00:03.664372  182212 command_runner.go:130] >       ],
	I0916 11:00:03.664379  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.664387  182212 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 11:00:03.664393  182212 command_runner.go:130] >       ],
	I0916 11:00:03.664398  182212 command_runner.go:130] >       "size": "36793393",
	I0916 11:00:03.664402  182212 command_runner.go:130] >       "uid": null,
	I0916 11:00:03.664406  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.664410  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.664414  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.664418  182212 command_runner.go:130] >     },
	I0916 11:00:03.664422  182212 command_runner.go:130] >     {
	I0916 11:00:03.664432  182212 command_runner.go:130] >       "id": "sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 11:00:03.664441  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.664449  182212 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 11:00:03.664458  182212 command_runner.go:130] >       ],
	I0916 11:00:03.664465  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.664476  182212 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 11:00:03.664481  182212 command_runner.go:130] >       ],
	I0916 11:00:03.664486  182212 command_runner.go:130] >       "size": "725911",
	I0916 11:00:03.664495  182212 command_runner.go:130] >       "uid": null,
	I0916 11:00:03.664499  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.664505  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.664509  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.664513  182212 command_runner.go:130] >     },
	I0916 11:00:03.664516  182212 command_runner.go:130] >     {
	I0916 11:00:03.664525  182212 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 11:00:03.664534  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.664547  182212 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 11:00:03.664556  182212 command_runner.go:130] >       ],
	I0916 11:00:03.664568  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.664579  182212 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0916 11:00:03.664588  182212 command_runner.go:130] >       ],
	I0916 11:00:03.664596  182212 command_runner.go:130] >       "size": "9058936",
	I0916 11:00:03.664605  182212 command_runner.go:130] >       "uid": null,
	I0916 11:00:03.664615  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.664622  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.664632  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.664637  182212 command_runner.go:130] >     },
	I0916 11:00:03.664645  182212 command_runner.go:130] >     {
	I0916 11:00:03.664656  182212 command_runner.go:130] >       "id": "sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 11:00:03.664664  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.664672  182212 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 11:00:03.664680  182212 command_runner.go:130] >       ],
	I0916 11:00:03.664687  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.664701  182212 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"
	I0916 11:00:03.664710  182212 command_runner.go:130] >       ],
	I0916 11:00:03.664717  182212 command_runner.go:130] >       "size": "18562039",
	I0916 11:00:03.664726  182212 command_runner.go:130] >       "uid": null,
	I0916 11:00:03.664737  182212 command_runner.go:130] >       "username": "nonroot",
	I0916 11:00:03.664743  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.664752  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.664758  182212 command_runner.go:130] >     },
	I0916 11:00:03.664765  182212 command_runner.go:130] >     {
	I0916 11:00:03.664771  182212 command_runner.go:130] >       "id": "sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 11:00:03.664779  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.664787  182212 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 11:00:03.664795  182212 command_runner.go:130] >       ],
	I0916 11:00:03.664802  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.664816  182212 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 11:00:03.664832  182212 command_runner.go:130] >       ],
	I0916 11:00:03.664842  182212 command_runner.go:130] >       "size": "56909194",
	I0916 11:00:03.664851  182212 command_runner.go:130] >       "uid": {
	I0916 11:00:03.664860  182212 command_runner.go:130] >         "value": "0"
	I0916 11:00:03.664867  182212 command_runner.go:130] >       },
	I0916 11:00:03.664871  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.664881  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.664891  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.664900  182212 command_runner.go:130] >     },
	I0916 11:00:03.664908  182212 command_runner.go:130] >     {
	I0916 11:00:03.664918  182212 command_runner.go:130] >       "id": "sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 11:00:03.664927  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.664938  182212 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 11:00:03.664946  182212 command_runner.go:130] >       ],
	I0916 11:00:03.664954  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.664962  182212 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 11:00:03.664971  182212 command_runner.go:130] >       ],
	I0916 11:00:03.664980  182212 command_runner.go:130] >       "size": "28047142",
	I0916 11:00:03.664987  182212 command_runner.go:130] >       "uid": {
	I0916 11:00:03.664997  182212 command_runner.go:130] >         "value": "0"
	I0916 11:00:03.665005  182212 command_runner.go:130] >       },
	I0916 11:00:03.665014  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.665024  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.665033  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.665041  182212 command_runner.go:130] >     },
	I0916 11:00:03.665048  182212 command_runner.go:130] >     {
	I0916 11:00:03.665054  182212 command_runner.go:130] >       "id": "sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 11:00:03.665063  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.665074  182212 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 11:00:03.665085  182212 command_runner.go:130] >       ],
	I0916 11:00:03.665094  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.665109  182212 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"
	I0916 11:00:03.665118  182212 command_runner.go:130] >       ],
	I0916 11:00:03.665127  182212 command_runner.go:130] >       "size": "26221554",
	I0916 11:00:03.665138  182212 command_runner.go:130] >       "uid": {
	I0916 11:00:03.665144  182212 command_runner.go:130] >         "value": "0"
	I0916 11:00:03.665149  182212 command_runner.go:130] >       },
	I0916 11:00:03.665154  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.665159  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.665166  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.665174  182212 command_runner.go:130] >     },
	I0916 11:00:03.665182  182212 command_runner.go:130] >     {
	I0916 11:00:03.665193  182212 command_runner.go:130] >       "id": "sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 11:00:03.665203  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.665213  182212 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 11:00:03.665223  182212 command_runner.go:130] >       ],
	I0916 11:00:03.665231  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.665245  182212 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"
	I0916 11:00:03.665268  182212 command_runner.go:130] >       ],
	I0916 11:00:03.665280  182212 command_runner.go:130] >       "size": "30211884",
	I0916 11:00:03.665289  182212 command_runner.go:130] >       "uid": null,
	I0916 11:00:03.665296  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.665306  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.665313  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.665319  182212 command_runner.go:130] >     },
	I0916 11:00:03.665327  182212 command_runner.go:130] >     {
	I0916 11:00:03.665339  182212 command_runner.go:130] >       "id": "sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 11:00:03.665348  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.665356  182212 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 11:00:03.665365  182212 command_runner.go:130] >       ],
	I0916 11:00:03.665373  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.665388  182212 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"
	I0916 11:00:03.665396  182212 command_runner.go:130] >       ],
	I0916 11:00:03.665405  182212 command_runner.go:130] >       "size": "20177215",
	I0916 11:00:03.665413  182212 command_runner.go:130] >       "uid": {
	I0916 11:00:03.665421  182212 command_runner.go:130] >         "value": "0"
	I0916 11:00:03.665430  182212 command_runner.go:130] >       },
	I0916 11:00:03.665437  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.665446  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.665454  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.665462  182212 command_runner.go:130] >     },
	I0916 11:00:03.665468  182212 command_runner.go:130] >     {
	I0916 11:00:03.665496  182212 command_runner.go:130] >       "id": "sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 11:00:03.665505  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.665513  182212 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 11:00:03.665523  182212 command_runner.go:130] >       ],
	I0916 11:00:03.665531  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.665546  182212 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 11:00:03.665554  182212 command_runner.go:130] >       ],
	I0916 11:00:03.665566  182212 command_runner.go:130] >       "size": "320368",
	I0916 11:00:03.665575  182212 command_runner.go:130] >       "uid": {
	I0916 11:00:03.665582  182212 command_runner.go:130] >         "value": "65535"
	I0916 11:00:03.665591  182212 command_runner.go:130] >       },
	I0916 11:00:03.665599  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.665608  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.665615  182212 command_runner.go:130] >       "pinned": true
	I0916 11:00:03.665623  182212 command_runner.go:130] >     }
	I0916 11:00:03.665629  182212 command_runner.go:130] >   ]
	I0916 11:00:03.665637  182212 command_runner.go:130] > }
	I0916 11:00:03.666540  182212 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:00:03.666563  182212 containerd.go:534] Images already preloaded, skipping extraction
	I0916 11:00:03.666621  182212 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:00:03.698151  182212 command_runner.go:130] > {
	I0916 11:00:03.698182  182212 command_runner.go:130] >   "images": [
	I0916 11:00:03.698187  182212 command_runner.go:130] >     {
	I0916 11:00:03.698199  182212 command_runner.go:130] >       "id": "sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I0916 11:00:03.698207  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.698221  182212 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I0916 11:00:03.698226  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698250  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.698263  182212 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I0916 11:00:03.698268  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698273  182212 command_runner.go:130] >       "size": "36793393",
	I0916 11:00:03.698279  182212 command_runner.go:130] >       "uid": null,
	I0916 11:00:03.698283  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.698289  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.698295  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.698301  182212 command_runner.go:130] >     },
	I0916 11:00:03.698308  182212 command_runner.go:130] >     {
	I0916 11:00:03.698317  182212 command_runner.go:130] >       "id": "sha256:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0916 11:00:03.698321  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.698326  182212 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0916 11:00:03.698330  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698334  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.698342  182212 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0916 11:00:03.698346  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698350  182212 command_runner.go:130] >       "size": "725911",
	I0916 11:00:03.698357  182212 command_runner.go:130] >       "uid": null,
	I0916 11:00:03.698361  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.698364  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.698368  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.698372  182212 command_runner.go:130] >     },
	I0916 11:00:03.698376  182212 command_runner.go:130] >     {
	I0916 11:00:03.698382  182212 command_runner.go:130] >       "id": "sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0916 11:00:03.698388  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.698398  182212 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0916 11:00:03.698404  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698409  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.698418  182212 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I0916 11:00:03.698422  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698426  182212 command_runner.go:130] >       "size": "9058936",
	I0916 11:00:03.698429  182212 command_runner.go:130] >       "uid": null,
	I0916 11:00:03.698434  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.698443  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.698446  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.698452  182212 command_runner.go:130] >     },
	I0916 11:00:03.698455  182212 command_runner.go:130] >     {
	I0916 11:00:03.698462  182212 command_runner.go:130] >       "id": "sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I0916 11:00:03.698468  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.698473  182212 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I0916 11:00:03.698478  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698484  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.698493  182212 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"
	I0916 11:00:03.698498  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698502  182212 command_runner.go:130] >       "size": "18562039",
	I0916 11:00:03.698508  182212 command_runner.go:130] >       "uid": null,
	I0916 11:00:03.698512  182212 command_runner.go:130] >       "username": "nonroot",
	I0916 11:00:03.698516  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.698521  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.698524  182212 command_runner.go:130] >     },
	I0916 11:00:03.698527  182212 command_runner.go:130] >     {
	I0916 11:00:03.698533  182212 command_runner.go:130] >       "id": "sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I0916 11:00:03.698539  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.698543  182212 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I0916 11:00:03.698547  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698551  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.698557  182212 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I0916 11:00:03.698566  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698570  182212 command_runner.go:130] >       "size": "56909194",
	I0916 11:00:03.698576  182212 command_runner.go:130] >       "uid": {
	I0916 11:00:03.698580  182212 command_runner.go:130] >         "value": "0"
	I0916 11:00:03.698583  182212 command_runner.go:130] >       },
	I0916 11:00:03.698587  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.698593  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.698598  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.698603  182212 command_runner.go:130] >     },
	I0916 11:00:03.698606  182212 command_runner.go:130] >     {
	I0916 11:00:03.698613  182212 command_runner.go:130] >       "id": "sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I0916 11:00:03.698618  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.698623  182212 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I0916 11:00:03.698627  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698631  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.698638  182212 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I0916 11:00:03.698641  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698646  182212 command_runner.go:130] >       "size": "28047142",
	I0916 11:00:03.698649  182212 command_runner.go:130] >       "uid": {
	I0916 11:00:03.698653  182212 command_runner.go:130] >         "value": "0"
	I0916 11:00:03.698656  182212 command_runner.go:130] >       },
	I0916 11:00:03.698660  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.698664  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.698667  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.698670  182212 command_runner.go:130] >     },
	I0916 11:00:03.698672  182212 command_runner.go:130] >     {
	I0916 11:00:03.698679  182212 command_runner.go:130] >       "id": "sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I0916 11:00:03.698682  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.698687  182212 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I0916 11:00:03.698690  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698694  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.698701  182212 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"
	I0916 11:00:03.698704  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698708  182212 command_runner.go:130] >       "size": "26221554",
	I0916 11:00:03.698711  182212 command_runner.go:130] >       "uid": {
	I0916 11:00:03.698715  182212 command_runner.go:130] >         "value": "0"
	I0916 11:00:03.698718  182212 command_runner.go:130] >       },
	I0916 11:00:03.698722  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.698725  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.698731  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.698734  182212 command_runner.go:130] >     },
	I0916 11:00:03.698737  182212 command_runner.go:130] >     {
	I0916 11:00:03.698744  182212 command_runner.go:130] >       "id": "sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I0916 11:00:03.698748  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.698753  182212 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I0916 11:00:03.698759  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698763  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.698770  182212 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"
	I0916 11:00:03.698775  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698779  182212 command_runner.go:130] >       "size": "30211884",
	I0916 11:00:03.698792  182212 command_runner.go:130] >       "uid": null,
	I0916 11:00:03.698798  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.698802  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.698806  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.698809  182212 command_runner.go:130] >     },
	I0916 11:00:03.698813  182212 command_runner.go:130] >     {
	I0916 11:00:03.698819  182212 command_runner.go:130] >       "id": "sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I0916 11:00:03.698825  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.698830  182212 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I0916 11:00:03.698836  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698840  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.698847  182212 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"
	I0916 11:00:03.698853  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698857  182212 command_runner.go:130] >       "size": "20177215",
	I0916 11:00:03.698863  182212 command_runner.go:130] >       "uid": {
	I0916 11:00:03.698867  182212 command_runner.go:130] >         "value": "0"
	I0916 11:00:03.698870  182212 command_runner.go:130] >       },
	I0916 11:00:03.698875  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.698881  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.698885  182212 command_runner.go:130] >       "pinned": false
	I0916 11:00:03.698888  182212 command_runner.go:130] >     },
	I0916 11:00:03.698892  182212 command_runner.go:130] >     {
	I0916 11:00:03.698906  182212 command_runner.go:130] >       "id": "sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I0916 11:00:03.698912  182212 command_runner.go:130] >       "repoTags": [
	I0916 11:00:03.698917  182212 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I0916 11:00:03.698923  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698926  182212 command_runner.go:130] >       "repoDigests": [
	I0916 11:00:03.698945  182212 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I0916 11:00:03.698951  182212 command_runner.go:130] >       ],
	I0916 11:00:03.698955  182212 command_runner.go:130] >       "size": "320368",
	I0916 11:00:03.698959  182212 command_runner.go:130] >       "uid": {
	I0916 11:00:03.698964  182212 command_runner.go:130] >         "value": "65535"
	I0916 11:00:03.698968  182212 command_runner.go:130] >       },
	I0916 11:00:03.698973  182212 command_runner.go:130] >       "username": "",
	I0916 11:00:03.698979  182212 command_runner.go:130] >       "spec": null,
	I0916 11:00:03.698982  182212 command_runner.go:130] >       "pinned": true
	I0916 11:00:03.698986  182212 command_runner.go:130] >     }
	I0916 11:00:03.698992  182212 command_runner.go:130] >   ]
	I0916 11:00:03.698994  182212 command_runner.go:130] > }
	I0916 11:00:03.699100  182212 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:00:03.699110  182212 cache_images.go:84] Images are preloaded, skipping loading
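
The "Images are preloaded" decision above comes from parsing `sudo crictl images --output json` and checking the repoTags lists against the expected image set. A minimal sketch of that comparison (the struct fields follow the JSON printed above; hasImage is a hypothetical helper, not minikube's actual code):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // imageList mirrors the JSON shape shown in the log above.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    // hasImage reports whether any listed image carries the given tag.
    func hasImage(list imageList, tag string) bool {
    	for _, img := range list.Images {
    		for _, t := range img.RepoTags {
    			if t == tag {
    				return true
    			}
    		}
    	}
    	return false
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    	if err != nil {
    		fmt.Println("crictl:", err)
    		return
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		fmt.Println("decode:", err)
    		return
    	}
    	fmt.Println("kube-apiserver preloaded:", hasImage(list, "registry.k8s.io/kube-apiserver:v1.31.1"))
    }
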
	I0916 11:00:03.699116  182212 kubeadm.go:934] updating node { 192.168.67.2 8443 v1.31.1 containerd true true} ...
	I0916 11:00:03.699217  182212 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-079070 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
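
The kubelet unit printed above is rendered per node from the cluster config (the hostname-override and node-ip flags vary by node). A sketch of that rendering step with text/template, assuming the unit text from the log; the kubeletUnit name and field set are illustrative only:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletUnit reproduces the drop-in printed in the log; only the
    // per-node fields are parameterized.
    const kubeletUnit = `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
    	_ = t.Execute(os.Stdout, struct {
    		KubernetesVersion, NodeName, NodeIP string
    	}{"v1.31.1", "multinode-079070", "192.168.67.2"})
    }
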
	I0916 11:00:03.699274  182212 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:00:03.731587  182212 command_runner.go:130] > {
	I0916 11:00:03.731615  182212 command_runner.go:130] >   "status": {
	I0916 11:00:03.731624  182212 command_runner.go:130] >     "conditions": [
	I0916 11:00:03.731630  182212 command_runner.go:130] >       {
	I0916 11:00:03.731638  182212 command_runner.go:130] >         "type": "RuntimeReady",
	I0916 11:00:03.731646  182212 command_runner.go:130] >         "status": true,
	I0916 11:00:03.731652  182212 command_runner.go:130] >         "reason": "",
	I0916 11:00:03.731658  182212 command_runner.go:130] >         "message": ""
	I0916 11:00:03.731664  182212 command_runner.go:130] >       },
	I0916 11:00:03.731674  182212 command_runner.go:130] >       {
	I0916 11:00:03.731685  182212 command_runner.go:130] >         "type": "NetworkReady",
	I0916 11:00:03.731692  182212 command_runner.go:130] >         "status": true,
	I0916 11:00:03.731701  182212 command_runner.go:130] >         "reason": "",
	I0916 11:00:03.731711  182212 command_runner.go:130] >         "message": ""
	I0916 11:00:03.731718  182212 command_runner.go:130] >       },
	I0916 11:00:03.731721  182212 command_runner.go:130] >       {
	I0916 11:00:03.731729  182212 command_runner.go:130] >         "type": "ContainerdHasNoDeprecationWarnings",
	I0916 11:00:03.731767  182212 command_runner.go:130] >         "status": true,
	I0916 11:00:03.731779  182212 command_runner.go:130] >         "reason": "",
	I0916 11:00:03.731786  182212 command_runner.go:130] >         "message": ""
	I0916 11:00:03.731805  182212 command_runner.go:130] >       }
	I0916 11:00:03.731815  182212 command_runner.go:130] >     ]
	I0916 11:00:03.731823  182212 command_runner.go:130] >   },
	I0916 11:00:03.731831  182212 command_runner.go:130] >   "cniconfig": {
	I0916 11:00:03.731840  182212 command_runner.go:130] >     "PluginDirs": [
	I0916 11:00:03.731849  182212 command_runner.go:130] >       "/opt/cni/bin"
	I0916 11:00:03.731856  182212 command_runner.go:130] >     ],
	I0916 11:00:03.731866  182212 command_runner.go:130] >     "PluginConfDir": "/etc/cni/net.d",
	I0916 11:00:03.731876  182212 command_runner.go:130] >     "PluginMaxConfNum": 1,
	I0916 11:00:03.731886  182212 command_runner.go:130] >     "Prefix": "eth",
	I0916 11:00:03.731892  182212 command_runner.go:130] >     "Networks": [
	I0916 11:00:03.731901  182212 command_runner.go:130] >       {
	I0916 11:00:03.731911  182212 command_runner.go:130] >         "Config": {
	I0916 11:00:03.731921  182212 command_runner.go:130] >           "Name": "cni-loopback",
	I0916 11:00:03.731931  182212 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0916 11:00:03.731941  182212 command_runner.go:130] >           "Plugins": [
	I0916 11:00:03.731949  182212 command_runner.go:130] >             {
	I0916 11:00:03.731955  182212 command_runner.go:130] >               "Network": {
	I0916 11:00:03.731959  182212 command_runner.go:130] >                 "type": "loopback",
	I0916 11:00:03.731969  182212 command_runner.go:130] >                 "ipam": {},
	I0916 11:00:03.731978  182212 command_runner.go:130] >                 "dns": {}
	I0916 11:00:03.731984  182212 command_runner.go:130] >               },
	I0916 11:00:03.731996  182212 command_runner.go:130] >               "Source": "{\"type\":\"loopback\"}"
	I0916 11:00:03.732004  182212 command_runner.go:130] >             }
	I0916 11:00:03.732013  182212 command_runner.go:130] >           ],
	I0916 11:00:03.732038  182212 command_runner.go:130] >           "Source": "{\n\"cniVersion\": \"0.3.1\",\n\"name\": \"cni-loopback\",\n\"plugins\": [{\n  \"type\": \"loopback\"\n}]\n}"
	I0916 11:00:03.732046  182212 command_runner.go:130] >         },
	I0916 11:00:03.732050  182212 command_runner.go:130] >         "IFName": "lo"
	I0916 11:00:03.732058  182212 command_runner.go:130] >       },
	I0916 11:00:03.732064  182212 command_runner.go:130] >       {
	I0916 11:00:03.732072  182212 command_runner.go:130] >         "Config": {
	I0916 11:00:03.732080  182212 command_runner.go:130] >           "Name": "kindnet",
	I0916 11:00:03.732090  182212 command_runner.go:130] >           "CNIVersion": "0.3.1",
	I0916 11:00:03.732101  182212 command_runner.go:130] >           "Plugins": [
	I0916 11:00:03.732110  182212 command_runner.go:130] >             {
	I0916 11:00:03.732120  182212 command_runner.go:130] >               "Network": {
	I0916 11:00:03.732130  182212 command_runner.go:130] >                 "type": "ptp",
	I0916 11:00:03.732138  182212 command_runner.go:130] >                 "ipam": {
	I0916 11:00:03.732145  182212 command_runner.go:130] >                   "type": "host-local"
	I0916 11:00:03.732151  182212 command_runner.go:130] >                 },
	I0916 11:00:03.732162  182212 command_runner.go:130] >                 "dns": {}
	I0916 11:00:03.732171  182212 command_runner.go:130] >               },
	I0916 11:00:03.732191  182212 command_runner.go:130] >               "Source": "{\"ipMasq\":false,\"ipam\":{\"dataDir\":\"/run/cni-ipam-state\",\"ranges\":[[{\"subnet\":\"10.244.0.0/24\"}]],\"routes\":[{\"dst\":\"0.0.0.0/0\"}],\"type\":\"host-local\"},\"mtu\":1500,\"type\":\"ptp\"}"
	I0916 11:00:03.732200  182212 command_runner.go:130] >             },
	I0916 11:00:03.732210  182212 command_runner.go:130] >             {
	I0916 11:00:03.732220  182212 command_runner.go:130] >               "Network": {
	I0916 11:00:03.732230  182212 command_runner.go:130] >                 "type": "portmap",
	I0916 11:00:03.732239  182212 command_runner.go:130] >                 "capabilities": {
	I0916 11:00:03.732246  182212 command_runner.go:130] >                   "portMappings": true
	I0916 11:00:03.732251  182212 command_runner.go:130] >                 },
	I0916 11:00:03.732261  182212 command_runner.go:130] >                 "ipam": {},
	I0916 11:00:03.732271  182212 command_runner.go:130] >                 "dns": {}
	I0916 11:00:03.732277  182212 command_runner.go:130] >               },
	I0916 11:00:03.732292  182212 command_runner.go:130] >               "Source": "{\"capabilities\":{\"portMappings\":true},\"type\":\"portmap\"}"
	I0916 11:00:03.732312  182212 command_runner.go:130] >             }
	I0916 11:00:03.732321  182212 command_runner.go:130] >           ],
	I0916 11:00:03.732376  182212 command_runner.go:130] >           "Source": "\n{\n\t\"cniVersion\": \"0.3.1\",\n\t\"name\": \"kindnet\",\n\t\"plugins\": [\n\t{\n\t\t\"type\": \"ptp\",\n\t\t\"ipMasq\": false,\n\t\t\"ipam\": {\n\t\t\t\"type\": \"host-local\",\n\t\t\t\"dataDir\": \"/run/cni-ipam-state\",\n\t\t\t\"routes\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t{ \"dst\": \"0.0.0.0/0\" }\n\t\t\t],\n\t\t\t\"ranges\": [\n\t\t\t\t\n\t\t\t\t\n\t\t\t\t[ { \"subnet\": \"10.244.0.0/24\" } ]\n\t\t\t]\n\t\t}\n\t\t,\n\t\t\"mtu\": 1500\n\t\t\n\t},\n\t{\n\t\t\"type\": \"portmap\",\n\t\t\"capabilities\": {\n\t\t\t\"portMappings\": true\n\t\t}\n\t}\n\t]\n}\n"
	I0916 11:00:03.732388  182212 command_runner.go:130] >         },
	I0916 11:00:03.732396  182212 command_runner.go:130] >         "IFName": "eth0"
	I0916 11:00:03.732401  182212 command_runner.go:130] >       }
	I0916 11:00:03.732410  182212 command_runner.go:130] >     ]
	I0916 11:00:03.732421  182212 command_runner.go:130] >   },
	I0916 11:00:03.732429  182212 command_runner.go:130] >   "config": {
	I0916 11:00:03.732438  182212 command_runner.go:130] >     "containerd": {
	I0916 11:00:03.732448  182212 command_runner.go:130] >       "snapshotter": "overlayfs",
	I0916 11:00:03.732459  182212 command_runner.go:130] >       "defaultRuntimeName": "runc",
	I0916 11:00:03.732471  182212 command_runner.go:130] >       "defaultRuntime": {
	I0916 11:00:03.732480  182212 command_runner.go:130] >         "runtimeType": "",
	I0916 11:00:03.732490  182212 command_runner.go:130] >         "runtimePath": "",
	I0916 11:00:03.732497  182212 command_runner.go:130] >         "runtimeEngine": "",
	I0916 11:00:03.732507  182212 command_runner.go:130] >         "PodAnnotations": null,
	I0916 11:00:03.732517  182212 command_runner.go:130] >         "ContainerAnnotations": null,
	I0916 11:00:03.732527  182212 command_runner.go:130] >         "runtimeRoot": "",
	I0916 11:00:03.732537  182212 command_runner.go:130] >         "options": null,
	I0916 11:00:03.732548  182212 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0916 11:00:03.732559  182212 command_runner.go:130] >         "privileged_without_host_devices_all_devices_allowed": false,
	I0916 11:00:03.732570  182212 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0916 11:00:03.732579  182212 command_runner.go:130] >         "cniConfDir": "",
	I0916 11:00:03.732590  182212 command_runner.go:130] >         "cniMaxConfNum": 0,
	I0916 11:00:03.732597  182212 command_runner.go:130] >         "snapshotter": "",
	I0916 11:00:03.732607  182212 command_runner.go:130] >         "sandboxMode": ""
	I0916 11:00:03.732615  182212 command_runner.go:130] >       },
	I0916 11:00:03.732624  182212 command_runner.go:130] >       "untrustedWorkloadRuntime": {
	I0916 11:00:03.732633  182212 command_runner.go:130] >         "runtimeType": "",
	I0916 11:00:03.732643  182212 command_runner.go:130] >         "runtimePath": "",
	I0916 11:00:03.732656  182212 command_runner.go:130] >         "runtimeEngine": "",
	I0916 11:00:03.732663  182212 command_runner.go:130] >         "PodAnnotations": null,
	I0916 11:00:03.732669  182212 command_runner.go:130] >         "ContainerAnnotations": null,
	I0916 11:00:03.732678  182212 command_runner.go:130] >         "runtimeRoot": "",
	I0916 11:00:03.732688  182212 command_runner.go:130] >         "options": null,
	I0916 11:00:03.732697  182212 command_runner.go:130] >         "privileged_without_host_devices": false,
	I0916 11:00:03.732709  182212 command_runner.go:130] >         "privileged_without_host_devices_all_devices_allowed": false,
	I0916 11:00:03.732719  182212 command_runner.go:130] >         "baseRuntimeSpec": "",
	I0916 11:00:03.732728  182212 command_runner.go:130] >         "cniConfDir": "",
	I0916 11:00:03.732737  182212 command_runner.go:130] >         "cniMaxConfNum": 0,
	I0916 11:00:03.732747  182212 command_runner.go:130] >         "snapshotter": "",
	I0916 11:00:03.732755  182212 command_runner.go:130] >         "sandboxMode": ""
	I0916 11:00:03.732761  182212 command_runner.go:130] >       },
	I0916 11:00:03.732766  182212 command_runner.go:130] >       "runtimes": {
	I0916 11:00:03.732782  182212 command_runner.go:130] >         "runc": {
	I0916 11:00:03.732793  182212 command_runner.go:130] >           "runtimeType": "io.containerd.runc.v2",
	I0916 11:00:03.732803  182212 command_runner.go:130] >           "runtimePath": "",
	I0916 11:00:03.732812  182212 command_runner.go:130] >           "runtimeEngine": "",
	I0916 11:00:03.732822  182212 command_runner.go:130] >           "PodAnnotations": null,
	I0916 11:00:03.732832  182212 command_runner.go:130] >           "ContainerAnnotations": null,
	I0916 11:00:03.732841  182212 command_runner.go:130] >           "runtimeRoot": "",
	I0916 11:00:03.732850  182212 command_runner.go:130] >           "options": {
	I0916 11:00:03.732859  182212 command_runner.go:130] >             "SystemdCgroup": false
	I0916 11:00:03.732864  182212 command_runner.go:130] >           },
	I0916 11:00:03.732906  182212 command_runner.go:130] >           "privileged_without_host_devices": false,
	I0916 11:00:03.732919  182212 command_runner.go:130] >           "privileged_without_host_devices_all_devices_allowed": false,
	I0916 11:00:03.732929  182212 command_runner.go:130] >           "baseRuntimeSpec": "",
	I0916 11:00:03.732939  182212 command_runner.go:130] >           "cniConfDir": "",
	I0916 11:00:03.732949  182212 command_runner.go:130] >           "cniMaxConfNum": 0,
	I0916 11:00:03.732959  182212 command_runner.go:130] >           "snapshotter": "",
	I0916 11:00:03.732969  182212 command_runner.go:130] >           "sandboxMode": "podsandbox"
	I0916 11:00:03.732978  182212 command_runner.go:130] >         }
	I0916 11:00:03.732985  182212 command_runner.go:130] >       },
	I0916 11:00:03.732990  182212 command_runner.go:130] >       "noPivot": false,
	I0916 11:00:03.732998  182212 command_runner.go:130] >       "disableSnapshotAnnotations": true,
	I0916 11:00:03.733009  182212 command_runner.go:130] >       "discardUnpackedLayers": true,
	I0916 11:00:03.733019  182212 command_runner.go:130] >       "ignoreBlockIONotEnabledErrors": false,
	I0916 11:00:03.733030  182212 command_runner.go:130] >       "ignoreRdtNotEnabledErrors": false
	I0916 11:00:03.733038  182212 command_runner.go:130] >     },
	I0916 11:00:03.733047  182212 command_runner.go:130] >     "cni": {
	I0916 11:00:03.733056  182212 command_runner.go:130] >       "binDir": "/opt/cni/bin",
	I0916 11:00:03.733066  182212 command_runner.go:130] >       "confDir": "/etc/cni/net.d",
	I0916 11:00:03.733073  182212 command_runner.go:130] >       "maxConfNum": 1,
	I0916 11:00:03.733077  182212 command_runner.go:130] >       "setupSerially": false,
	I0916 11:00:03.733087  182212 command_runner.go:130] >       "confTemplate": "",
	I0916 11:00:03.733096  182212 command_runner.go:130] >       "ipPref": ""
	I0916 11:00:03.733102  182212 command_runner.go:130] >     },
	I0916 11:00:03.733117  182212 command_runner.go:130] >     "registry": {
	I0916 11:00:03.733128  182212 command_runner.go:130] >       "configPath": "/etc/containerd/certs.d",
	I0916 11:00:03.733137  182212 command_runner.go:130] >       "mirrors": null,
	I0916 11:00:03.733147  182212 command_runner.go:130] >       "configs": null,
	I0916 11:00:03.733156  182212 command_runner.go:130] >       "auths": null,
	I0916 11:00:03.733164  182212 command_runner.go:130] >       "headers": null
	I0916 11:00:03.733170  182212 command_runner.go:130] >     },
	I0916 11:00:03.733175  182212 command_runner.go:130] >     "imageDecryption": {
	I0916 11:00:03.733185  182212 command_runner.go:130] >       "keyModel": "node"
	I0916 11:00:03.733194  182212 command_runner.go:130] >     },
	I0916 11:00:03.733201  182212 command_runner.go:130] >     "disableTCPService": true,
	I0916 11:00:03.733211  182212 command_runner.go:130] >     "streamServerAddress": "",
	I0916 11:00:03.733222  182212 command_runner.go:130] >     "streamServerPort": "10010",
	I0916 11:00:03.733231  182212 command_runner.go:130] >     "streamIdleTimeout": "4h0m0s",
	I0916 11:00:03.733240  182212 command_runner.go:130] >     "enableSelinux": false,
	I0916 11:00:03.733250  182212 command_runner.go:130] >     "selinuxCategoryRange": 1024,
	I0916 11:00:03.733260  182212 command_runner.go:130] >     "sandboxImage": "registry.k8s.io/pause:3.10",
	I0916 11:00:03.733267  182212 command_runner.go:130] >     "statsCollectPeriod": 10,
	I0916 11:00:03.733272  182212 command_runner.go:130] >     "systemdCgroup": false,
	I0916 11:00:03.733282  182212 command_runner.go:130] >     "enableTLSStreaming": false,
	I0916 11:00:03.733294  182212 command_runner.go:130] >     "x509KeyPairStreaming": {
	I0916 11:00:03.733304  182212 command_runner.go:130] >       "tlsCertFile": "",
	I0916 11:00:03.733314  182212 command_runner.go:130] >       "tlsKeyFile": ""
	I0916 11:00:03.733322  182212 command_runner.go:130] >     },
	I0916 11:00:03.733331  182212 command_runner.go:130] >     "maxContainerLogSize": 16384,
	I0916 11:00:03.733341  182212 command_runner.go:130] >     "disableCgroup": false,
	I0916 11:00:03.733350  182212 command_runner.go:130] >     "disableApparmor": false,
	I0916 11:00:03.733359  182212 command_runner.go:130] >     "restrictOOMScoreAdj": false,
	I0916 11:00:03.733366  182212 command_runner.go:130] >     "maxConcurrentDownloads": 3,
	I0916 11:00:03.733372  182212 command_runner.go:130] >     "disableProcMount": false,
	I0916 11:00:03.733382  182212 command_runner.go:130] >     "unsetSeccompProfile": "",
	I0916 11:00:03.733392  182212 command_runner.go:130] >     "tolerateMissingHugetlbController": true,
	I0916 11:00:03.733403  182212 command_runner.go:130] >     "disableHugetlbController": true,
	I0916 11:00:03.733421  182212 command_runner.go:130] >     "device_ownership_from_security_context": false,
	I0916 11:00:03.733431  182212 command_runner.go:130] >     "ignoreImageDefinedVolumes": false,
	I0916 11:00:03.733442  182212 command_runner.go:130] >     "netnsMountsUnderStateDir": false,
	I0916 11:00:03.733452  182212 command_runner.go:130] >     "enableUnprivilegedPorts": true,
	I0916 11:00:03.733459  182212 command_runner.go:130] >     "enableUnprivilegedICMP": false,
	I0916 11:00:03.733464  182212 command_runner.go:130] >     "enableCDI": false,
	I0916 11:00:03.733472  182212 command_runner.go:130] >     "cdiSpecDirs": [
	I0916 11:00:03.733482  182212 command_runner.go:130] >       "/etc/cdi",
	I0916 11:00:03.733491  182212 command_runner.go:130] >       "/var/run/cdi"
	I0916 11:00:03.733500  182212 command_runner.go:130] >     ],
	I0916 11:00:03.733510  182212 command_runner.go:130] >     "imagePullProgressTimeout": "5m0s",
	I0916 11:00:03.733521  182212 command_runner.go:130] >     "drainExecSyncIOTimeout": "0s",
	I0916 11:00:03.733534  182212 command_runner.go:130] >     "imagePullWithSyncFs": false,
	I0916 11:00:03.733542  182212 command_runner.go:130] >     "ignoreDeprecationWarnings": null,
	I0916 11:00:03.733550  182212 command_runner.go:130] >     "containerdRootDir": "/var/lib/containerd",
	I0916 11:00:03.733558  182212 command_runner.go:130] >     "containerdEndpoint": "/run/containerd/containerd.sock",
	I0916 11:00:03.733574  182212 command_runner.go:130] >     "rootDir": "/var/lib/containerd/io.containerd.grpc.v1.cri",
	I0916 11:00:03.733605  182212 command_runner.go:130] >     "stateDir": "/run/containerd/io.containerd.grpc.v1.cri"
	I0916 11:00:03.733613  182212 command_runner.go:130] >   },
	I0916 11:00:03.733620  182212 command_runner.go:130] >   "golang": "go1.22.7",
	I0916 11:00:03.733629  182212 command_runner.go:130] >   "lastCNILoadStatus": "OK",
	I0916 11:00:03.733637  182212 command_runner.go:130] >   "lastCNILoadStatus.default": "OK"
	I0916 11:00:03.733640  182212 command_runner.go:130] > }
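The `crictl info` JSON above is what gates the runtime check: the "status.conditions" array must report RuntimeReady and NetworkReady as true. A small self-contained Go sketch of parsing just those fields (the struct mirrors only the keys used here; it is not minikube's cri.go type):

// runtime_ready.go: hedged sketch of checking "crictl info" output
// for RuntimeReady/NetworkReady, as logged above.
package main

import (
	"encoding/json"
	"fmt"
)

type criInfo struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status bool   `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func main() {
	raw := []byte(`{"status":{"conditions":[{"type":"RuntimeReady","status":true},{"type":"NetworkReady","status":true}]}}`)
	var info criInfo
	if err := json.Unmarshal(raw, &info); err != nil {
		panic(err)
	}
	for _, c := range info.Status.Conditions {
		fmt.Printf("%s=%v\n", c.Type, c.Status)
	}
}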
	I0916 11:00:03.734059  182212 cni.go:84] Creating CNI manager for ""
	I0916 11:00:03.734073  182212 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0916 11:00:03.734084  182212 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:00:03.734113  182212 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-079070 NodeName:multinode-079070 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:00:03.734245  182212 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "multinode-079070"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
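The generated config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by "---"). A hedged sketch of how such a stream could be split and spot-checked with gopkg.in/yaml.v3; using that library here is an assumption for illustration, minikube renders the file from templates rather than round-tripping it.

// kubeadm_config_peek.go: hedged sketch of splitting the multi-document
// kubeadm config and reading each document's kind.
package main

import (
	"fmt"
	"strings"

	"gopkg.in/yaml.v3"
)

const cfg = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
failSwapOn: false
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
`

func main() {
	dec := yaml.NewDecoder(strings.NewReader(cfg))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once all documents are consumed
		}
		fmt.Println("kind:", doc["kind"])
	}
}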
	
	I0916 11:00:03.734310  182212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:00:03.741834  182212 command_runner.go:130] > kubeadm
	I0916 11:00:03.741856  182212 command_runner.go:130] > kubectl
	I0916 11:00:03.741861  182212 command_runner.go:130] > kubelet
	I0916 11:00:03.742564  182212 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:00:03.742640  182212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:00:03.750667  182212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0916 11:00:03.767192  182212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:00:03.783671  182212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2170 bytes)
	I0916 11:00:03.799923  182212 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:00:03.803095  182212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
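The bash pipeline above pins control-plane.minikube.internal idempotently: grep -v drops any existing mapping for the name, echo appends the desired IP, and the result is copied back over /etc/hosts via a temp file. A pure-Go sketch of the same rewrite (file handling and privileges omitted; paths illustrative):

// hosts_pin.go: hedged in-memory equivalent of the /etc/hosts rewrite
// logged above: drop stale lines for the name, append the pinned IP.
package main

import (
	"fmt"
	"strings"
)

func pinHost(hosts, name, ip string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale mapping, replaced below
		}
		kept = append(kept, line)
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("%s\t%s\n", ip, name)
}

func main() {
	in := "127.0.0.1\tlocalhost\n10.0.0.9\tcontrol-plane.minikube.internal\n"
	fmt.Print(pinHost(in, "control-plane.minikube.internal", "192.168.67.2"))
}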
	I0916 11:00:03.813576  182212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:00:03.883863  182212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:00:03.896718  182212 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070 for IP: 192.168.67.2
	I0916 11:00:03.896738  182212 certs.go:194] generating shared ca certs ...
	I0916 11:00:03.896753  182212 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:00:03.896877  182212 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:00:03.896915  182212 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:00:03.896925  182212 certs.go:256] generating profile certs ...
	I0916 11:00:03.896992  182212 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key
	I0916 11:00:03.897045  182212 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key.5aac267e
	I0916 11:00:03.897086  182212 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key
	I0916 11:00:03.897097  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 11:00:03.897110  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 11:00:03.897123  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 11:00:03.897136  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 11:00:03.897149  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0916 11:00:03.897166  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0916 11:00:03.897178  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0916 11:00:03.897190  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0916 11:00:03.897239  182212 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:00:03.897268  182212 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:00:03.897277  182212 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:00:03.897300  182212 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:00:03.897335  182212 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:00:03.897355  182212 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:00:03.897391  182212 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:00:03.897417  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 11:00:03.897429  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 11:00:03.897445  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:00:03.898000  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:00:03.920777  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:00:03.943842  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:00:03.971881  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:00:04.050771  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:00:04.124437  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:00:04.150472  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:00:04.175840  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:00:04.223010  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:00:04.244654  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:00:04.266535  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:00:04.289069  182212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:00:04.305151  182212 ssh_runner.go:195] Run: openssl version
	I0916 11:00:04.309890  182212 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 11:00:04.310058  182212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:00:04.318595  182212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:00:04.321658  182212 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:00:04.321740  182212 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:00:04.321787  182212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:00:04.327772  182212 command_runner.go:130] > 51391683
	I0916 11:00:04.327983  182212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 11:00:04.336100  182212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:00:04.344754  182212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:00:04.348105  182212 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:00:04.348133  182212 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:00:04.348179  182212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:00:04.354069  182212 command_runner.go:130] > 3ec20f2e
	I0916 11:00:04.354330  182212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:00:04.362409  182212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:00:04.371664  182212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:00:04.375117  182212 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:00:04.375179  182212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:00:04.375259  182212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:00:04.381440  182212 command_runner.go:130] > b5213941
	I0916 11:00:04.381692  182212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
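The three openssl/ln sequences above implement OpenSSL's hashed-directory convention: `openssl x509 -hash -noout` prints the subject-name hash (51391683, 3ec20f2e, b5213941 above), and a symlink named <hash>.0 in /etc/ssl/certs lets OpenSSL find the CA by hash. A hedged Go sketch of one such step (simplified: the log additionally guards with `test -L` and uses `ln -fs` under sudo):

// cert_hash_link.go: hedged sketch of the subject-hash symlink step.
// Must run with permission to write /etc/ssl/certs in practice.
package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	return os.Symlink(pemPath, filepath.Join(certsDir, hash+".0"))
}

func main() {
	_ = linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
}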
	I0916 11:00:04.391000  182212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:00:04.394520  182212 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:00:04.394548  182212 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0916 11:00:04.394555  182212 command_runner.go:130] > Device: 801h/2049d	Inode: 809447      Links: 1
	I0916 11:00:04.394561  182212 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:00:04.394570  182212 command_runner.go:130] > Access: 2024-09-16 10:58:28.025535417 +0000
	I0916 11:00:04.394575  182212 command_runner.go:130] > Modify: 2024-09-16 10:56:17.230007830 +0000
	I0916 11:00:04.394580  182212 command_runner.go:130] > Change: 2024-09-16 10:56:17.230007830 +0000
	I0916 11:00:04.394585  182212 command_runner.go:130] >  Birth: 2024-09-16 10:56:17.230007830 +0000
	I0916 11:00:04.394644  182212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:00:04.400788  182212 command_runner.go:130] > Certificate will not expire
	I0916 11:00:04.400947  182212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:00:04.407352  182212 command_runner.go:130] > Certificate will not expire
	I0916 11:00:04.407441  182212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:00:04.413837  182212 command_runner.go:130] > Certificate will not expire
	I0916 11:00:04.414131  182212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:00:04.420445  182212 command_runner.go:130] > Certificate will not expire
	I0916 11:00:04.420685  182212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:00:04.426783  182212 command_runner.go:130] > Certificate will not expire
	I0916 11:00:04.426862  182212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 11:00:04.434260  182212 command_runner.go:130] > Certificate will not expire
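Each `-checkend 86400` run above asks one question: will this certificate expire within the next 86400 seconds (24 hours)? A pure-Go equivalent using the standard library, as a hedged sketch (the real checks shell out to openssl, as logged):

// cert_checkend.go: hedged pure-Go equivalent of
// "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, fmt.Errorf("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	soon, err := expiresWithin(data, 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}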
	I0916 11:00:04.434643  182212 kubeadm.go:392] StartCluster: {Name:multinode-079070 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true} {Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:00:04.434789  182212 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:00:04.434868  182212 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:00:04.528849  182212 command_runner.go:130] > b73ca772183b59baa5be8014c698ba6eb41bdef3f84e5083cb7a0313a0fb938d
	I0916 11:00:04.528875  182212 command_runner.go:130] > e7dd060f7494bc9b42225cbca571b99a4eff363411d2e3c5d94b7fe635b2c5fc
	I0916 11:00:04.528885  182212 command_runner.go:130] > f11253e8ef61a01c3740b24f1b74855531922a5c71ae0705b35472b9baa28a46
	I0916 11:00:04.528895  182212 command_runner.go:130] > 9f936546ae13163e90e47cc8dcec45a4a44eb6f873708c6deb509ebe216c4213
	I0916 11:00:04.528904  182212 command_runner.go:130] > ca0cc800d9c7855a343484f3d2f0ffc35459a84c699a3c4d1a4f9fc511b1b850
	I0916 11:00:04.528912  182212 command_runner.go:130] > 50645a9df44a5be5ef6705e3c8cc321dc230a8a742eff68356246f7fd9869b85
	I0916 11:00:04.528921  182212 command_runner.go:130] > f8c9dd99b83dacf4270ec16fb010b101dbdc6c7542deaf690a717fb265515d4a
	I0916 11:00:04.528940  182212 command_runner.go:130] > 224f3c76893fd9b065b89216b3facf9e0652faec36d68b791b48068b9f5cef50
	I0916 11:00:04.532628  182212 cri.go:89] found id: "b73ca772183b59baa5be8014c698ba6eb41bdef3f84e5083cb7a0313a0fb938d"
	I0916 11:00:04.532657  182212 cri.go:89] found id: "e7dd060f7494bc9b42225cbca571b99a4eff363411d2e3c5d94b7fe635b2c5fc"
	I0916 11:00:04.532662  182212 cri.go:89] found id: "f11253e8ef61a01c3740b24f1b74855531922a5c71ae0705b35472b9baa28a46"
	I0916 11:00:04.532670  182212 cri.go:89] found id: "9f936546ae13163e90e47cc8dcec45a4a44eb6f873708c6deb509ebe216c4213"
	I0916 11:00:04.532673  182212 cri.go:89] found id: "ca0cc800d9c7855a343484f3d2f0ffc35459a84c699a3c4d1a4f9fc511b1b850"
	I0916 11:00:04.532678  182212 cri.go:89] found id: "50645a9df44a5be5ef6705e3c8cc321dc230a8a742eff68356246f7fd9869b85"
	I0916 11:00:04.532682  182212 cri.go:89] found id: "f8c9dd99b83dacf4270ec16fb010b101dbdc6c7542deaf690a717fb265515d4a"
	I0916 11:00:04.532685  182212 cri.go:89] found id: "224f3c76893fd9b065b89216b3facf9e0652faec36d68b791b48068b9f5cef50"
	I0916 11:00:04.532688  182212 cri.go:89] found id: ""
	I0916 11:00:04.532740  182212 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0916 11:00:04.546352  182212 command_runner.go:130] ! load container 4dc8c8f7df4437146c414746a3964facccbe3cd29cc3fa476e3d62fdcf5eec06: container does not exist
	I0916 11:00:04.552124  182212 command_runner.go:130] > [{"ociVersion":"1.0.2-dev","id":"6cccdd8798cbed01b856e7276893c72bc7d342d6660d9624a1c9b076683113cd","pid":1047,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6cccdd8798cbed01b856e7276893c72bc7d342d6660d9624a1c9b076683113cd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6cccdd8798cbed01b856e7276893c72bc7d342d6660d9624a1c9b076683113cd/rootfs","created":"2024-09-16T11:00:04.530198095Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"6cccdd8798cbed01b856e7276893c72bc7d342d6660d9624a1c9b076683113cd","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-multinode-079070_4cd98cb286c25ab6542db09649b1ab0f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-multinode-079070","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4cd98cb286c25ab6542db09649b1ab0f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a1e5b577dcd1503dde86537c8b75bdf65f41ce16f524b81619c11c7639914d27","pid":1079,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1e5b577dcd1503dde86537c8b75bdf65f41ce16f524b81619c11c7639914d27","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1e5b577dcd1503dde86537c8b75bdf65f41ce16f524b81619c11c7639914d27/rootfs","created":"2024-09-16T11:00:04.541406243Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a1e5b577dcd1503dde86537c8b75bdf65f41ce16f524b81619c11c7639914d27","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-multinode-079070_b4d60d557a4cfb2c6d1e1c4e2473b237","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-multinode-079070","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b4d60d557a4cfb2c6d1e1c4e2473b237"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f8757a65895f3660e0b5503d31eda4d51741d310d2554b9b903648d119533954","pid":1080,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8757a65895f3660e0b5503d31eda4d51741d310d2554b9b903648d119533954","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8757a65895f3660e0b5503d31eda4d51741d310d2554b9b903648d119533954/rootfs","created":"2024-09-16T11:00:04.54172667Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"f8757a65895f3660e0b5503d31eda4d51741d310d2554b9b903648d119533954","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-multinode-079070_93bd5dba25d1e51504f9fc3f55fd27c8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-multinode-079070","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"93bd5dba25d1e51504f9fc3f55fd27c8"},"owner":"root"}]
	I0916 11:00:04.552165  182212 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"6cccdd8798cbed01b856e7276893c72bc7d342d6660d9624a1c9b076683113cd","pid":1047,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6cccdd8798cbed01b856e7276893c72bc7d342d6660d9624a1c9b076683113cd","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6cccdd8798cbed01b856e7276893c72bc7d342d6660d9624a1c9b076683113cd/rootfs","created":"2024-09-16T11:00:04.530198095Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"6cccdd8798cbed01b856e7276893c72bc7d342d6660d9624a1c9b076683113cd","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-multinode-079070_4cd98cb286c25ab6542db09649b1ab0f","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-multinode-079070","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"4cd98cb286c25ab6542db09649b1ab0f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a1e5b577dcd1503dde86537c8b75bdf65f41ce16f524b81619c11c7639914d27","pid":1079,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1e5b577dcd1503dde86537c8b75bdf65f41ce16f524b81619c11c7639914d27","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a1e5b577dcd1503dde86537c8b75bdf65f41ce16f524b81619c11c7639914d27/rootfs","created":"2024-09-16T11:00:04.541406243Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"a1e5b577dcd1503dde86537c8b75bdf65f41ce16f524b81619c11c7639914d27","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-multinode-079070_b4d60d557a4cfb2c6d1e1c4e2473b237","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-multinode-079070","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"b4d60d557a4cfb2c6d1e1c4e2473b237"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f8757a65895f3660e0b5503d31eda4d51741d310d2554b9b903648d119533954","pid":1080,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8757a65895f3660e0b5503d31eda4d51741d310d2554b9b903648d119533954","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8757a65895f3660e0b5503d31eda4d51741d310d2554b9b903648d119533954/rootfs","created":"2024-09-16T11:00:04.54172667Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"f8757a65895f3660e0b5503d31eda4d51741d310d2554b9b903648d119533954","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-multinode-079070_93bd5dba25d1e51504f9fc3f55fd27c8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-multinode-079070","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"93bd5dba25d1e51504f9fc3f55fd27c8"},"owner":"root"}]
	I0916 11:00:04.552279  182212 cri.go:126] list returned 3 containers
	I0916 11:00:04.552292  182212 cri.go:129] container: {ID:6cccdd8798cbed01b856e7276893c72bc7d342d6660d9624a1c9b076683113cd Status:created}
	I0916 11:00:04.552347  182212 cri.go:131] skipping 6cccdd8798cbed01b856e7276893c72bc7d342d6660d9624a1c9b076683113cd - not in ps
	I0916 11:00:04.552358  182212 cri.go:129] container: {ID:a1e5b577dcd1503dde86537c8b75bdf65f41ce16f524b81619c11c7639914d27 Status:created}
	I0916 11:00:04.552366  182212 cri.go:131] skipping a1e5b577dcd1503dde86537c8b75bdf65f41ce16f524b81619c11c7639914d27 - not in ps
	I0916 11:00:04.552375  182212 cri.go:129] container: {ID:f8757a65895f3660e0b5503d31eda4d51741d310d2554b9b903648d119533954 Status:created}
	I0916 11:00:04.552384  182212 cri.go:131] skipping f8757a65895f3660e0b5503d31eda4d51741d310d2554b9b903648d119533954 - not in ps
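The cri.go lines above reconcile two views of the node: IDs reported by `crictl ps` (the CRI layer) versus containers in `runc list -f json` (the OCI layer). Entries present in runc's state but absent from the crictl listing are skipped as "not in ps". A hedged Go sketch of that filter, with inline example data in place of the real command output:

// runc_list_filter.go: hedged sketch of cross-checking "runc list"
// output against the set of IDs returned by "crictl ps".
package main

import (
	"encoding/json"
	"fmt"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	raw := []byte(`[{"id":"6cccdd8798cb","status":"created"},{"id":"b73ca772183b","status":"running"}]`)
	inPS := map[string]bool{"b73ca772183b": true} // IDs from crictl ps

	var containers []runcContainer
	if err := json.Unmarshal(raw, &containers); err != nil {
		panic(err)
	}
	for _, c := range containers {
		if !inPS[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		fmt.Printf("container: {ID:%s Status:%s}\n", c.ID, c.Status)
	}
}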
	I0916 11:00:04.552438  182212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:00:04.630416  182212 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0916 11:00:04.630444  182212 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0916 11:00:04.630453  182212 command_runner.go:130] > /var/lib/minikube/etcd:
	I0916 11:00:04.630458  182212 command_runner.go:130] > member
	I0916 11:00:04.630479  182212 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 11:00:04.630486  182212 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 11:00:04.630548  182212 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 11:00:04.643365  182212 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:00:04.644088  182212 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-079070" does not appear in /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:00:04.644334  182212 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3687/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-079070" cluster setting kubeconfig missing "multinode-079070" context setting]
	I0916 11:00:04.644809  182212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:00:04.645357  182212 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:00:04.645688  182212 kapi.go:59] client config for multinode-079070: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 11:00:04.646193  182212 cert_rotation.go:140] Starting client certificate rotation controller
	I0916 11:00:04.646371  182212 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 11:00:04.656787  182212 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.67.2
	I0916 11:00:04.656820  182212 kubeadm.go:597] duration metric: took 26.328291ms to restartPrimaryControlPlane
	I0916 11:00:04.656829  182212 kubeadm.go:394] duration metric: took 222.194378ms to StartCluster
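The conclusion at kubeadm.go:630 above rests on a simple exit-code gate: `diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new` exited 0, so the rendered config is unchanged and the control plane needs no reconfiguration. A hedged Go sketch of that gate (diff's convention: 0 = identical, 1 = different, 2 = trouble):

// needs_reconfig.go: hedged sketch of the "diff -u old new" check
// used to decide whether the control plane must be reconfigured.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func needsReconfig(current, proposed string) (bool, error) {
	err := exec.Command("diff", "-u", current, proposed).Run()
	if err == nil {
		return false, nil // identical files
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, nil // files differ
	}
	return false, err // diff itself failed (e.g. missing file)
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(changed, err)
}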
	I0916 11:00:04.656844  182212 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:00:04.656901  182212 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:00:04.657466  182212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:00:04.657644  182212 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:00:04.657710  182212 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:00:04.657882  182212 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:00:04.661118  182212 out.go:177] * Verifying Kubernetes components...
	I0916 11:00:04.661118  182212 out.go:177] * Enabled addons: 
	I0916 11:00:04.662645  182212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:00:04.662707  182212 addons.go:510] duration metric: took 4.995271ms for enable addons: enabled=[]
	I0916 11:00:04.944436  182212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:00:05.026329  182212 node_ready.go:35] waiting up to 6m0s for node "multinode-079070" to be "Ready" ...
	I0916 11:00:05.026510  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:05.026522  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:05.026548  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:05.026559  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:05.026834  182212 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0916 11:00:05.026866  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:05.527535  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:05.527575  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:05.527588  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:05.527595  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:07.924715  182212 round_trippers.go:574] Response Status: 200 OK in 2397 milliseconds
	I0916 11:00:07.924745  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:07.924753  182212 round_trippers.go:580]     Audit-Id: 9c0e3a02-e3c0-47fb-a00e-9e9f2943a4c4
	I0916 11:00:07.924759  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:07.924763  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:07.924781  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 11:00:07.924785  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 11:00:07.924789  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:07 GMT
	I0916 11:00:07.925053  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:07.926192  182212 node_ready.go:49] node "multinode-079070" has status "Ready":"True"
	I0916 11:00:07.926216  182212 node_ready.go:38] duration metric: took 2.899847638s for node "multinode-079070" to be "Ready" ...
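The round_trippers traffic above is a readiness poll: the first GET returned an empty status (the apiserver was still coming up), the loop retried roughly every 500ms, and the 200 OK response carried a Node object whose Ready condition is "True". A hedged Go sketch of the same check using only the standard library; TLS and client-certificate setup (visible in the kapi.go config earlier) are deliberately omitted, so this is a shape illustration rather than working auth.

// node_ready.go: hedged sketch of polling a node's Ready condition
// via the API endpoint shown in the log.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type node struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func nodeReady(client *http.Client, url string) (bool, error) {
	resp, err := client.Get(url) // real code attaches client certs and retries
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var n node
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	ok, err := nodeReady(http.DefaultClient, "https://192.168.67.2:8443/api/v1/nodes/multinode-079070")
	fmt.Println(ok, err)
}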
	I0916 11:00:07.926228  182212 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:00:07.926295  182212 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 11:00:07.926312  182212 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 11:00:07.926389  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 11:00:07.926400  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:07.926424  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:07.926434  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:07.935459  182212 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0916 11:00:07.935622  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:07.935640  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:07 GMT
	I0916 11:00:07.935646  182212 round_trippers.go:580]     Audit-Id: f2d52bbc-7b6f-457d-85f4-e39bc225e77c
	I0916 11:00:07.935651  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:07.935656  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:07.935661  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 11:00:07.935666  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 11:00:07.936625  182212 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"905"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"816","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 90726 chars]
	I0916 11:00:07.941947  182212 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:07.942075  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:07.942090  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:07.942112  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:07.942118  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:07.944170  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:07.944189  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:07.944195  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:07.944200  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 11:00:07.944203  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 11:00:07.944206  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:07 GMT
	I0916 11:00:07.944209  182212 round_trippers.go:580]     Audit-Id: fc003014-28f5-4215-8fc7-acd36ab97f83
	I0916 11:00:07.944212  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:07.944411  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"816","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6693 chars]
	I0916 11:00:07.944976  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:07.944996  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:07.945003  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:07.945007  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:07.946961  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:07.946982  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:07.946991  182212 round_trippers.go:580]     Audit-Id: b5815315-d30f-447f-8432-428d29a41956
	I0916 11:00:07.946997  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:07.947001  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:07.947007  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 11:00:07.947010  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 11:00:07.947015  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:07 GMT
	I0916 11:00:07.947199  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:07.947497  182212 pod_ready.go:93] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"True"
	I0916 11:00:07.947518  182212 pod_ready.go:82] duration metric: took 5.533961ms for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:07.947535  182212 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:07.947605  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-079070
	I0916 11:00:07.947619  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:07.947629  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:07.947634  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:07.949351  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:07.949372  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:07.949380  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 11:00:07.949388  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:07 GMT
	I0916 11:00:07.949393  182212 round_trippers.go:580]     Audit-Id: 2a2155b7-18bc-4564-90ed-19d305693515
	I0916 11:00:07.949397  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:07.949401  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:07.949406  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 11:00:07.949542  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-079070","namespace":"kube-system","uid":"fdcbee48-0d3b-47d7-acd2-298979345c7a","resourceVersion":"749","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.mirror":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.seen":"2024-09-16T10:56:25.844758522Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6653 chars]
	I0916 11:00:07.949915  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:07.949928  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:07.949938  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:07.949943  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:07.951488  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:07.951507  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:07.951516  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:07 GMT
	I0916 11:00:07.951521  182212 round_trippers.go:580]     Audit-Id: 7d32383a-f979-47d2-8325-17e5df476dd8
	I0916 11:00:07.951524  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:07.951535  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:07.951540  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 11:00:07.951544  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 11:00:07.951690  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:07.952027  182212 pod_ready.go:93] pod "etcd-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 11:00:07.952047  182212 pod_ready.go:82] duration metric: took 4.504483ms for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:07.952065  182212 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:07.952120  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 11:00:07.952127  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:07.952133  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:07.952136  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:07.953914  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:07.953931  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:07.953954  182212 round_trippers.go:580]     Audit-Id: 8a416dd6-031e-4319-a5df-d765a1d45f5c
	I0916 11:00:07.953959  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:07.953966  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:07.953971  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 11:00:07.953977  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 11:00:07.953984  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:07 GMT
	I0916 11:00:07.954131  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"747","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8731 chars]
	I0916 11:00:07.954564  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:07.954579  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:07.954588  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:07.954592  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:08.020610  182212 round_trippers.go:574] Response Status: 200 OK in 66 milliseconds
	I0916 11:00:08.020631  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:08.020638  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:08 GMT
	I0916 11:00:08.020642  182212 round_trippers.go:580]     Audit-Id: 6fae2373-bc6c-41e9-be94-ef0651532ce7
	I0916 11:00:08.020648  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:08.020652  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:08.020657  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 11:00:08.020661  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 11:00:08.020780  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:08.021168  182212 pod_ready.go:93] pod "kube-apiserver-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 11:00:08.021194  182212 pod_ready.go:82] duration metric: took 69.118034ms for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:08.021208  182212 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:08.021293  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-079070
	I0916 11:00:08.021307  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:08.021316  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:08.021322  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:08.023594  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:08.023620  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:08.023629  182212 round_trippers.go:580]     Audit-Id: 9db8b4b0-7e40-47b3-b3af-0847ca7ade50
	I0916 11:00:08.023636  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:08.023641  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:08.023646  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0916 11:00:08.023663  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0916 11:00:08.023668  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:08 GMT
	I0916 11:00:08.023895  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-079070","namespace":"kube-system","uid":"44a2f17c-654a-4d64-b474-70ce475c3afe","resourceVersion":"751","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.mirror":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.seen":"2024-09-16T10:56:25.844752735Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8306 chars]
	I0916 11:00:08.024500  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:08.024520  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:08.024530  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:08.024536  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:08.027066  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:08.027088  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:08.027097  182212 round_trippers.go:580]     Audit-Id: 7fce6d78-e54c-44be-8e99-6b490d9d0d2d
	I0916 11:00:08.027104  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:08.027108  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:08.027111  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:08.027115  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:08.027118  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:08 GMT
	I0916 11:00:08.027262  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"650","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:08.027663  182212 pod_ready.go:93] pod "kube-controller-manager-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 11:00:08.027691  182212 pod_ready.go:82] duration metric: took 6.474789ms for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:08.027705  182212 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:08.027804  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vhmt
	I0916 11:00:08.027820  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:08.027830  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:08.027834  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:08.029862  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:08.029883  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:08.029892  182212 round_trippers.go:580]     Audit-Id: a0379013-fc7f-43e8-bab7-2fe7bd1d7237
	I0916 11:00:08.029896  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:08.029901  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:08.029922  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:08.029935  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:08.029940  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:08 GMT
	I0916 11:00:08.030117  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2vhmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"6f3faf85-04e9-4840-855d-dd1ef9d4e463","resourceVersion":"664","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6388 chars]
	I0916 11:00:08.127019  182212 request.go:632] Waited for 96.253458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:08.127098  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:08.127106  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:08.127114  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:08.127119  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:08.128956  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:08.128979  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:08.128988  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:08.128994  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:08.129005  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:08.129010  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:08.129015  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:08 GMT
	I0916 11:00:08.129019  182212 round_trippers.go:580]     Audit-Id: 9dcf9514-86cb-4d25-af79-b0b6f98959d8
	I0916 11:00:08.129204  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:08.129620  182212 pod_ready.go:93] pod "kube-proxy-2vhmt" in "kube-system" namespace has status "Ready":"True"
	I0916 11:00:08.129643  182212 pod_ready.go:82] duration metric: took 101.929597ms for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:08.129666  182212 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9z4qh" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:08.327064  182212 request.go:632] Waited for 197.305724ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 11:00:08.327540  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 11:00:08.327595  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:08.327607  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:08.327613  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:08.332324  182212 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0916 11:00:08.332407  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:08.332424  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:08.332432  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:08.332439  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:08 GMT
	I0916 11:00:08.332446  182212 round_trippers.go:580]     Audit-Id: 597e3d6e-ebf3-4ad9-a0a1-7c5751b9204f
	I0916 11:00:08.332451  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:08.332457  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:08.332600  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9z4qh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d","resourceVersion":"882","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6396 chars]
	I0916 11:00:08.526485  182212 request.go:632] Waited for 193.273723ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 11:00:08.526589  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 11:00:08.526605  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:08.526616  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:08.526626  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:08.529227  182212 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 11:00:08.529300  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:08.529330  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:08.529351  182212 round_trippers.go:580]     Content-Length: 210
	I0916 11:00:08.529360  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:08 GMT
	I0916 11:00:08.529370  182212 round_trippers.go:580]     Audit-Id: f13fbb05-aa57-4800-8869-ab5362921777
	I0916 11:00:08.529380  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:08.529389  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:08.529409  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:08.529660  182212 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-079070-m03\" not found","reason":"NotFound","details":{"name":"multinode-079070-m03","kind":"nodes"},"code":404}
	I0916 11:00:08.529893  182212 pod_ready.go:98] node "multinode-079070-m03" hosting pod "kube-proxy-9z4qh" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-079070-m03": nodes "multinode-079070-m03" not found
	I0916 11:00:08.529929  182212 pod_ready.go:82] duration metric: took 400.252111ms for pod "kube-proxy-9z4qh" in "kube-system" namespace to be "Ready" ...
	E0916 11:00:08.529945  182212 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-079070-m03" hosting pod "kube-proxy-9z4qh" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-079070-m03": nodes "multinode-079070-m03" not found
	I0916 11:00:08.529959  182212 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xkr65" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:08.727084  182212 request.go:632] Waited for 197.035847ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 11:00:08.727309  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 11:00:08.727344  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:08.727365  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:08.727385  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:08.730237  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:08.730264  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:08.730273  182212 round_trippers.go:580]     Audit-Id: 962b809a-36c9-4593-bae7-3fe94adadfed
	I0916 11:00:08.730279  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:08.730283  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:08.730321  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:08.730333  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:08.730338  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:08 GMT
	I0916 11:00:08.730487  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"768","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6396 chars]
	I0916 11:00:08.927305  182212 request.go:632] Waited for 196.247179ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 11:00:08.927370  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 11:00:08.927377  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:08.927387  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:08.927398  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:08.931097  182212 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:00:08.931125  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:08.931135  182212 round_trippers.go:580]     Audit-Id: a6759e26-9494-4b8a-a02f-5cb544544583
	I0916 11:00:08.931142  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:08.931147  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:08.931152  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:08.931157  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:08.931161  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:08 GMT
	I0916 11:00:08.931282  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"757","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 5023 chars]
	I0916 11:00:08.931708  182212 pod_ready.go:93] pod "kube-proxy-xkr65" in "kube-system" namespace has status "Ready":"True"
	I0916 11:00:08.931765  182212 pod_ready.go:82] duration metric: took 401.761401ms for pod "kube-proxy-xkr65" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:08.931784  182212 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:09.126497  182212 request.go:632] Waited for 194.633605ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:09.126592  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:09.126604  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:09.126612  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:09.126620  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:09.128663  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:09.128689  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:09.128699  182212 round_trippers.go:580]     Audit-Id: 859a643c-46cb-4e7e-a1fd-4820698dc30f
	I0916 11:00:09.128706  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:09.128711  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:09.128715  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:09.128719  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:09.128726  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:09 GMT
	I0916 11:00:09.128879  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:09.326752  182212 request.go:632] Waited for 197.340819ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:09.326829  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:09.326837  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:09.326850  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:09.326860  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:09.329178  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:09.329203  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:09.329211  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:09.329218  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:09.329224  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:09.329228  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:09.329233  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:09 GMT
	I0916 11:00:09.329238  182212 round_trippers.go:580]     Audit-Id: 2e1051dd-5c7a-4704-acc1-618c47b79098
	I0916 11:00:09.329428  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:09.526902  182212 request.go:632] Waited for 94.239883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:09.526957  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:09.526962  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:09.526969  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:09.526974  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:09.529264  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:09.529282  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:09.529288  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:09.529291  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:09.529294  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:09.529297  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:09.529300  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:09 GMT
	I0916 11:00:09.529303  182212 round_trippers.go:580]     Audit-Id: a63b8296-f709-4001-8f7d-4eda3d5148c1
	I0916 11:00:09.529483  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:09.727209  182212 request.go:632] Waited for 197.315446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:09.727303  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:09.727314  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:09.727326  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:09.727343  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:09.729711  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:09.729734  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:09.729741  182212 round_trippers.go:580]     Audit-Id: b7a1ac35-f4cf-4b6b-8676-086024122c9b
	I0916 11:00:09.729746  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:09.729750  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:09.729757  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:09.729763  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:09.729769  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:09 GMT
	I0916 11:00:09.729890  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:09.932433  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:09.932461  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:09.932472  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:09.932482  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:09.934301  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:09.934326  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:09.934335  182212 round_trippers.go:580]     Audit-Id: 624e0276-e43e-40e8-af4a-ae49f9ad9769
	I0916 11:00:09.934341  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:09.934345  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:09.934350  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:09.934355  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:09.934359  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:09 GMT
	I0916 11:00:09.934515  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:10.127435  182212 request.go:632] Waited for 192.351341ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:10.127517  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:10.127524  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:10.127532  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:10.127539  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:10.129584  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:10.129606  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:10.129616  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:10 GMT
	I0916 11:00:10.129622  182212 round_trippers.go:580]     Audit-Id: 7494789d-1420-4ef4-b873-61bae2a991a9
	I0916 11:00:10.129629  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:10.129634  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:10.129638  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:10.129642  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:10.129819  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:10.432341  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:10.432364  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:10.432371  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:10.432376  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:10.434346  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:10.434366  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:10.434375  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:10.434382  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:10 GMT
	I0916 11:00:10.434388  182212 round_trippers.go:580]     Audit-Id: 27464946-edee-4d4a-87ca-88f93f4209af
	I0916 11:00:10.434391  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:10.434396  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:10.434400  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:10.434630  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:10.527292  182212 request.go:632] Waited for 92.26683ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:10.527356  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:10.527361  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:10.527369  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:10.527379  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:10.529402  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:10.529425  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:10.529434  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:10.529439  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:10.529443  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:10 GMT
	I0916 11:00:10.529447  182212 round_trippers.go:580]     Audit-Id: 4715c22c-87cd-44d1-a587-8b47fb65e435
	I0916 11:00:10.529451  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:10.529456  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:10.529670  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:10.932985  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:10.933008  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:10.933016  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:10.933020  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:10.935289  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:10.935315  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:10.935325  182212 round_trippers.go:580]     Audit-Id: 02a7f1d1-bf22-42ea-a017-f3c79c38b349
	I0916 11:00:10.935331  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:10.935344  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:10.935351  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:10.935356  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:10.935360  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:10 GMT
	I0916 11:00:10.935566  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:10.936041  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:10.936055  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:10.936063  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:10.936068  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:10.938106  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:10.938127  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:10.938135  182212 round_trippers.go:580]     Audit-Id: cb787b6c-def4-4a76-bdf3-53ae2bad3fe5
	I0916 11:00:10.938140  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:10.938144  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:10.938149  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:10.938176  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:10.938185  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:10 GMT
	I0916 11:00:10.938358  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:10.938671  182212 pod_ready.go:103] pod "kube-scheduler-multinode-079070" in "kube-system" namespace has status "Ready":"False"
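
Note: the pod_ready.go:103 line is the verdict of each poll cycle: the Pod object fetched by the preceding GET is inspected for its PodReady condition (each cycle also re-fetches the node). A simplified reconstruction of that check, not minikube's exact pod_ready.go code; the fake pod in main stands in for the object the log fetched:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the PodReady condition in status.conditions
// is True -- the check behind the Ready:"False" verdict above.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Stand-in for the Pod returned by
	// GET /api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println("ready:", isPodReady(pod)) // ready: false
}
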
	I0916 11:00:11.432909  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:11.432930  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:11.432940  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:11.432945  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:11.434891  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:11.434913  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:11.434923  182212 round_trippers.go:580]     Audit-Id: 5f055393-1399-49c4-b6cf-ae3a78f84cfe
	I0916 11:00:11.434928  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:11.434933  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:11.434937  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:11.434942  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:11.434945  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:11 GMT
	I0916 11:00:11.435073  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:11.435563  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:11.435581  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:11.435591  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:11.435597  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:11.437394  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:11.437415  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:11.437424  182212 round_trippers.go:580]     Audit-Id: b847d723-580b-4f54-a4a0-7f41c8aa3332
	I0916 11:00:11.437431  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:11.437476  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:11.437490  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:11.437496  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:11.437498  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:11 GMT
	I0916 11:00:11.437619  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:11.932074  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:11.932101  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:11.932113  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:11.932138  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:11.934515  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:11.934538  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:11.934548  182212 round_trippers.go:580]     Audit-Id: 9d0167a1-1e60-4acd-a168-d09c2bac99d2
	I0916 11:00:11.934561  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:11.934566  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:11.934570  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:11.934574  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:11.934579  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:11 GMT
	I0916 11:00:11.934790  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:11.935283  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:11.935301  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:11.935311  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:11.935316  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:11.937206  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:11.937228  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:11.937237  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:11.937243  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:11.937249  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:11.937256  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:11 GMT
	I0916 11:00:11.937260  182212 round_trippers.go:580]     Audit-Id: d26d6200-d027-4b9b-9fa1-18971288beb9
	I0916 11:00:11.937267  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:11.937382  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:12.433016  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:12.433040  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:12.433048  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:12.433053  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:12.434971  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:12.434992  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:12.435001  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:12 GMT
	I0916 11:00:12.435006  182212 round_trippers.go:580]     Audit-Id: 1155d8ed-717f-4432-9197-3f70f2fb4e0b
	I0916 11:00:12.435010  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:12.435014  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:12.435018  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:12.435021  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:12.435201  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:12.435598  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:12.435612  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:12.435619  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:12.435622  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:12.437262  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:12.437278  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:12.437286  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:12.437291  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:12.437296  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:12.437302  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:12.437306  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:12 GMT
	I0916 11:00:12.437320  182212 round_trippers.go:580]     Audit-Id: 3118ff6e-71a7-4fc7-acff-80a696cec646
	I0916 11:00:12.437429  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:12.932328  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:12.932350  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:12.932360  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:12.932367  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:12.934567  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:12.934595  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:12.934604  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:12.934610  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:12.934616  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:12.934620  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:12 GMT
	I0916 11:00:12.934626  182212 round_trippers.go:580]     Audit-Id: b14b62f4-9f0e-4334-a3b2-2f55267d8f60
	I0916 11:00:12.934630  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:12.934803  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:12.935240  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:12.935255  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:12.935262  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:12.935265  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:12.937551  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:12.937573  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:12.937583  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:12 GMT
	I0916 11:00:12.937590  182212 round_trippers.go:580]     Audit-Id: bdef23bf-c709-4a1d-a108-b9fd7c8b377e
	I0916 11:00:12.937594  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:12.937599  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:12.937602  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:12.937606  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:12.937724  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:13.432337  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:13.432363  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:13.432371  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:13.432375  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:13.434666  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:13.434686  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:13.434693  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:13.434697  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:13 GMT
	I0916 11:00:13.434702  182212 round_trippers.go:580]     Audit-Id: 9eb6f85d-ade2-455d-913d-3cd4003f914d
	I0916 11:00:13.434706  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:13.434710  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:13.434714  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:13.434910  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:13.435308  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:13.435323  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:13.435333  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:13.435338  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:13.437187  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:13.437202  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:13.437209  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:13.437213  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:13.437216  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:13.437221  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:13.437226  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:13 GMT
	I0916 11:00:13.437233  182212 round_trippers.go:580]     Audit-Id: 63f11c4c-f8a8-446d-a595-948115abf219
	I0916 11:00:13.437393  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:13.437728  182212 pod_ready.go:103] pod "kube-scheduler-multinode-079070" in "kube-system" namespace has status "Ready":"False"
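
Note: the timestamps show the loop's cadence: one pod GET plus one node GET roughly every 500ms (requests land at ~.432 and ~.932 each second). A sketch of that loop shape using apimachinery's wait helpers (assumes a recent k8s.io/apimachinery); the 500ms interval matches the log, the 10s timeout and the stand-in condition are assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	ctx := context.Background()
	start := time.Now()
	// Stand-in condition: reports ready after 2s. In the real test the
	// condition is the PodReady check sketched earlier.
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 10*time.Second, true,
		func(ctx context.Context) (bool, error) {
			ready := time.Since(start) > 2*time.Second
			fmt.Printf("ready=%v after %v\n", ready, time.Since(start).Round(time.Millisecond))
			return ready, nil
		})
	if err != nil {
		fmt.Println("timed out:", err)
	}
}
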
	I0916 11:00:13.932977  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:13.932998  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:13.933006  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:13.933009  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:13.935284  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:13.935311  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:13.935320  182212 round_trippers.go:580]     Audit-Id: 847734f4-a306-46e9-a545-3c0729ae34a1
	I0916 11:00:13.935327  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:13.935332  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:13.935337  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:13.935342  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:13.935346  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:13 GMT
	I0916 11:00:13.935488  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:13.936003  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:13.936022  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:13.936031  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:13.936036  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:13.937943  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:13.937960  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:13.937966  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:13.937971  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:13.937976  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:13.937980  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:13.937993  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:13 GMT
	I0916 11:00:13.938001  182212 round_trippers.go:580]     Audit-Id: 58a70c6f-f0a5-4514-873f-b0e38ccc7c5a
	I0916 11:00:13.938135  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:14.432884  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:14.432910  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:14.432921  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:14.432926  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:14.435351  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:14.435370  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:14.435377  182212 round_trippers.go:580]     Audit-Id: 448d22e0-fea6-4cf7-9f1b-73db63e020e0
	I0916 11:00:14.435382  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:14.435386  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:14.435389  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:14.435392  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:14.435394  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:14 GMT
	I0916 11:00:14.435587  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:14.435992  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:14.436006  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:14.436012  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:14.436015  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:14.437698  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:14.437717  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:14.437725  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:14.437730  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:14.437734  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:14.437738  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:14.437743  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:14 GMT
	I0916 11:00:14.437746  182212 round_trippers.go:580]     Audit-Id: aaf53981-a693-4981-8bb9-f8fc5d57ba5b
	I0916 11:00:14.437909  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:14.932776  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:14.932800  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:14.932809  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:14.932815  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:14.935076  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:14.935099  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:14.935105  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:14.935110  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:14 GMT
	I0916 11:00:14.935113  182212 round_trippers.go:580]     Audit-Id: 60b84848-4714-4569-a6ce-81987a8a3bb2
	I0916 11:00:14.935116  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:14.935121  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:14.935133  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:14.935338  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:14.935784  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:14.935796  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:14.935803  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:14.935807  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:14.937813  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:14.937833  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:14.937841  182212 round_trippers.go:580]     Audit-Id: 372ae307-de9a-4f8d-b42e-29b05c22b6a1
	I0916 11:00:14.937845  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:14.937849  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:14.937852  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:14.937855  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:14.937859  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:14 GMT
	I0916 11:00:14.937966  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:15.432411  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:15.432437  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:15.432444  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:15.432454  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:15.434797  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:15.434817  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:15.434823  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:15.434826  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:15.434829  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:15.434832  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:15.434836  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:15 GMT
	I0916 11:00:15.434838  182212 round_trippers.go:580]     Audit-Id: 9a5c7403-8c23-476c-9ddf-a9714b784062
	I0916 11:00:15.435040  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:15.435454  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:15.435468  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:15.435475  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:15.435480  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:15.437277  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:15.437295  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:15.437303  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:15.437308  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:15 GMT
	I0916 11:00:15.437324  182212 round_trippers.go:580]     Audit-Id: 52eb3444-2374-4fd8-8b75-914b1977f384
	I0916 11:00:15.437331  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:15.437338  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:15.437346  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:15.437519  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:15.437941  182212 pod_ready.go:103] pod "kube-scheduler-multinode-079070" in "kube-system" namespace has status "Ready":"False"
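
Note: all of the round_trippers and request.go lines in this excerpt are client-go debug logging that only appears at high klog verbosity; the "[truncated 5432 chars]" markers show bodies being elided below the highest level. A sketch of enabling it programmatically; kubectl's -v=8 is the conventional equivalent, and the level-to-detail mapping in the comment (URLs at 6, headers at 7, truncated bodies at 8) is the usual convention, not a guarantee:

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	fs := flag.NewFlagSet("klog", flag.ExitOnError)
	klog.InitFlags(fs)
	// By convention: v=6 logs request URLs and status, v=7 adds headers,
	// v=8 adds (truncated) response bodies -- the URL + headers +
	// truncated-body pattern seen throughout this log.
	_ = fs.Set("v", "8")
	klog.V(8).Info("round_trippers debug logging enabled")
	klog.Flush()
}
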
	I0916 11:00:15.933002  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:15.933030  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:15.933042  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:15.933046  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:15.935413  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:15.935437  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:15.935445  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:15.935452  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:15.935455  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:15.935463  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:15.935468  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:15 GMT
	I0916 11:00:15.935472  182212 round_trippers.go:580]     Audit-Id: 1becdbfe-8f2f-49bb-9909-43e26d392498
	I0916 11:00:15.935624  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:15.936101  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:15.936116  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:15.936123  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:15.936128  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:15.938031  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:15.938046  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:15.938052  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:15.938057  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:15 GMT
	I0916 11:00:15.938060  182212 round_trippers.go:580]     Audit-Id: 97195549-6703-45a0-ae5f-c8d09c9b6f0e
	I0916 11:00:15.938063  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:15.938066  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:15.938070  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:15.938228  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:16.432984  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:16.433011  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:16.433024  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:16.433032  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:16.435295  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:16.435315  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:16.435322  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:16.435327  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:16.435330  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:16.435336  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:16 GMT
	I0916 11:00:16.435340  182212 round_trippers.go:580]     Audit-Id: f65248a0-0645-45a1-98c3-b6b847e62ecf
	I0916 11:00:16.435343  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:16.435497  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:16.435984  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:16.435998  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:16.436008  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:16.436019  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:16.437708  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:16.437721  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:16.437727  182212 round_trippers.go:580]     Audit-Id: c08580db-8fe6-4079-bce6-ee5a6d1e19a2
	I0916 11:00:16.437731  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:16.437734  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:16.437737  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:16.437740  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:16.437743  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:16 GMT
	I0916 11:00:16.437906  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:16.932610  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:16.932634  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:16.932642  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:16.932646  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:16.935250  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:16.935274  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:16.935281  182212 round_trippers.go:580]     Audit-Id: 726579f1-b6cc-438f-8c2f-43c6d54f5b67
	I0916 11:00:16.935285  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:16.935288  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:16.935298  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:16.935303  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:16.935309  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:16 GMT
	I0916 11:00:16.935507  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:16.936037  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:16.936051  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:16.936058  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:16.936063  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:16.937980  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:16.937999  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:16.938007  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:16.938012  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:16 GMT
	I0916 11:00:16.938015  182212 round_trippers.go:580]     Audit-Id: 2cded6ec-46e8-48f5-bdd0-2cd8f89f6e17
	I0916 11:00:16.938021  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:16.938026  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:16.938030  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:16.938183  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:17.432587  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:17.432609  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:17.432617  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:17.432621  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:17.434696  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:17.434722  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:17.434731  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:17 GMT
	I0916 11:00:17.434737  182212 round_trippers.go:580]     Audit-Id: 22cbf2a5-0500-45d9-970a-1461c6aff2ad
	I0916 11:00:17.434742  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:17.434747  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:17.434751  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:17.434756  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:17.434887  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:17.435269  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:17.435280  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:17.435287  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:17.435291  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:17.437001  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:17.437017  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:17.437023  182212 round_trippers.go:580]     Audit-Id: 6afdf834-433e-4dcb-8703-a3adcbe62ce2
	I0916 11:00:17.437028  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:17.437031  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:17.437035  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:17.437038  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:17.437040  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:17 GMT
	I0916 11:00:17.437201  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:17.932484  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:17.932504  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:17.932512  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:17.932519  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:17.934518  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:17.934536  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:17.934555  182212 round_trippers.go:580]     Audit-Id: 742610e4-ec63-40ea-a3e8-8eb37f25aec9
	I0916 11:00:17.934561  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:17.934566  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:17.934571  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:17.934576  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:17.934580  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:17 GMT
	I0916 11:00:17.934721  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:17.935196  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:17.935212  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:17.935223  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:17.935227  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:17.937113  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:17.937128  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:17.937134  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:17.937138  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:17.937141  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:17.937144  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:17.937147  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:17 GMT
	I0916 11:00:17.937150  182212 round_trippers.go:580]     Audit-Id: 4beaddd9-8517-43e6-b0c3-7824f3c4c85a
	I0916 11:00:17.937292  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:17.937584  182212 pod_ready.go:103] pod "kube-scheduler-multinode-079070" in "kube-system" namespace has status "Ready":"False"
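The pod_ready.go:103 line above closes one polling iteration: minikube re-fetches the kube-scheduler mirror pod roughly every 500 ms and reports that its Ready condition is still False. Below is a minimal sketch of an equivalent readiness poll using client-go; the kubeconfig handling and helper names are illustrative assumptions, not minikube's actual pod_ready implementation.

```go
// Minimal sketch of the readiness poll seen in the log above: fetch the
// kube-scheduler pod every 500ms and stop once its Ready condition is True.
// Pod and namespace names mirror the log; everything else is illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for range ticker.C {
		pod, err := client.CoreV1().Pods("kube-system").
			Get(context.TODO(), "kube-scheduler-multinode-079070", metav1.GetOptions{})
		if err != nil {
			continue // transient API error: retry on the next tick
		}
		if isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
	}
}
```

The companion GET on /api/v1/nodes/multinode-079070 in each logged iteration would correspond to an analogous check on the node object; it is omitted from the sketch for brevity.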
	I0916 11:00:18.432953  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:18.432977  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:18.432986  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:18.432991  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:18.435265  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:18.435290  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:18.435300  182212 round_trippers.go:580]     Audit-Id: 8c569cb3-dbce-4235-8eab-ace24e70035b
	I0916 11:00:18.435306  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:18.435310  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:18.435318  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:18.435322  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:18.435327  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:18 GMT
	I0916 11:00:18.435480  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:18.435903  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:18.435916  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:18.435924  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:18.435931  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:18.437828  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:18.437841  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:18.437848  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:18 GMT
	I0916 11:00:18.437854  182212 round_trippers.go:580]     Audit-Id: 5d7800f0-551a-465e-b706-767de16d76de
	I0916 11:00:18.437858  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:18.437861  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:18.437865  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:18.437869  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:18.438076  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:18.932734  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:18.932757  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:18.932765  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:18.932770  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:18.935352  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:18.935377  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:18.935387  182212 round_trippers.go:580]     Audit-Id: e8ed4a4b-0e39-4532-ab59-df22273e8a83
	I0916 11:00:18.935394  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:18.935401  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:18.935406  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:18.935411  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:18.935416  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:18 GMT
	I0916 11:00:18.935573  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:18.936043  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:18.936059  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:18.936066  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:18.936071  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:18.937908  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:18.937929  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:18.937938  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:18.937945  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:18.937948  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:18.937952  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:18 GMT
	I0916 11:00:18.937956  182212 round_trippers.go:580]     Audit-Id: 9d5a8961-b314-4fb2-8c90-d7c84783fdc5
	I0916 11:00:18.937961  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:18.938082  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:19.432811  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:19.432834  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:19.432841  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:19.432845  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:19.435262  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:19.435283  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:19.435290  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:19.435294  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:19.435297  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:19.435300  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:19 GMT
	I0916 11:00:19.435304  182212 round_trippers.go:580]     Audit-Id: 51e2df3a-df9c-4173-b104-ad215d8fdff5
	I0916 11:00:19.435309  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:19.435508  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:19.435897  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:19.435911  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:19.435918  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:19.435922  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:19.437620  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:19.437639  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:19.437648  182212 round_trippers.go:580]     Audit-Id: 9d885fef-d49f-4cb9-a836-9ede33ea8b73
	I0916 11:00:19.437653  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:19.437658  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:19.437662  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:19.437666  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:19.437669  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:19 GMT
	I0916 11:00:19.437848  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:19.932543  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:19.932566  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:19.932575  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:19.932579  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:19.934978  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:19.935000  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:19.935008  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:19.935013  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:19 GMT
	I0916 11:00:19.935017  182212 round_trippers.go:580]     Audit-Id: 76bcc09a-24f9-44d2-998f-8a11da4d1a83
	I0916 11:00:19.935021  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:19.935025  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:19.935029  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:19.935161  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:19.935581  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:19.935594  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:19.935604  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:19.935610  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:19.937577  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:19.937595  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:19.937604  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:19.937609  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:19.937613  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:19.937618  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:19 GMT
	I0916 11:00:19.937623  182212 round_trippers.go:580]     Audit-Id: 7243da99-d893-47df-8568-c318aa1dd436
	I0916 11:00:19.937626  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:19.937780  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:19.938079  182212 pod_ready.go:103] pod "kube-scheduler-multinode-079070" in "kube-system" namespace has status "Ready":"False"
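Each iteration in this log is emitted by client-go's debugging transport (the round_trippers.go lines), which prints the request method, URL, and headers, then the response status, latency, and headers, when the binary runs at high log verbosity. The following is a simplified stand-in for that pattern using only the standard library, not client-go's actual transport wrapper.

```go
// Sketch of the request/response logging behind the round_trippers.go lines:
// wrap an http.RoundTripper so every request logs its method, URL, headers,
// status, and latency. Simplified illustration, not client-go's real code.
package main

import (
	"log"
	"net/http"
	"time"
)

type loggingRoundTripper struct {
	next http.RoundTripper
}

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Printf("Request Headers:")
	for k, v := range req.Header {
		log.Printf("    %s: %v", k, v)
	}
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds",
		resp.Status, time.Since(start).Milliseconds())
	log.Printf("Response Headers:")
	for k, v := range resp.Header {
		log.Printf("    %s: %v", k, v)
	}
	return resp, nil
}

func main() {
	client := &http.Client{
		Transport: loggingRoundTripper{next: http.DefaultTransport},
	}
	if _, err := client.Get("https://example.com/"); err != nil {
		log.Fatal(err)
	}
}
```

Wiring such a transport into an http.Client reproduces the GET / Request Headers / Response Status / Response Headers structure seen throughout this section.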
	I0916 11:00:20.432976  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:20.432998  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:20.433008  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:20.433014  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:20.435277  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:20.435311  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:20.435320  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:20.435327  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:20.435333  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:20 GMT
	I0916 11:00:20.435337  182212 round_trippers.go:580]     Audit-Id: f352f850-8de1-48b4-a4ec-b92762ebf783
	I0916 11:00:20.435344  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:20.435349  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:20.435534  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:20.435994  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:20.436009  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:20.436016  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:20.436020  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:20.437916  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:20.437932  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:20.437938  182212 round_trippers.go:580]     Audit-Id: ab3e736c-d172-49c0-b4c9-f59728c426a0
	I0916 11:00:20.437941  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:20.437945  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:20.437947  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:20.437950  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:20.437952  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:20 GMT
	I0916 11:00:20.438150  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:20.932799  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:20.932823  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:20.932831  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:20.932835  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:20.935020  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:20.935045  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:20.935053  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:20.935058  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:20.935062  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:20 GMT
	I0916 11:00:20.935066  182212 round_trippers.go:580]     Audit-Id: 5a45b76e-4bb5-4c04-b976-3907ff185884
	I0916 11:00:20.935070  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:20.935078  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:20.935190  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:20.935668  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:20.935685  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:20.935696  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:20.935700  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:20.937855  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:20.937873  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:20.937879  182212 round_trippers.go:580]     Audit-Id: 0334ddea-f1bb-42fd-9a13-0a535a49e4c3
	I0916 11:00:20.937883  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:20.937888  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:20.937891  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:20.937893  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:20.937896  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:20 GMT
	I0916 11:00:20.938059  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:21.432779  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:21.432802  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:21.432813  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:21.432818  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:21.435124  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:21.435143  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:21.435149  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:21.435153  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:21 GMT
	I0916 11:00:21.435156  182212 round_trippers.go:580]     Audit-Id: fae63395-dabb-43cc-87c1-4d8c6eaa8b7a
	I0916 11:00:21.435159  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:21.435162  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:21.435165  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:21.435332  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:21.435825  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:21.435840  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:21.435847  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:21.435852  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:21.437599  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:21.437614  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:21.437620  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:21.437624  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:21.437627  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:21 GMT
	I0916 11:00:21.437629  182212 round_trippers.go:580]     Audit-Id: 20cd8c38-fa23-44c5-8ab7-42d315228732
	I0916 11:00:21.437632  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:21.437636  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:21.437841  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:21.932623  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:21.932655  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:21.932666  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:21.932674  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:21.934935  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:21.934957  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:21.934964  182212 round_trippers.go:580]     Audit-Id: 66d28277-2f9d-4d82-9177-600148cbb12b
	I0916 11:00:21.934968  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:21.934973  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:21.934976  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:21.934980  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:21.934988  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:21 GMT
	I0916 11:00:21.935193  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:21.935795  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:21.935813  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:21.935822  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:21.935827  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:21.937725  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:21.937740  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:21.937746  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:21.937751  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:21.937755  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:21.937758  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:21.937762  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:21 GMT
	I0916 11:00:21.937767  182212 round_trippers.go:580]     Audit-Id: 017152ef-8c82-4321-bd6e-8747376e8549
	I0916 11:00:21.937924  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:21.938222  182212 pod_ready.go:103] pod "kube-scheduler-multinode-079070" in "kube-system" namespace has status "Ready":"False"
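The X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid headers repeated in every response come from the API server's Priority and Fairness feature: they identify the FlowSchema and PriorityLevelConfiguration that classified each request. A short sketch of reading them off a response for correlation with server-side APF configuration follows; the stand-in response object is a hypothetical example.

```go
// Extract the API Priority and Fairness headers that the kube-apiserver
// attaches to each response, as seen throughout this log. The constructed
// response below is a stand-in; in practice it comes from a live API call.
package main

import (
	"fmt"
	"net/http"
)

// apfInfo returns the UIDs of the FlowSchema and PriorityLevelConfiguration
// that classified the request, as reported by the API server.
func apfInfo(resp *http.Response) (flowSchemaUID, priorityLevelUID string) {
	return resp.Header.Get("X-Kubernetes-Pf-Flowschema-Uid"),
		resp.Header.Get("X-Kubernetes-Pf-Prioritylevel-Uid")
}

func main() {
	resp := &http.Response{Header: http.Header{}}
	resp.Header.Set("X-Kubernetes-Pf-Flowschema-Uid", "07bf4e99-cdde-428e-bd17-5235eb59497f")
	resp.Header.Set("X-Kubernetes-Pf-Prioritylevel-Uid", "0c987133-af33-4e26-8249-1f3d60a87a75")

	fs, pl := apfInfo(resp)
	fmt.Printf("FlowSchema UID: %s\nPriorityLevel UID: %s\n", fs, pl)
}
```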
	I0916 11:00:22.432680  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:22.432705  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:22.432715  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:22.432721  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:22.435059  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:22.435106  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:22.435119  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:22.435127  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:22 GMT
	I0916 11:00:22.435133  182212 round_trippers.go:580]     Audit-Id: 82db98f3-667e-40e7-acd2-cbffb19bc3f5
	I0916 11:00:22.435139  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:22.435144  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:22.435165  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:22.435341  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"915","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 5432 chars]
	I0916 11:00:22.435808  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:22.435826  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:22.435836  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:22.435844  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:22.437874  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:22.437893  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:22.437901  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:22.437906  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:22.437909  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:22.437914  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:22 GMT
	I0916 11:00:22.437919  182212 round_trippers.go:580]     Audit-Id: 2a0ae4d0-7ef5-4298-b5ce-c5a799fa54bc
	I0916 11:00:22.437923  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:22.438131  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:22.932907  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:22.932930  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:22.932938  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:22.932943  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:22.935179  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:22.935206  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:22.935216  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:22.935222  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:22.935227  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:22 GMT
	I0916 11:00:22.935233  182212 round_trippers.go:580]     Audit-Id: 9507aebf-4632-4270-b9cc-887d3c34272d
	I0916 11:00:22.935237  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:22.935241  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:22.935403  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"1007","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5189 chars]
	I0916 11:00:22.935846  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:22.935860  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:22.935867  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:22.935873  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:22.937914  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:22.937933  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:22.937943  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:22.937948  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:22.937954  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:22 GMT
	I0916 11:00:22.937958  182212 round_trippers.go:580]     Audit-Id: da1a569f-4c9b-4df0-b741-5352d5a40615
	I0916 11:00:22.937962  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:22.937968  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:22.938114  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:22.938408  182212 pod_ready.go:93] pod "kube-scheduler-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 11:00:22.938422  182212 pod_ready.go:82] duration metric: took 14.006631225s for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:22.938434  182212 pod_ready.go:39] duration metric: took 15.012189372s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:00:22.938453  182212 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:00:22.938506  182212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:00:22.949650  182212 command_runner.go:130] > 1234
	I0916 11:00:22.949696  182212 api_server.go:72] duration metric: took 18.292026896s to wait for apiserver process to appear ...
	I0916 11:00:22.949709  182212 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:00:22.949735  182212 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0916 11:00:22.953270  182212 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0916 11:00:22.953362  182212 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0916 11:00:22.953374  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:22.953385  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:22.953391  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:22.954448  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:22.954467  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:22.954475  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:22.954480  182212 round_trippers.go:580]     Content-Length: 263
	I0916 11:00:22.954485  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:22 GMT
	I0916 11:00:22.954489  182212 round_trippers.go:580]     Audit-Id: 809f07d7-24b9-4a74-8e14-b857aa0b7fdd
	I0916 11:00:22.954493  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:22.954497  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:22.954504  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:22.954528  182212 request.go:1351] Response Body: {
	  "major": "1",
	  "minor": "31",
	  "gitVersion": "v1.31.1",
	  "gitCommit": "948afe5ca072329a73c8e79ed5938717a5cb3d21",
	  "gitTreeState": "clean",
	  "buildDate": "2024-09-11T21:22:08Z",
	  "goVersion": "go1.22.6",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0916 11:00:22.954623  182212 api_server.go:141] control plane version: v1.31.1
	I0916 11:00:22.954644  182212 api_server.go:131] duration metric: took 4.92743ms to wait for apiserver health ...
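
The healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 and the literal body "ok". A short sketch of the same probe; a real client would load minikube's CA into the TLS config instead of skipping verification:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "log"
        "net/http"
        "time"
    )

    func main() {
        // The apiserver serves a cluster-internal cert, so a real client
        // trusts minikube's CA; InsecureSkipVerify keeps the sketch short.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.67.2:8443/healthz")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // A healthy apiserver answers 200 with the body "ok".
        fmt.Printf("%d %s\n", resp.StatusCode, body)
    }
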
	I0916 11:00:22.954654  182212 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:00:22.954731  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 11:00:22.954740  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:22.954749  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:22.954758  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:22.957577  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:22.957600  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:22.957609  182212 round_trippers.go:580]     Audit-Id: 75f40f40-9208-4348-92f7-b2a2ce18186d
	I0916 11:00:22.957615  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:22.957620  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:22.957623  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:22.957629  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:22.957636  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:22 GMT
	I0916 11:00:22.958192  182212 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1007"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 90963 chars]
	I0916 11:00:22.960992  182212 system_pods.go:59] 12 kube-system pods found
	I0916 11:00:22.961029  182212 system_pods.go:61] "coredns-7c65d6cfc9-ft9gh" [8052b6a1-7257-44d4-a318-740afd039d2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 11:00:22.961035  182212 system_pods.go:61] "etcd-multinode-079070" [fdcbee48-0d3b-47d7-acd2-298979345c7a] Running
	I0916 11:00:22.961039  182212 system_pods.go:61] "kindnet-flmdv" [91449e63-0ca3-4dc6-92ef-e3c5ab102dae] Running
	I0916 11:00:22.961043  182212 system_pods.go:61] "kindnet-fs5x4" [3c4eb83d-3eba-427a-ac72-d8967f67abc1] Running
	I0916 11:00:22.961046  182212 system_pods.go:61] "kindnet-kxnzq" [bdf63c4c-0d22-4d74-b604-df3131d86f07] Running
	I0916 11:00:22.961050  182212 system_pods.go:61] "kube-apiserver-multinode-079070" [72784d35-4a94-476c-a4e9-dc3c6c3a8c46] Running
	I0916 11:00:22.961054  182212 system_pods.go:61] "kube-controller-manager-multinode-079070" [44a2f17c-654a-4d64-b474-70ce475c3afe] Running
	I0916 11:00:22.961057  182212 system_pods.go:61] "kube-proxy-2vhmt" [6f3faf85-04e9-4840-855d-dd1ef9d4e463] Running
	I0916 11:00:22.961060  182212 system_pods.go:61] "kube-proxy-9z4qh" [7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d] Running
	I0916 11:00:22.961063  182212 system_pods.go:61] "kube-proxy-xkr65" [b8d1009a-f71f-4cb1-a2f0-510a2894874f] Running
	I0916 11:00:22.961067  182212 system_pods.go:61] "kube-scheduler-multinode-079070" [cc5dd2a3-a136-42b4-a6f9-11c733cb38c4] Running
	I0916 11:00:22.961070  182212 system_pods.go:61] "storage-provisioner" [43862f2e-c773-468d-ab03-8b0bc0633ad4] Running
	I0916 11:00:22.961076  182212 system_pods.go:74] duration metric: took 6.416192ms to wait for pod list to return data ...
	I0916 11:00:22.961085  182212 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:00:22.961159  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0916 11:00:22.961166  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:22.961174  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:22.961177  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:22.963819  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:22.963847  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:22.963858  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:22.963862  182212 round_trippers.go:580]     Content-Length: 262
	I0916 11:00:22.963865  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:22 GMT
	I0916 11:00:22.963868  182212 round_trippers.go:580]     Audit-Id: 8b36299b-ad22-4809-a324-ce177fa2412d
	I0916 11:00:22.963871  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:22.963875  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:22.963878  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:22.963897  182212 request.go:1351] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1007"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"4622bf83-82d0-4a2c-a46c-d6dbfa5ce9ea","resourceVersion":"300","creationTimestamp":"2024-09-16T10:56:30Z"}}]}
	I0916 11:00:22.964066  182212 default_sa.go:45] found service account: "default"
	I0916 11:00:22.964080  182212 default_sa.go:55] duration metric: took 2.98981ms for default service account to be created ...
	I0916 11:00:22.964087  182212 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:00:22.964139  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 11:00:22.964146  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:22.964152  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:22.964156  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:22.966638  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:22.966655  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:22.966666  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:22.966671  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:22.966675  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:22.966681  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:22.966684  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:22 GMT
	I0916 11:00:22.966688  182212 round_trippers.go:580]     Audit-Id: bae867b0-59ae-4abf-8216-601d4d5b23d8
	I0916 11:00:22.967370  182212 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1007"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 90963 chars]
	I0916 11:00:22.971200  182212 system_pods.go:86] 12 kube-system pods found
	I0916 11:00:22.971233  182212 system_pods.go:89] "coredns-7c65d6cfc9-ft9gh" [8052b6a1-7257-44d4-a318-740afd039d2c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0916 11:00:22.971240  182212 system_pods.go:89] "etcd-multinode-079070" [fdcbee48-0d3b-47d7-acd2-298979345c7a] Running
	I0916 11:00:22.971245  182212 system_pods.go:89] "kindnet-flmdv" [91449e63-0ca3-4dc6-92ef-e3c5ab102dae] Running
	I0916 11:00:22.971249  182212 system_pods.go:89] "kindnet-fs5x4" [3c4eb83d-3eba-427a-ac72-d8967f67abc1] Running
	I0916 11:00:22.971252  182212 system_pods.go:89] "kindnet-kxnzq" [bdf63c4c-0d22-4d74-b604-df3131d86f07] Running
	I0916 11:00:22.971257  182212 system_pods.go:89] "kube-apiserver-multinode-079070" [72784d35-4a94-476c-a4e9-dc3c6c3a8c46] Running
	I0916 11:00:22.971261  182212 system_pods.go:89] "kube-controller-manager-multinode-079070" [44a2f17c-654a-4d64-b474-70ce475c3afe] Running
	I0916 11:00:22.971265  182212 system_pods.go:89] "kube-proxy-2vhmt" [6f3faf85-04e9-4840-855d-dd1ef9d4e463] Running
	I0916 11:00:22.971270  182212 system_pods.go:89] "kube-proxy-9z4qh" [7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d] Running
	I0916 11:00:22.971273  182212 system_pods.go:89] "kube-proxy-xkr65" [b8d1009a-f71f-4cb1-a2f0-510a2894874f] Running
	I0916 11:00:22.971277  182212 system_pods.go:89] "kube-scheduler-multinode-079070" [cc5dd2a3-a136-42b4-a6f9-11c733cb38c4] Running
	I0916 11:00:22.971282  182212 system_pods.go:89] "storage-provisioner" [43862f2e-c773-468d-ab03-8b0bc0633ad4] Running
	I0916 11:00:22.971293  182212 system_pods.go:126] duration metric: took 7.198873ms to wait for k8s-apps to be running ...
	I0916 11:00:22.971305  182212 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:00:22.971355  182212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:00:22.982886  182212 system_svc.go:56] duration metric: took 11.570228ms WaitForService to wait for kubelet
	I0916 11:00:22.982920  182212 kubeadm.go:582] duration metric: took 18.325249665s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:00:22.982937  182212 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:00:22.983019  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 11:00:22.983028  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:22.983037  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:22.983044  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:22.985655  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:22.985676  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:22.985682  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:22 GMT
	I0916 11:00:22.985686  182212 round_trippers.go:580]     Audit-Id: 561e12e3-17e3-48c4-be82-24e892cb465f
	I0916 11:00:22.985689  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:22.985692  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:22.985695  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:22.985698  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:22.985940  182212 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1007"},"items":[{"metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 11304 chars]
	I0916 11:00:22.986460  182212 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:00:22.986486  182212 node_conditions.go:123] node cpu capacity is 8
	I0916 11:00:22.986498  182212 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:00:22.986503  182212 node_conditions.go:123] node cpu capacity is 8
	I0916 11:00:22.986511  182212 node_conditions.go:105] duration metric: took 3.568907ms to run NodePressure ...
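
The NodePressure step reads each node's capacity fields out of the NodeList just fetched. A sketch of that readout, assuming a clientset built as in the earlier pod-readiness sketch:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            // The two fields the node_conditions lines log for every node.
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
    }
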
	I0916 11:00:22.986521  182212 start.go:241] waiting for startup goroutines ...
	I0916 11:00:22.986532  182212 start.go:246] waiting for cluster config update ...
	I0916 11:00:22.986539  182212 start.go:255] writing updated cluster config ...
	I0916 11:00:22.988650  182212 out.go:201] 
	I0916 11:00:22.990375  182212 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:00:22.990492  182212 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 11:00:22.992141  182212 out.go:177] * Starting "multinode-079070-m02" worker node in "multinode-079070" cluster
	I0916 11:00:22.993651  182212 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:00:22.995216  182212 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:00:22.996536  182212 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:00:22.996578  182212 cache.go:56] Caching tarball of preloaded images
	I0916 11:00:22.996632  182212 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:00:22.996699  182212 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 11:00:22.996714  182212 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 11:00:22.996854  182212 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	W0916 11:00:23.017654  182212 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:00:23.017674  182212 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:00:23.017770  182212 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:00:23.017789  182212 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:00:23.017797  182212 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:00:23.017805  182212 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:00:23.017812  182212 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:00:23.018883  182212 image.go:273] response: 
	I0916 11:00:23.073039  182212 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:00:23.073079  182212 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:00:23.073112  182212 start.go:360] acquireMachinesLock for multinode-079070-m02: {Name:mk1713c8fba020df744918162d1a483c7b41a015 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:00:23.073186  182212 start.go:364] duration metric: took 53.246µs to acquireMachinesLock for "multinode-079070-m02"
	I0916 11:00:23.073210  182212 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:00:23.073220  182212 fix.go:54] fixHost starting: m02
	I0916 11:00:23.073464  182212 cli_runner.go:164] Run: docker container inspect multinode-079070-m02 --format={{.State.Status}}
	I0916 11:00:23.091344  182212 fix.go:112] recreateIfNeeded on multinode-079070-m02: state=Stopped err=<nil>
	W0916 11:00:23.091378  182212 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:00:23.093361  182212 out.go:177] * Restarting existing docker container for "multinode-079070-m02" ...
	I0916 11:00:23.094511  182212 cli_runner.go:164] Run: docker start multinode-079070-m02
	I0916 11:00:23.378228  182212 cli_runner.go:164] Run: docker container inspect multinode-079070-m02 --format={{.State.Status}}
	I0916 11:00:23.396800  182212 kic.go:430] container "multinode-079070-m02" state is running.
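
fixHost brings the stopped m02 machine back by shelling out to the docker CLI, as the cli_runner lines show: a `docker start` followed by an inspect of `.State.Status`. A sketch of the same sequence with os/exec:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // restartContainer mirrors the cli_runner calls above: `docker start`,
    // then `docker container inspect --format={{.State.Status}}`.
    func restartContainer(name string) (string, error) {
        if out, err := exec.Command("docker", "start", name).CombinedOutput(); err != nil {
            return "", fmt.Errorf("docker start: %v: %s", err, out)
        }
        out, err := exec.Command("docker", "container", "inspect",
            "--format", "{{.State.Status}}", name).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := restartContainer("multinode-079070-m02")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("state:", state) // expect "running"
    }
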
	I0916 11:00:23.397221  182212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m02
	I0916 11:00:23.416295  182212 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/config.json ...
	I0916 11:00:23.416567  182212 machine.go:93] provisionDockerMachine start ...
	I0916 11:00:23.416632  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 11:00:23.435255  182212 main.go:141] libmachine: Using SSH client type: native
	I0916 11:00:23.435433  182212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32948 <nil> <nil>}
	I0916 11:00:23.435444  182212 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:00:23.436137  182212 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34372->127.0.0.1:32948: read: connection reset by peer
	I0916 11:00:26.567205  182212 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070-m02
	
	I0916 11:00:26.567231  182212 ubuntu.go:169] provisioning hostname "multinode-079070-m02"
	I0916 11:00:26.567294  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 11:00:26.584289  182212 main.go:141] libmachine: Using SSH client type: native
	I0916 11:00:26.584494  182212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32948 <nil> <nil>}
	I0916 11:00:26.584512  182212 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-079070-m02 && echo "multinode-079070-m02" | sudo tee /etc/hostname
	I0916 11:00:26.726686  182212 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-079070-m02
	
	I0916 11:00:26.726760  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 11:00:26.743777  182212 main.go:141] libmachine: Using SSH client type: native
	I0916 11:00:26.743978  182212 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 32948 <nil> <nil>}
	I0916 11:00:26.744002  182212 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-079070-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-079070-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-079070-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:00:26.876012  182212 main.go:141] libmachine: SSH cmd err, output: <nil>: 
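
Each of these hostname and /etc/hosts commands runs over SSH to the container's forwarded port 32948. A sketch of one such remote command with golang.org/x/crypto/ssh; the user and key path are taken from the sshutil lines in this log, and host-key pinning is omitted for brevity:

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runRemote executes one command over SSH, the way the ssh_runner lines
    // above push hostname and /etc/hosts changes to the restarted node.
    func runRemote(addr, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Acceptable for a throwaway local node; real code pins the host key.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runRemote("127.0.0.1:32948",
            "/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa",
            "hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(out)
    }
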
	I0916 11:00:26.876042  182212 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:00:26.876064  182212 ubuntu.go:177] setting up certificates
	I0916 11:00:26.876076  182212 provision.go:84] configureAuth start
	I0916 11:00:26.876129  182212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m02
	I0916 11:00:26.893509  182212 provision.go:143] copyHostCerts
	I0916 11:00:26.893545  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:00:26.893585  182212 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:00:26.893597  182212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:00:26.893676  182212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:00:26.893769  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:00:26.893794  182212 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:00:26.893801  182212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:00:26.893841  182212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:00:26.893901  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:00:26.893929  182212 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:00:26.893938  182212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:00:26.893975  182212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:00:26.894043  182212 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.multinode-079070-m02 san=[127.0.0.1 192.168.67.3 localhost minikube multinode-079070-m02]
	I0916 11:00:27.014143  182212 provision.go:177] copyRemoteCerts
	I0916 11:00:27.014203  182212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:00:27.014240  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 11:00:27.031620  182212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 11:00:27.124839  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0916 11:00:27.124894  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:00:27.146763  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0916 11:00:27.146817  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0916 11:00:27.168774  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0916 11:00:27.168833  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:00:27.190988  182212 provision.go:87] duration metric: took 314.8991ms to configureAuth
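
configureAuth generates a server certificate whose SAN list is logged above (127.0.0.1, 192.168.67.3, localhost, minikube, multinode-079070-m02). A self-signed crypto/x509 sketch with the same SANs; minikube actually signs server.pem with the CA under .minikube/certs rather than self-signing, but the SAN handling is the same idea:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-079070-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the provision.go line above.
            DNSNames:    []string{"localhost", "minikube", "multinode-079070-m02"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.3")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
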
	I0916 11:00:27.191021  182212 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:00:27.191221  182212 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:00:27.191233  182212 machine.go:96] duration metric: took 3.774652517s to provisionDockerMachine
	I0916 11:00:27.191240  182212 start.go:293] postStartSetup for "multinode-079070-m02" (driver="docker")
	I0916 11:00:27.191249  182212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:00:27.191292  182212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:00:27.191333  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 11:00:27.208449  182212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 11:00:27.305001  182212 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:00:27.307966  182212 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0916 11:00:27.307983  182212 command_runner.go:130] > NAME="Ubuntu"
	I0916 11:00:27.307988  182212 command_runner.go:130] > VERSION_ID="22.04"
	I0916 11:00:27.307994  182212 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0916 11:00:27.307999  182212 command_runner.go:130] > VERSION_CODENAME=jammy
	I0916 11:00:27.308003  182212 command_runner.go:130] > ID=ubuntu
	I0916 11:00:27.308006  182212 command_runner.go:130] > ID_LIKE=debian
	I0916 11:00:27.308013  182212 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0916 11:00:27.308017  182212 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0916 11:00:27.308025  182212 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0916 11:00:27.308033  182212 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0916 11:00:27.308039  182212 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0916 11:00:27.308079  182212 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:00:27.308103  182212 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:00:27.308113  182212 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:00:27.308119  182212 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:00:27.308129  182212 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:00:27.308179  182212 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:00:27.308258  182212 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:00:27.308267  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /etc/ssl/certs/111892.pem
	I0916 11:00:27.308346  182212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:00:27.316271  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:00:27.337779  182212 start.go:296] duration metric: took 146.523287ms for postStartSetup
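
The ssh_runner scp lines above stream local PEM files to root-owned paths on the node. One way to reproduce that without a separate scp binary is to pipe the bytes into `sudo tee` over an SSH session; this fragment assumes an *ssh.Client built as in the SSH sketch earlier, and passes remotePath to the shell unquoted, so it only suits trusted paths:

    package provision

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // copyToRemote streams data to remotePath on the node over an existing
    // SSH connection, roughly what the scp steps in the log accomplish.
    func copyToRemote(client *ssh.Client, data []byte, remotePath string) error {
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        session.Stdin = bytes.NewReader(data)
        // tee(1) runs under sudo so the file can land in /etc/ssl/certs.
        return session.Run("sudo tee " + remotePath + " >/dev/null")
    }
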
	I0916 11:00:27.337850  182212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:00:27.337886  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 11:00:27.354534  182212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 11:00:27.448134  182212 command_runner.go:130] > 31%
	I0916 11:00:27.448431  182212 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:00:27.452339  182212 command_runner.go:130] > 202G
	I0916 11:00:27.452492  182212 fix.go:56] duration metric: took 4.379269534s for fixHost
	I0916 11:00:27.452510  182212 start.go:83] releasing machines lock for "multinode-079070-m02", held for 4.379312607s
	I0916 11:00:27.452587  182212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m02
	I0916 11:00:27.473304  182212 out.go:177] * Found network options:
	I0916 11:00:27.475128  182212 out.go:177]   - NO_PROXY=192.168.67.2
	W0916 11:00:27.476578  182212 proxy.go:119] fail to check proxy env: Error ip not in block
	W0916 11:00:27.476618  182212 proxy.go:119] fail to check proxy env: Error ip not in block
	I0916 11:00:27.476709  182212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:00:27.476746  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 11:00:27.476777  182212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:00:27.476837  182212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 11:00:27.494951  182212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 11:00:27.495017  182212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32948 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 11:00:27.668847  182212 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0916 11:00:27.671191  182212 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0916 11:00:27.671214  182212 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0916 11:00:27.671224  182212 command_runner.go:130] > Device: 100006h/1048582d	Inode: 821280      Links: 1
	I0916 11:00:27.671234  182212 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:00:27.671244  182212 command_runner.go:130] > Access: 2024-09-16 11:00:23.839742373 +0000
	I0916 11:00:27.671253  182212 command_runner.go:130] > Modify: 2024-09-16 10:58:53.163750905 +0000
	I0916 11:00:27.671261  182212 command_runner.go:130] > Change: 2024-09-16 10:58:53.163750905 +0000
	I0916 11:00:27.671274  182212 command_runner.go:130] >  Birth: 2024-09-16 10:58:53.163750905 +0000
	I0916 11:00:27.671351  182212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:00:27.689869  182212 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:00:27.689963  182212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:00:27.698126  182212 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 11:00:27.698148  182212 start.go:495] detecting cgroup driver to use...
	I0916 11:00:27.698188  182212 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:00:27.698236  182212 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:00:27.709433  182212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:00:27.719975  182212 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:00:27.720038  182212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:00:27.731773  182212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:00:27.742360  182212 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:00:27.821137  182212 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:00:27.906247  182212 docker.go:233] disabling docker service ...
	I0916 11:00:27.906327  182212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:00:27.917591  182212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:00:27.927812  182212 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:00:28.002340  182212 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:00:28.081739  182212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:00:28.092600  182212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:00:28.107874  182212 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0916 11:00:28.107963  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:00:28.117271  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:00:28.126688  182212 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:00:28.126747  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:00:28.136495  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:00:28.145751  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:00:28.155344  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:00:28.164871  182212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:00:28.173590  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:00:28.182925  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:00:28.192540  182212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 11:00:28.201839  182212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:00:28.208971  182212 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0916 11:00:28.209658  182212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:00:28.217334  182212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:00:28.294002  182212 ssh_runner.go:195] Run: sudo systemctl restart containerd
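
The run of sed commands above rewrites /etc/containerd/config.toml in place, most importantly forcing SystemdCgroup = false to match the cgroupfs driver detected on the host. The same edit expressed as a small Go program (run on the node itself, with root, where the log does it over SSH with sudo):

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        // Equivalent of the sed call above:
        // sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
        path := "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        data = re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        if err := os.WriteFile(path, data, 0644); err != nil {
            log.Fatal(err)
        }
    }
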
	I0916 11:00:28.407675  182212 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:00:28.407781  182212 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:00:28.411263  182212 command_runner.go:130] >   File: /run/containerd/containerd.sock
	I0916 11:00:28.411291  182212 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0916 11:00:28.411302  182212 command_runner.go:130] > Device: 10000fh/1048591d	Inode: 172         Links: 1
	I0916 11:00:28.411311  182212 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0916 11:00:28.411321  182212 command_runner.go:130] > Access: 2024-09-16 11:00:28.368141474 +0000
	I0916 11:00:28.411326  182212 command_runner.go:130] > Modify: 2024-09-16 11:00:28.368141474 +0000
	I0916 11:00:28.411331  182212 command_runner.go:130] > Change: 2024-09-16 11:00:28.368141474 +0000
	I0916 11:00:28.411335  182212 command_runner.go:130] >  Birth: -
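
After restarting containerd, minikube waits up to 60s for the socket to reappear, verified here with stat. A sketch of that wait as a dial loop against the unix socket:

    package main

    import (
        "fmt"
        "log"
        "net"
        "time"
    )

    // waitForSocket polls until the socket accepts a connection or the
    // deadline passes, like the 60s wait on containerd.sock above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("unix", path, time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            log.Fatal(err)
        }
        log.Println("containerd socket is up")
    }
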
	I0916 11:00:28.411374  182212 start.go:563] Will wait 60s for crictl version
	I0916 11:00:28.411413  182212 ssh_runner.go:195] Run: which crictl
	I0916 11:00:28.414551  182212 command_runner.go:130] > /usr/bin/crictl
	I0916 11:00:28.414633  182212 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:00:28.445656  182212 command_runner.go:130] > Version:  0.1.0
	I0916 11:00:28.445682  182212 command_runner.go:130] > RuntimeName:  containerd
	I0916 11:00:28.445691  182212 command_runner.go:130] > RuntimeVersion:  1.7.22
	I0916 11:00:28.445697  182212 command_runner.go:130] > RuntimeApiVersion:  v1
	I0916 11:00:28.447864  182212 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:00:28.447925  182212 ssh_runner.go:195] Run: containerd --version
	I0916 11:00:28.470070  182212 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 11:00:28.470162  182212 ssh_runner.go:195] Run: containerd --version
	I0916 11:00:28.491285  182212 command_runner.go:130] > containerd containerd.io 1.7.22 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
	I0916 11:00:28.496043  182212 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:00:28.497502  182212 out.go:177]   - env NO_PROXY=192.168.67.2
	I0916 11:00:28.498874  182212 cli_runner.go:164] Run: docker network inspect multinode-079070 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:00:28.515669  182212 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0916 11:00:28.519356  182212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:00:28.529934  182212 mustload.go:65] Loading cluster: multinode-079070
	I0916 11:00:28.530191  182212 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:00:28.530484  182212 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 11:00:28.548230  182212 host.go:66] Checking if "multinode-079070" exists ...
	I0916 11:00:28.548502  182212 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070 for IP: 192.168.67.3
	I0916 11:00:28.548514  182212 certs.go:194] generating shared ca certs ...
	I0916 11:00:28.548527  182212 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:00:28.548644  182212 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:00:28.548679  182212 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:00:28.548690  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0916 11:00:28.548704  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0916 11:00:28.548715  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0916 11:00:28.548727  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0916 11:00:28.548776  182212 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:00:28.548804  182212 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:00:28.548812  182212 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:00:28.548837  182212 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:00:28.548861  182212 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:00:28.548881  182212 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:00:28.548919  182212 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:00:28.548947  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:00:28.548960  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem -> /usr/share/ca-certificates/11189.pem
	I0916 11:00:28.548969  182212 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> /usr/share/ca-certificates/111892.pem
	I0916 11:00:28.548987  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:00:28.573389  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:00:28.596273  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:00:28.619462  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:00:28.642621  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:00:28.665569  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:00:28.688595  182212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:00:28.710997  182212 ssh_runner.go:195] Run: openssl version
	I0916 11:00:28.715844  182212 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0916 11:00:28.716093  182212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:00:28.725015  182212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:00:28.728424  182212 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:00:28.728475  182212 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:00:28.728526  182212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:00:28.734644  182212 command_runner.go:130] > 3ec20f2e
	I0916 11:00:28.734845  182212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:00:28.743757  182212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:00:28.752749  182212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:00:28.756422  182212 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:00:28.756454  182212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:00:28.756499  182212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:00:28.762858  182212 command_runner.go:130] > b5213941
	I0916 11:00:28.763034  182212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:00:28.773171  182212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:00:28.782255  182212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:00:28.785566  182212 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:00:28.785601  182212 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:00:28.785652  182212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:00:28.791920  182212 command_runner.go:130] > 51391683
	I0916 11:00:28.792293  182212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
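
Each certificate install above is a two-step dance: openssl reports the cert's subject hash, then a <hash>.0 symlink in /etc/ssl/certs lets OpenSSL-based tools find it. A sketch of the pair, using the 11189.pem example from the log (the symlink needs root, hence the sudo in the log):

    package main

    import (
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/11189.pem"
        // Ask openssl for the subject hash, as the log does.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. 51391683
        link := "/etc/ssl/certs/" + hash + ".0"
        // Create the <hash>.0 link only if it does not already exist.
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(cert, link); err != nil {
                log.Fatal(err)
            }
        }
    }
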
	I0916 11:00:28.800691  182212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:00:28.803729  182212 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:00:28.803791  182212 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
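The failed stat above is the first-start probe: a nonzero exit for apiserver-kubelet-client.crt tells minikube the node has no serving certs yet and they must be provisioned. A rough local equivalent of that check (the certExists helper is hypothetical; the real code runs stat through its SSH runner and inspects the exit status):

    // Sketch of the first-start probe: a nonzero exit from stat means the
    // cert is absent and must be generated.
    package probe

    import "os/exec"

    // certExists reports whether the kubelet client cert is already on the node.
    // Locally this is plain exec; minikube performs the same check over SSH.
    func certExists() bool {
    	err := exec.Command("stat", "/var/lib/minikube/certs/apiserver-kubelet-client.crt").Run()
    	return err == nil // stat exits nonzero when the file is absent
    }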
	I0916 11:00:28.803828  182212 kubeadm.go:934] updating node {m02 192.168.67.3 8443 v1.31.1 containerd false true} ...
	I0916 11:00:28.803914  182212 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-079070-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-079070 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
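The unit text dumped above is what lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= line clears the ExecStart inherited from the base unit before the node-specific command line (hostname-override, node-ip) is set. A sketch of rendering it with text/template, filled with the values from this run (the kubeletOpts struct and template literal are illustrative, not minikube's actual kubeadm.go implementation):

    // Render the kubelet systemd drop-in shown in the log above.
    package main

    import (
    	"os"
    	"text/template"
    )

    type kubeletOpts struct {
    	KubeletPath, Hostname, NodeIP string
    }

    const dropIn = `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(dropIn))
    	// The bare ExecStart= resets any command inherited from the base unit
    	// before the node-specific flags take effect.
    	_ = t.Execute(os.Stdout, kubeletOpts{
    		KubeletPath: "/var/lib/minikube/binaries/v1.31.1/kubelet",
    		Hostname:    "multinode-079070-m02",
    		NodeIP:      "192.168.67.3",
    	})
    }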
	I0916 11:00:28.803961  182212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:00:28.811260  182212 command_runner.go:130] > kubeadm
	I0916 11:00:28.811287  182212 command_runner.go:130] > kubectl
	I0916 11:00:28.811293  182212 command_runner.go:130] > kubelet
	I0916 11:00:28.811957  182212 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:00:28.812014  182212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0916 11:00:28.819714  182212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0916 11:00:28.836558  182212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:00:28.853747  182212 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:00:28.857085  182212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
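The bash one-liner above pins control-plane.minikube.internal in /etc/hosts: drop any stale line ending in a tab plus that hostname, append the fresh mapping, and copy the result back into place. The same edit in Go, as a sketch (pinControlPlane is a hypothetical helper; the real run stages through /tmp/h.$$ and sudo cp because /etc/hosts is root-owned):

    // Rewrite /etc/hosts so control-plane.minikube.internal points at the
    // current control-plane IP, mirroring the grep -v / echo pipeline above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func pinControlPlane(hostsPath, ip string) error {
    	const host = "control-plane.minikube.internal"
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// grep -v $'\t<host>$' — discard any previous mapping.
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
    	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := pinControlPlane("/etc/hosts", "192.168.67.2"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }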
	I0916 11:00:28.867619  182212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:00:28.947792  182212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:00:28.959628  182212 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.67.3 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:false Worker:true}
	I0916 11:00:28.959896  182212 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:00:28.961872  182212 out.go:177] * Verifying Kubernetes components...
	I0916 11:00:28.963180  182212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:00:29.041845  182212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:00:29.053238  182212 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:00:29.053472  182212 kapi.go:59] client config for multinode-079070: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/multinode-079070/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
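The rest.Config dump above is assembled from the kubeconfig written earlier in the run; building an equivalent client with stock client-go looks like this (kubeconfig path taken from the log line, everything else is the standard clientcmd API):

    // Build a typed Kubernetes client from the same kubeconfig used above.
    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3687/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("host:", cfg.Host, "client ready:", clientset != nil)
    }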
	I0916 11:00:29.053716  182212 node_ready.go:35] waiting up to 6m0s for node "multinode-079070-m02" to be "Ready" ...
	I0916 11:00:29.053802  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 11:00:29.053811  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:29.053819  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:29.053822  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:29.056287  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:29.056313  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:29.056322  182212 round_trippers.go:580]     Audit-Id: f973cc39-4fc5-4a8f-9137-1b105f2822dd
	I0916 11:00:29.056329  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:29.056345  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:29.056352  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:29.056356  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:29.056361  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:29 GMT
	I0916 11:00:29.056535  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"757","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl": [truncated 5023 chars]
	I0916 11:00:29.056854  182212 node_ready.go:49] node "multinode-079070-m02" has status "Ready":"True"
	I0916 11:00:29.056870  182212 node_ready.go:38] duration metric: took 3.140498ms for node "multinode-079070-m02" to be "Ready" ...
	I0916 11:00:29.056880  182212 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
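Everything that follows is this wait loop unrolled: roughly every 500ms the coredns pod (and its node) is re-fetched until the pod's Ready condition turns True or the 6m budget runs out. A compact client-go sketch of the same poll, assuming a pre-built clientset (waitPodReady is a hypothetical helper, not minikube's pod_ready.go; interval and timeout mirror the cadence visible in the log):

    // Poll a pod until its Ready condition is True, on a fixed interval with
    // a deadline — the pattern generating the repeated GETs below.
    package poll

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, err
    			}
    			for _, cond := range pod.Status.Conditions {
    				if cond.Type == corev1.PodReady {
    					return cond.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }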
	I0916 11:00:29.056945  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0916 11:00:29.056954  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:29.056961  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:29.056964  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:29.060130  182212 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:00:29.060155  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:29.060165  182212 round_trippers.go:580]     Audit-Id: d924b7ff-c6e3-47fb-9c0f-708bed116a10
	I0916 11:00:29.060170  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:29.060175  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:29.060179  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:29.060185  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:29.060189  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:29 GMT
	I0916 11:00:29.061045  182212 request.go:1351] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1010"},"items":[{"metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 90963 chars]
	I0916 11:00:29.063630  182212 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:29.063706  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:29.063713  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:29.063721  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:29.063728  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:29.065875  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:29.065895  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:29.065903  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:29.065907  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:29.065910  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:29.065915  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:29.065919  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:29 GMT
	I0916 11:00:29.065925  182212 round_trippers.go:580]     Audit-Id: f95700c8-977a-4e23-a5f3-0512b171bd66
	I0916 11:00:29.066107  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:29.066597  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:29.066611  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:29.066618  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:29.066622  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:29.068407  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:29.068427  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:29.068436  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:29.068443  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:29.068448  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:29.068452  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:29.068467  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:29 GMT
	I0916 11:00:29.068475  182212 round_trippers.go:580]     Audit-Id: 56620e48-7944-415b-9a6f-099a6c1c982f
	I0916 11:00:29.068645  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:29.563913  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:29.563943  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:29.563952  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:29.563959  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:29.566337  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:29.566359  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:29.566366  182212 round_trippers.go:580]     Audit-Id: b070a758-fe22-4c59-973f-b638ffe9e456
	I0916 11:00:29.566371  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:29.566374  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:29.566378  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:29.566381  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:29.566384  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:29 GMT
	I0916 11:00:29.566533  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:29.567011  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:29.567028  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:29.567038  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:29.567043  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:29.568970  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:29.568992  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:29.569002  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:29 GMT
	I0916 11:00:29.569009  182212 round_trippers.go:580]     Audit-Id: 328ea61a-b878-4d68-9daf-fa84f1a542fb
	I0916 11:00:29.569013  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:29.569018  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:29.569023  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:29.569028  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:29.569190  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:30.064867  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:30.064893  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:30.064902  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:30.064908  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:30.067443  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:30.067466  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:30.067480  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:30.067486  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:30.067489  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:30.067492  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:30 GMT
	I0916 11:00:30.067495  182212 round_trippers.go:580]     Audit-Id: fa8e4533-a886-4b2a-892a-424728acd046
	I0916 11:00:30.067498  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:30.067602  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:30.068130  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:30.068146  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:30.068153  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:30.068156  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:30.069938  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:30.069957  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:30.069963  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:30 GMT
	I0916 11:00:30.069966  182212 round_trippers.go:580]     Audit-Id: 71177336-c96f-4d30-b50d-8e6c1a2d1ee5
	I0916 11:00:30.069968  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:30.069971  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:30.069973  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:30.069975  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:30.070165  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:30.563872  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:30.563898  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:30.563905  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:30.563910  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:30.566371  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:30.566388  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:30.566394  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:30.566397  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:30.566401  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:30 GMT
	I0916 11:00:30.566404  182212 round_trippers.go:580]     Audit-Id: f82c927c-6f5a-449d-9439-9b29d32f81b3
	I0916 11:00:30.566408  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:30.566410  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:30.566616  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:30.567188  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:30.567206  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:30.567217  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:30.567224  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:30.569017  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:30.569042  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:30.569051  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:30.569056  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:30.569059  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:30.569061  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:30 GMT
	I0916 11:00:30.569064  182212 round_trippers.go:580]     Audit-Id: 3a9824b5-6450-49c2-800f-e382ea416ab1
	I0916 11:00:30.569067  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:30.569181  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:31.064883  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:31.064911  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:31.064921  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:31.064929  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:31.067279  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:31.067330  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:31.067339  182212 round_trippers.go:580]     Audit-Id: 60bcc542-6889-45a0-85ca-fd66d6ef9ac6
	I0916 11:00:31.067344  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:31.067348  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:31.067353  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:31.067358  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:31.067363  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:31 GMT
	I0916 11:00:31.067483  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:31.068058  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:31.068080  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:31.068090  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:31.068095  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:31.070396  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:31.070417  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:31.070425  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:31.070430  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:31.070433  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:31.070438  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:31 GMT
	I0916 11:00:31.070442  182212 round_trippers.go:580]     Audit-Id: 1ae013fb-d6b7-437e-8350-0e442a1dafdd
	I0916 11:00:31.070446  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:31.070606  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:31.070915  182212 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:00:31.564134  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:31.564155  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:31.564162  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:31.564166  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:31.566721  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:31.566746  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:31.566756  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:31.566760  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:31.566764  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:31.566768  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:31.566773  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:31 GMT
	I0916 11:00:31.566781  182212 round_trippers.go:580]     Audit-Id: eadf995f-deb0-49d9-9665-63247da30aae
	I0916 11:00:31.566918  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:31.567518  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:31.567533  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:31.567541  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:31.567546  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:31.569529  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:31.569552  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:31.569560  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:31.569565  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:31 GMT
	I0916 11:00:31.569570  182212 round_trippers.go:580]     Audit-Id: 74f22e83-cf70-4753-8f08-60ba8e2d6e5c
	I0916 11:00:31.569573  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:31.569578  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:31.569582  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:31.569701  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:32.063937  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:32.063962  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:32.063970  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:32.063975  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:32.066103  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:32.066123  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:32.066132  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:32.066138  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:32.066142  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:32.066146  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:32.066150  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:32 GMT
	I0916 11:00:32.066156  182212 round_trippers.go:580]     Audit-Id: 5ff177d7-3cde-4f5a-91b9-4f90fdd6a778
	I0916 11:00:32.066336  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:32.066945  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:32.066962  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:32.066972  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:32.066977  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:32.068859  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:32.068878  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:32.068885  182212 round_trippers.go:580]     Audit-Id: 0050ab11-d947-4f31-bcfb-52e905f11886
	I0916 11:00:32.068891  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:32.068897  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:32.068904  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:32.068908  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:32.068912  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:32 GMT
	I0916 11:00:32.069024  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:32.564762  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:32.564787  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:32.564794  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:32.564799  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:32.567146  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:32.567168  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:32.567177  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:32.567185  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:32 GMT
	I0916 11:00:32.567190  182212 round_trippers.go:580]     Audit-Id: cf4700ad-9f3c-406d-b50b-3f21ee0b8ef0
	I0916 11:00:32.567193  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:32.567197  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:32.567203  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:32.567311  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:32.567865  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:32.567883  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:32.567890  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:32.567895  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:32.569713  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:32.569733  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:32.569740  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:32 GMT
	I0916 11:00:32.569744  182212 round_trippers.go:580]     Audit-Id: c2d3aa73-f68b-4af5-8b20-cadf1725501b
	I0916 11:00:32.569748  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:32.569754  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:32.569758  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:32.569762  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:32.569885  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:33.064847  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:33.064872  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:33.064882  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:33.064887  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:33.067254  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:33.067281  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:33.067290  182212 round_trippers.go:580]     Audit-Id: 09335146-93b6-4bfe-a6a3-70ab0d8b0b8d
	I0916 11:00:33.067295  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:33.067300  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:33.067306  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:33.067311  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:33.067316  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:33 GMT
	I0916 11:00:33.067437  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:33.068048  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:33.068071  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:33.068080  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:33.068086  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:33.070011  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:33.070030  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:33.070036  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:33.070040  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:33.070044  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:33 GMT
	I0916 11:00:33.070047  182212 round_trippers.go:580]     Audit-Id: 5b08eb29-4a4e-4db3-b49a-2ad9117b8994
	I0916 11:00:33.070050  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:33.070052  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:33.070226  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:33.563850  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:33.563881  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:33.563889  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:33.563893  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:33.566418  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:33.566447  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:33.566457  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:33.566463  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:33.566466  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:33 GMT
	I0916 11:00:33.566470  182212 round_trippers.go:580]     Audit-Id: cefb8199-577c-4fc8-a123-3b8be97949f6
	I0916 11:00:33.566472  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:33.566474  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:33.566668  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:33.567149  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:33.567163  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:33.567170  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:33.567174  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:33.569083  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:33.569113  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:33.569122  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:33.569129  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:33.569136  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:33.569141  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:33 GMT
	I0916 11:00:33.569145  182212 round_trippers.go:580]     Audit-Id: 927529d1-d5cc-403b-9db2-3e5801e5ec03
	I0916 11:00:33.569150  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:33.569256  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:33.569562  182212 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
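(The cycle above is the test's readiness wait: pod_ready re-fetches the CoreDNS pod and its node every ~500 ms and checks the pod's Ready condition. The sketch below is a minimal, hypothetical reconstruction of that loop using client-go; it is not minikube's actual source. The namespace, pod name, and 500 ms cadence come from the log, while the kubeconfig path and the 4-minute deadline are illustrative assumptions.)

// Minimal sketch (assumption: not minikube's implementation) of the readiness
// poll visible above: fetch the Pod every ~500 ms and report its Ready
// condition until it turns True or the deadline expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// clientcmd.RecommendedHomeFile (~/.kube/config) is an illustrative choice;
	// the test harness builds its client from the cluster's own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The 4-minute deadline is an assumption for the sketch, not the test's value.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	tick := time.NewTicker(500 * time.Millisecond) // cadence seen in the log
	defer tick.Stop()

	for {
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for coredns-7c65d6cfc9-ft9gh")
			return
		case <-tick.C:
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-ft9gh", metav1.GetOptions{})
			if err != nil {
				continue // retry transient API errors on the next tick
			}
			if podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Printf("pod %q has status Ready=False\n", pod.Name)
		}
	}
}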
	I0916 11:00:34.063951  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:34.064039  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:34.064062  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:34.064073  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:34.067289  182212 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0916 11:00:34.067314  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:34.067323  182212 round_trippers.go:580]     Audit-Id: 2805dee7-4f2f-4477-906b-973adfc5467d
	I0916 11:00:34.067331  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:34.067337  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:34.067362  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:34.067371  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:34.067375  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:34 GMT
	I0916 11:00:34.067500  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:34.068237  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:34.068257  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:34.068268  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:34.068275  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:34.070758  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:34.070779  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:34.070788  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:34.070793  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:34 GMT
	I0916 11:00:34.070798  182212 round_trippers.go:580]     Audit-Id: 46e7df63-53bf-4ac7-bed3-9cb235b73a6d
	I0916 11:00:34.070804  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:34.070809  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:34.070814  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:34.071330  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	[... the same GET pod / GET node polling pair repeats every ~500 ms from 11:00:34.5 through 11:00:39.5; every request returns 200 OK and the response bodies are byte-identical to those above (pod "coredns-7c65d6cfc9-ft9gh" stays at resourceVersion "982", node "multinode-079070" at resourceVersion "914"; only the Audit-Id and Date headers change), and pod_ready.go:103 again logs pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False" at 11:00:35.570545 and 11:00:38.069865 ...]
	I0916 11:00:40.063940  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:40.063965  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:40.063973  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:40.063977  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:40.066242  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:40.066263  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:40.066270  182212 round_trippers.go:580]     Audit-Id: 4f270516-ebf6-47c3-9af9-a51772356607
	I0916 11:00:40.066276  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:40.066281  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:40.066285  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:40.066292  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:40.066296  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:40 GMT
	I0916 11:00:40.066416  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:40.066902  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:40.066916  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:40.066923  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:40.066926  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:40.068703  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:40.068722  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:40.068729  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:40.068734  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:40.068737  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:40.068741  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:40 GMT
	I0916 11:00:40.068745  182212 round_trippers.go:580]     Audit-Id: af5241f0-5ecc-4b01-8552-5ceff2c0374f
	I0916 11:00:40.068747  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:40.068894  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:40.564600  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:40.564627  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:40.564637  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:40.564644  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:40.566981  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:40.567007  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:40.567015  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:40.567022  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:40 GMT
	I0916 11:00:40.567027  182212 round_trippers.go:580]     Audit-Id: bf8658a5-6f8a-488e-9930-cde1dc633578
	I0916 11:00:40.567031  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:40.567035  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:40.567040  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:40.567197  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:40.567661  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:40.567674  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:40.567681  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:40.567685  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:40.569582  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:40.569601  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:40.569609  182212 round_trippers.go:580]     Audit-Id: 88e3de42-1c06-4363-be82-ac9d155148d3
	I0916 11:00:40.569614  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:40.569618  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:40.569622  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:40.569627  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:40.569633  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:40 GMT
	I0916 11:00:40.569751  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:40.570051  182212 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
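The cycle above repeats roughly every 500 ms: minikube's pod_ready.go fetches the coredns pod and its node, then logs `has status "Ready":"False"` until the pod's Ready condition flips. A minimal sketch of that poll, written against client-go's public API (the helper name `waitPodReady` is illustrative, not minikube's actual function):

```go
// Sketch of the readiness poll seen in this log: GET the pod on a
// fixed interval until its Ready condition is True or a timeout hits.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady mirrors the ~500ms GET /api/v1/namespaces/.../pods/...
// cycle above. Hypothetical helper; minikube's real loop lives in
// pkg/minikube/... (pod_ready.go) and differs in detail.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-7c65d6cfc9-ft9gh", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
```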
	I0916 11:00:41.064463  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:41.064486  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:41.064493  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:41.064498  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:41.067098  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:41.067126  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:41.067138  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:41.067145  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:41 GMT
	I0916 11:00:41.067152  182212 round_trippers.go:580]     Audit-Id: ece7723d-a0a7-4167-a17a-a929e8f6307a
	I0916 11:00:41.067157  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:41.067162  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:41.067171  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:41.067306  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:41.067942  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:41.067962  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:41.067972  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:41.067978  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:41.069963  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:41.069983  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:41.069990  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:41.069996  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:41.070001  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:41.070005  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:41.070010  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:41 GMT
	I0916 11:00:41.070015  182212 round_trippers.go:580]     Audit-Id: 1803f084-4ecb-4532-919d-d8df3e85ca7c
	I0916 11:00:41.070139  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:41.564788  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:41.564814  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:41.564822  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:41.564832  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:41.567165  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:41.567186  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:41.567193  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:41.567197  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:41.567202  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:41.567207  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:41.567212  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:41 GMT
	I0916 11:00:41.567216  182212 round_trippers.go:580]     Audit-Id: 950870f3-58b4-4798-8e58-3bb52c8f3862
	I0916 11:00:41.567353  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:41.567849  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:41.567863  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:41.567870  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:41.567875  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:41.569834  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:41.569851  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:41.569856  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:41 GMT
	I0916 11:00:41.569861  182212 round_trippers.go:580]     Audit-Id: a5d75b8b-b016-468c-b379-c2b477b57e1e
	I0916 11:00:41.569863  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:41.569865  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:41.569868  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:41.569870  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:41.570032  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:42.064696  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:42.064722  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:42.064730  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:42.064734  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:42.067081  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:42.067106  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:42.067115  182212 round_trippers.go:580]     Audit-Id: 43886017-6582-4759-b6e5-55f50f73d954
	I0916 11:00:42.067122  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:42.067128  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:42.067133  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:42.067136  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:42.067142  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:42 GMT
	I0916 11:00:42.067248  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:42.067773  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:42.067789  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:42.067796  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:42.067801  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:42.069576  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:42.069598  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:42.069607  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:42 GMT
	I0916 11:00:42.069612  182212 round_trippers.go:580]     Audit-Id: 8928bf23-5931-4411-aedc-d8a31ccb4959
	I0916 11:00:42.069616  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:42.069622  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:42.069626  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:42.069631  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:42.069766  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:42.564466  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:42.564491  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:42.564499  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:42.564503  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:42.567038  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:42.567061  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:42.567068  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:42.567073  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:42.567079  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:42.567083  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:42.567086  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:42 GMT
	I0916 11:00:42.567090  182212 round_trippers.go:580]     Audit-Id: a36e0f42-4cd7-4d63-8706-6380b0ce0a0c
	I0916 11:00:42.567268  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:42.567784  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:42.567804  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:42.567813  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:42.567817  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:42.569713  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:42.569728  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:42.569735  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:42.569738  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:42.569742  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:42.569745  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:42.569749  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:42 GMT
	I0916 11:00:42.569751  182212 round_trippers.go:580]     Audit-Id: e0939a6f-6d4b-4270-abff-c53d8d1c4172
	I0916 11:00:42.569867  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:42.570162  182212 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
	I0916 11:00:43.064799  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:43.064821  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:43.064829  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:43.064834  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:43.067071  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:43.067096  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:43.067106  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:43 GMT
	I0916 11:00:43.067113  182212 round_trippers.go:580]     Audit-Id: f35e7a9f-1cfd-4630-955d-ef69039ce5fe
	I0916 11:00:43.067116  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:43.067122  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:43.067129  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:43.067133  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:43.067244  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:43.067726  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:43.067767  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:43.067779  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:43.067785  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:43.069592  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:43.069613  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:43.069622  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:43.069628  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:43.069634  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:43.069640  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:43 GMT
	I0916 11:00:43.069644  182212 round_trippers.go:580]     Audit-Id: 75052931-d6ca-4cdb-847a-22e2e06c4a12
	I0916 11:00:43.069648  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:43.069746  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:43.564458  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:43.564484  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:43.564491  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:43.564496  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:43.566833  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:43.566857  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:43.566867  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:43 GMT
	I0916 11:00:43.566874  182212 round_trippers.go:580]     Audit-Id: 91db03f0-8d7c-4184-8e16-1b430c3d1a59
	I0916 11:00:43.566881  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:43.566885  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:43.566889  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:43.566893  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:43.567001  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:43.567448  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:43.567459  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:43.567466  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:43.567469  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:43.569262  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:43.569299  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:43.569308  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:43.569327  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:43 GMT
	I0916 11:00:43.569339  182212 round_trippers.go:580]     Audit-Id: 6bfb6d30-fcc9-448f-8119-f2f8b77e0c62
	I0916 11:00:43.569345  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:43.569353  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:43.569359  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:43.569493  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:44.063918  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:44.063940  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:44.063948  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:44.063953  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:44.066328  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:44.066350  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:44.066359  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:44.066364  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:44 GMT
	I0916 11:00:44.066369  182212 round_trippers.go:580]     Audit-Id: b31ad435-e327-440d-8ebf-f80f0451fc55
	I0916 11:00:44.066372  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:44.066376  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:44.066381  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:44.066598  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:44.067096  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:44.067111  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:44.067118  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:44.067124  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:44.069128  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:44.069146  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:44.069153  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:44.069157  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:44.069160  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:44.069165  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:44 GMT
	I0916 11:00:44.069170  182212 round_trippers.go:580]     Audit-Id: 35fbf3b6-28b3-4d46-982c-0a558a9663c8
	I0916 11:00:44.069175  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:44.069304  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:44.563922  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:44.563944  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:44.563953  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:44.563957  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:44.566082  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:44.566107  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:44.566117  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:44.566123  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:44.566127  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:44 GMT
	I0916 11:00:44.566132  182212 round_trippers.go:580]     Audit-Id: b778e3e1-d143-4e6c-8e03-675c194e5c22
	I0916 11:00:44.566136  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:44.566140  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:44.566403  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:44.566994  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:44.567010  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:44.567022  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:44.567027  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:44.568698  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:44.568713  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:44.568719  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:44.568722  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:44 GMT
	I0916 11:00:44.568725  182212 round_trippers.go:580]     Audit-Id: 39bd14d8-420b-43db-9bd5-0e9c067f814e
	I0916 11:00:44.568727  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:44.568731  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:44.568734  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:44.568842  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:45.064816  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:45.064848  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:45.064860  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:45.064867  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:45.067132  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:45.067160  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:45.067168  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:45 GMT
	I0916 11:00:45.067172  182212 round_trippers.go:580]     Audit-Id: 3f0de78f-de2b-4dc9-958a-af9b89933036
	I0916 11:00:45.067176  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:45.067179  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:45.067182  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:45.067184  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:45.067357  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"982","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6922 chars]
	I0916 11:00:45.067864  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:45.067879  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:45.067886  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:45.067889  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:45.069708  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:45.069728  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:45.069736  182212 round_trippers.go:580]     Audit-Id: 1b4b1efa-3961-4c6d-927e-4d302c53bccc
	I0916 11:00:45.069743  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:45.069747  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:45.069751  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:45.069760  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:45.069763  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:45 GMT
	I0916 11:00:45.069914  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:45.070324  182212 pod_ready.go:103] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"False"
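The `Response Status: 200 OK in N milliseconds` and `Response Headers:` lines come from client-go's round_trippers.go, which wraps the HTTP transport and logs each request at high verbosity. A self-contained sketch of that wrapping pattern in plain net/http (the type `loggingTransport` is illustrative, not client-go's implementation):

```go
// Sketch of a logging http.RoundTripper in the spirit of client-go's
// round_trippers.go output: method, URL, status, and latency per call.
package main

import (
	"log"
	"net/http"
	"time"
)

type loggingTransport struct{ next http.RoundTripper }

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	start := time.Now()
	resp, err := t.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	// Mirrors the "Response Status: 200 OK in N milliseconds" lines above.
	log.Printf("%s %s -> %s in %d milliseconds",
		req.Method, req.URL, resp.Status, time.Since(start).Milliseconds())
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	resp, err := client.Get("https://example.com/")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}
```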
	I0916 11:00:45.564669  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-ft9gh
	I0916 11:00:45.564694  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:45.564702  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:45.564706  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:45.566847  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:45.566872  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:45.566882  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:45 GMT
	I0916 11:00:45.566888  182212 round_trippers.go:580]     Audit-Id: ef8075cc-2060-47b1-af4f-4ca516e304a3
	I0916 11:00:45.566893  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:45.566899  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:45.566904  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:45.566908  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:45.567013  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7c65d6cfc9-ft9gh","generateName":"coredns-7c65d6cfc9-","namespace":"kube-system","uid":"8052b6a1-7257-44d4-a318-740afd039d2c","resourceVersion":"1070","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7c65d6cfc9"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7c65d6cfc9","uid":"056d20f1-53d0-4da9-921b-26372f8f8987","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"056d20f1-53d0-4da9-921b-26372f8f8987\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6694 chars]
	I0916 11:00:45.567597  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:45.567616  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:45.567627  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:45.567633  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:45.569767  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:45.569788  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:45.569798  182212 round_trippers.go:580]     Audit-Id: 18d2d899-2888-422f-b134-04f122651bd9
	I0916 11:00:45.569803  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:45.569807  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:45.569812  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:45.569816  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:45.569819  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:45 GMT
	I0916 11:00:45.569944  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:45.570255  182212 pod_ready.go:93] pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace has status "Ready":"True"
	I0916 11:00:45.570276  182212 pod_ready.go:82] duration metric: took 16.506625034s for pod "coredns-7c65d6cfc9-ft9gh" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:45.570285  182212 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
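At this point the pod body returned with resourceVersion 1070 and Ready:True, the coredns wait completed after 16.5s, and the log starts the same dance for the next control-plane pod, etcd-multinode-079070. A sketch of that sequential per-pod wait, reusing the hypothetical `waitPodReady` helper (and imports) from the first sketch above:

```go
// Sketch: wait for each control-plane pod in turn, each with its own
// 6m budget, as the log does when it finishes coredns and moves to etcd.
// Assumes waitPodReady from the earlier sketch is in scope.
func waitControlPlaneReady(ctx context.Context, cs kubernetes.Interface) error {
	for _, name := range []string{
		"coredns-7c65d6cfc9-ft9gh",
		"etcd-multinode-079070",
	} {
		if err := waitPodReady(ctx, cs, "kube-system", name, 6*time.Minute); err != nil {
			return fmt.Errorf("pod %q in kube-system never became Ready: %w", name, err)
		}
	}
	return nil
}
```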
	I0916 11:00:45.570347  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-079070
	I0916 11:00:45.570355  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:45.570361  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:45.570365  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:45.572232  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:45.572269  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:45.572276  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:45.572281  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:45 GMT
	I0916 11:00:45.572285  182212 round_trippers.go:580]     Audit-Id: 33e2da64-df83-49a5-80c2-b3e9e8dcfabd
	I0916 11:00:45.572289  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:45.572292  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:45.572295  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:45.572381  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-079070","namespace":"kube-system","uid":"fdcbee48-0d3b-47d7-acd2-298979345c7a","resourceVersion":"1005","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.mirror":"4cd98cb286c25ab6542db09649b1ab0f","kubernetes.io/config.seen":"2024-09-16T10:56:25.844758522Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6654 chars]
	I0916 11:00:45.572769  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:45.572784  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:45.572793  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:45.572797  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:45.574783  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:45.574800  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:45.574806  182212 round_trippers.go:580]     Audit-Id: d741e1f0-000c-4a22-84c7-aa25703753f0
	I0916 11:00:45.574812  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:45.574817  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:45.574821  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:45.574826  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:45.574833  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:45 GMT
	I0916 11:00:45.574960  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:45.575241  182212 pod_ready.go:93] pod "etcd-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 11:00:45.575256  182212 pod_ready.go:82] duration metric: took 4.962836ms for pod "etcd-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:45.575273  182212 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:45.575336  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-079070
	I0916 11:00:45.575343  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:45.575350  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:45.575354  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:45.577356  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:45.577375  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:45.577384  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:45.577389  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:45 GMT
	I0916 11:00:45.577392  182212 round_trippers.go:580]     Audit-Id: ea188913-bec8-4575-a150-02f53953dcd2
	I0916 11:00:45.577398  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:45.577402  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:45.577407  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:45.577542  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-079070","namespace":"kube-system","uid":"72784d35-4a94-476c-a4e9-dc3c6c3a8c46","resourceVersion":"1001","creationTimestamp":"2024-09-16T10:56:25Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.mirror":"cec1645cdab3aa5089df4900af238464","kubernetes.io/config.seen":"2024-09-16T10:56:20.578394748Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8732 chars]
	I0916 11:00:45.578005  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:45.578019  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:45.578026  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:45.578031  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:45.579666  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:45.579685  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:45.579693  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:45.579697  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:45 GMT
	I0916 11:00:45.579701  182212 round_trippers.go:580]     Audit-Id: 48daedb3-f123-49cf-8a3e-63dbb1288795
	I0916 11:00:45.579704  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:45.579708  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:45.579711  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:45.579827  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:45.580106  182212 pod_ready.go:93] pod "kube-apiserver-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 11:00:45.580122  182212 pod_ready.go:82] duration metric: took 4.839657ms for pod "kube-apiserver-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:45.580131  182212 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:45.580184  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-079070
	I0916 11:00:45.580191  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:45.580198  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:45.580201  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:45.581940  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:45.581964  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:45.581974  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:45.581979  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:45 GMT
	I0916 11:00:45.581984  182212 round_trippers.go:580]     Audit-Id: 38c5cc39-c93b-48e1-8ad5-c7257a2ce054
	I0916 11:00:45.581990  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:45.581999  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:45.582008  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:45.582170  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-079070","namespace":"kube-system","uid":"44a2f17c-654a-4d64-b474-70ce475c3afe","resourceVersion":"998","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.mirror":"93bd5dba25d1e51504f9fc3f55fd27c8","kubernetes.io/config.seen":"2024-09-16T10:56:25.844752735Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8306 chars]
	I0916 11:00:45.582649  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:45.582663  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:45.582670  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:45.582674  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:45.584270  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:45.584294  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:45.584304  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:45.584310  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:45.584315  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:45 GMT
	I0916 11:00:45.584319  182212 round_trippers.go:580]     Audit-Id: 7d9ae89b-dc76-46e7-bf30-38c62b94c49c
	I0916 11:00:45.584322  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:45.584325  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:45.584451  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:45.584816  182212 pod_ready.go:93] pod "kube-controller-manager-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 11:00:45.584836  182212 pod_ready.go:82] duration metric: took 4.695024ms for pod "kube-controller-manager-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:45.584848  182212 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:45.584918  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2vhmt
	I0916 11:00:45.584929  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:45.584936  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:45.584940  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:45.586476  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:45.586490  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:45.586500  182212 round_trippers.go:580]     Audit-Id: d093c6ac-36f2-44d7-85df-ad1a6f08592a
	I0916 11:00:45.586506  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:45.586510  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:45.586514  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:45.586519  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:45.586523  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:45 GMT
	I0916 11:00:45.586621  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-2vhmt","generateName":"kube-proxy-","namespace":"kube-system","uid":"6f3faf85-04e9-4840-855d-dd1ef9d4e463","resourceVersion":"986","creationTimestamp":"2024-09-16T10:56:31Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6388 chars]
	I0916 11:00:45.587058  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:45.587072  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:45.587079  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:45.587083  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:45.588656  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:45.588676  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:45.588684  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:45.588688  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:45.588691  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:45.588694  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:45 GMT
	I0916 11:00:45.588697  182212 round_trippers.go:580]     Audit-Id: 04cd203b-07ae-4cab-89ff-770017794d00
	I0916 11:00:45.588701  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:45.588842  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:45.589166  182212 pod_ready.go:93] pod "kube-proxy-2vhmt" in "kube-system" namespace has status "Ready":"True"
	I0916 11:00:45.589181  182212 pod_ready.go:82] duration metric: took 4.3242ms for pod "kube-proxy-2vhmt" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:45.589190  182212 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9z4qh" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:45.765600  182212 request.go:632] Waited for 176.314337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 11:00:45.765721  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9z4qh
	I0916 11:00:45.765733  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:45.765741  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:45.765745  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:45.768129  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:45.768157  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:45.768167  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:45 GMT
	I0916 11:00:45.768174  182212 round_trippers.go:580]     Audit-Id: 0bdcf187-5514-4065-9921-634cf3d68be3
	I0916 11:00:45.768178  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:45.768183  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:45.768187  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:45.768191  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:45.768284  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9z4qh","generateName":"kube-proxy-","namespace":"kube-system","uid":"7bae6f8c-6440-4b47-aaa2-e7bbc9dabd7d","resourceVersion":"882","creationTimestamp":"2024-09-16T10:57:29Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:57:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6396 chars]
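
The "Waited for ... due to client-side throttling, not priority and fairness" lines above and below come from client-go's local token-bucket rate limiter (QPS=5, Burst=10 per client by default), which spaces out bursts of GETs like these readiness polls before they ever reach the server-side API Priority and Fairness machinery. A minimal sketch of raising that limit, assuming a standard kubeconfig (the path and the 50/100 values are illustrative, not minikube's settings):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Conventional ~/.kube/config location; an assumption for this sketch.
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // client-go defaults to QPS=5 and Burst=10; requests beyond the burst
        // are delayed locally, producing the "client-side throttling" log lines.
        config.QPS = 50
        config.Burst = 100
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        fmt.Println("client ready:", clientset != nil)
    }
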
	I0916 11:00:45.964915  182212 request.go:632] Waited for 196.167868ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 11:00:45.964988  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m03
	I0916 11:00:45.964994  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:45.965001  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:45.965005  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:45.967109  182212 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0916 11:00:45.967129  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:45.967135  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:45.967139  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:45.967142  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:45.967144  182212 round_trippers.go:580]     Content-Length: 210
	I0916 11:00:45.967147  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:45 GMT
	I0916 11:00:45.967149  182212 round_trippers.go:580]     Audit-Id: 075c5224-150e-4caf-adba-5ab984f7c934
	I0916 11:00:45.967152  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:45.967172  182212 request.go:1351] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-079070-m03\" not found","reason":"NotFound","details":{"name":"multinode-079070-m03","kind":"nodes"},"code":404}
	I0916 11:00:45.967264  182212 pod_ready.go:98] node "multinode-079070-m03" hosting pod "kube-proxy-9z4qh" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-079070-m03": nodes "multinode-079070-m03" not found
	I0916 11:00:45.967283  182212 pod_ready.go:82] duration metric: took 378.085526ms for pod "kube-proxy-9z4qh" in "kube-system" namespace to be "Ready" ...
	E0916 11:00:45.967294  182212 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-079070-m03" hosting pod "kube-proxy-9z4qh" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-079070-m03": nodes "multinode-079070-m03" not found
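
The 404 just above is the expected shape of this scenario (the multinode node-deletion flow): the DaemonSet pod record kube-proxy-9z4qh still exists, but its host node multinode-079070-m03 has been removed, so the checker downgrades the wait to a skip rather than failing. A minimal sketch of telling that case apart with the apimachinery error helpers (the node lookup mirrors the GET above; the surrounding function is illustrative):

    package waiters

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeExists reports whether the pod's host node is still present. A
    // NotFound status (HTTP 404, reason "NotFound") is an expected outcome
    // here, not a hard failure, so it maps to (false, nil) -- the "skipping!"
    // branch in the log above.
    func nodeExists(ctx context.Context, cs kubernetes.Interface, nodeName string) (bool, error) {
        _, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return false, nil // node deleted: skip this pod's readiness wait
        }
        if err != nil {
            return false, err // any other API error is a real problem
        }
        return true, nil
    }
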
	I0916 11:00:45.967307  182212 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xkr65" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:46.165500  182212 request.go:632] Waited for 198.105355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 11:00:46.165588  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 11:00:46.165594  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:46.165602  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:46.165614  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:46.167861  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:46.167887  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:46.167896  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:46.167902  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:46 GMT
	I0916 11:00:46.167906  182212 round_trippers.go:580]     Audit-Id: 8ad3f6a1-7e2a-469e-bc1f-b68f77054fa6
	I0916 11:00:46.167910  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:46.167914  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:46.167920  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:46.168055  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"1021","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6720 chars]
	I0916 11:00:46.364792  182212 request.go:632] Waited for 196.275127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 11:00:46.364871  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 11:00:46.364877  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:46.364886  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:46.364890  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:46.367010  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:46.367030  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:46.367037  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:46 GMT
	I0916 11:00:46.367041  182212 round_trippers.go:580]     Audit-Id: b16ca884-c659-4361-afec-5e825c07e761
	I0916 11:00:46.367043  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:46.367046  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:46.367050  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:46.367052  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:46.367208  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"1013","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 5024 chars]
	I0916 11:00:46.564717  182212 request.go:632] Waited for 97.256668ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 11:00:46.564786  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 11:00:46.564794  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:46.564807  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:46.564815  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:46.567029  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:46.567050  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:46.567057  182212 round_trippers.go:580]     Audit-Id: df05cc5b-7ea2-40c6-af9e-af9725210c1c
	I0916 11:00:46.567060  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:46.567064  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:46.567067  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:46.567071  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:46.567074  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:46 GMT
	I0916 11:00:46.567228  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"1021","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6720 chars]
	I0916 11:00:46.764917  182212 request.go:632] Waited for 197.207963ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 11:00:46.765006  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 11:00:46.765016  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:46.765028  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:46.765034  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:46.770267  182212 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0916 11:00:46.770295  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:46.770306  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:46.770311  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:46.770316  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:46.770322  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:46.770328  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:46 GMT
	I0916 11:00:46.770335  182212 round_trippers.go:580]     Audit-Id: 6ed1c3fc-0d53-4d0e-99de-99c6ce8d05ee
	I0916 11:00:46.770469  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"1013","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 5024 chars]
	I0916 11:00:46.967892  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 11:00:46.967921  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:46.967932  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:46.967941  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:46.970717  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:46.970745  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:46.970754  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:46.970762  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:46.970766  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:46.970771  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:46.970775  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:46 GMT
	I0916 11:00:46.970780  182212 round_trippers.go:580]     Audit-Id: 4d0fe89c-6170-45fb-99b9-54d7ed510cee
	I0916 11:00:46.970949  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"1021","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6720 chars]
	I0916 11:00:47.164738  182212 request.go:632] Waited for 193.275275ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 11:00:47.164792  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 11:00:47.164798  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:47.164808  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:47.164813  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:47.167057  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:47.167084  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:47.167093  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:47 GMT
	I0916 11:00:47.167098  182212 round_trippers.go:580]     Audit-Id: 2d7baf35-c768-4aaa-80e0-1c01ed7e0751
	I0916 11:00:47.167106  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:47.167112  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:47.167116  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:47.167120  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:47.167243  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"1013","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 5024 chars]
	I0916 11:00:47.467654  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 11:00:47.467676  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:47.467684  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:47.467688  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:47.469998  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:47.470025  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:47.470035  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:47 GMT
	I0916 11:00:47.470040  182212 round_trippers.go:580]     Audit-Id: a0e54662-72ca-4e67-876b-d6f135ed8e88
	I0916 11:00:47.470047  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:47.470051  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:47.470056  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:47.470061  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:47.470201  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"1021","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6720 chars]
	I0916 11:00:47.564956  182212 request.go:632] Waited for 94.288764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 11:00:47.565036  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 11:00:47.565042  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:47.565052  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:47.565057  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:47.567534  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:47.567561  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:47.567569  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:47.567574  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:47.567579  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:47 GMT
	I0916 11:00:47.567585  182212 round_trippers.go:580]     Audit-Id: 8154a207-d0a4-4056-a145-fec9ba5625db
	I0916 11:00:47.567590  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:47.567594  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:47.567758  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"1013","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 5024 chars]
	I0916 11:00:47.967542  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xkr65
	I0916 11:00:47.967564  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:47.967573  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:47.967579  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:47.969957  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:47.969980  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:47.969988  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:47.969991  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:47.969995  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:47.969999  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:47.970002  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:47 GMT
	I0916 11:00:47.970005  182212 round_trippers.go:580]     Audit-Id: fd12bb20-3202-4fe3-9d42-c2262b2f31f1
	I0916 11:00:47.970178  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xkr65","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8d1009a-f71f-4cb1-a2f0-510a2894874f","resourceVersion":"1081","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"controller-revision-hash":"648b489c5b","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cd9b5306-3128-4f2d-abd7-fd6a80dbfe71\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6397 chars]
	I0916 11:00:47.970722  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070-m02
	I0916 11:00:47.970739  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:47.970748  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:47.970755  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:47.972658  182212 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0916 11:00:47.972676  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:47.972683  182212 round_trippers.go:580]     Audit-Id: 2f23a013-fe2c-4799-a5d1-6f5055151206
	I0916 11:00:47.972686  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:47.972690  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:47.972696  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:47.972700  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:47.972703  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:47 GMT
	I0916 11:00:47.972875  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070-m02","uid":"5bc3423c-6b73-4003-b0b3-5aa501e974c0","resourceVersion":"1013","creationTimestamp":"2024-09-16T10:56:57Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_09_16T10_56_58_0700","minikube.k8s.io/version":"v1.34.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:57Z","fieldsType":"FieldsV1","fie
ldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl" [truncated 5024 chars]
	I0916 11:00:47.973160  182212 pod_ready.go:93] pod "kube-proxy-xkr65" in "kube-system" namespace has status "Ready":"True"
	I0916 11:00:47.973176  182212 pod_ready.go:82] duration metric: took 2.005852272s for pod "kube-proxy-xkr65" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:47.973185  182212 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:48.165609  182212 request.go:632] Waited for 192.354527ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:48.165712  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-079070
	I0916 11:00:48.165723  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:48.165732  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:48.165740  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:48.168063  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:48.168082  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:48.168088  182212 round_trippers.go:580]     Audit-Id: 05c68aa0-50bb-4e01-9088-99b09e36185d
	I0916 11:00:48.168091  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:48.168094  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:48.168097  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:48.168101  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:48.168103  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:48 GMT
	I0916 11:00:48.168281  182212 request.go:1351] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-079070","namespace":"kube-system","uid":"cc5dd2a3-a136-42b4-a6f9-11c733cb38c4","resourceVersion":"1007","creationTimestamp":"2024-09-16T10:56:26Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.mirror":"b4d60d557a4cfb2c6d1e1c4e2473b237","kubernetes.io/config.seen":"2024-09-16T10:56:25.844756911Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-09-16T10:56:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5189 chars]
	I0916 11:00:48.364919  182212 request.go:632] Waited for 196.237623ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:48.364983  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-079070
	I0916 11:00:48.364999  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:48.365009  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:48.365013  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:48.367064  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:48.367095  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:48.367102  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:48 GMT
	I0916 11:00:48.367106  182212 round_trippers.go:580]     Audit-Id: c218ed3b-33a7-4f8f-be4a-74226be317f5
	I0916 11:00:48.367109  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:48.367113  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:48.367115  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:48.367119  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:48.367297  182212 request.go:1351] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update
","apiVersion":"v1","time":"2024-09-16T10:56:23Z","fieldsType":"FieldsV [truncated 5235 chars]
	I0916 11:00:48.367646  182212 pod_ready.go:93] pod "kube-scheduler-multinode-079070" in "kube-system" namespace has status "Ready":"True"
	I0916 11:00:48.367663  182212 pod_ready.go:82] duration metric: took 394.470442ms for pod "kube-scheduler-multinode-079070" in "kube-system" namespace to be "Ready" ...
	I0916 11:00:48.367680  182212 pod_ready.go:39] duration metric: took 19.310783971s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
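
Everything above is the same loop repeated per component: GET the pod, read status.conditions for the Ready condition, GET the hosting node, and retry until Ready is True or the 6m0s budget expires. A minimal client-go sketch of the per-pod half of that check (the 500ms interval is an assumption; the 6-minute timeout matches the log):

    package waiters

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod's Ready condition is True, mirroring
    // the per-pod waits logged above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil // no Ready condition reported yet
            })
    }
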
	I0916 11:00:48.367698  182212 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:00:48.367776  182212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:00:48.378698  182212 system_svc.go:56] duration metric: took 10.989318ms WaitForService to wait for kubelet
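
The "is-active --quiet" form makes systemctl answer purely through its exit code (0 when the unit is active), which is why the log records nothing but a duration for this step. A minimal sketch of the same probe run locally (minikube runs it over SSH inside the node; plain exec here is an illustration):

    package waiters

    import (
        "errors"
        "os/exec"
    )

    // kubeletActive mirrors `sudo systemctl is-active --quiet kubelet`:
    // exit code 0 means active; any other exit code just means "not active",
    // while a non-exit error (e.g. systemctl missing) is a real probe failure.
    func kubeletActive() (bool, error) {
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        if err == nil {
            return true, nil
        }
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            return false, nil
        }
        return false, err
    }
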
	I0916 11:00:48.378731  182212 kubeadm.go:582] duration metric: took 19.41905516s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:00:48.378754  182212 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:00:48.565121  182212 request.go:632] Waited for 186.284155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0916 11:00:48.565198  182212 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0916 11:00:48.565203  182212 round_trippers.go:469] Request Headers:
	I0916 11:00:48.565210  182212 round_trippers.go:473]     Accept: application/json, */*
	I0916 11:00:48.565216  182212 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0916 11:00:48.567729  182212 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0916 11:00:48.567782  182212 round_trippers.go:577] Response Headers:
	I0916 11:00:48.567789  182212 round_trippers.go:580]     Content-Type: application/json
	I0916 11:00:48.567793  182212 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 07bf4e99-cdde-428e-bd17-5235eb59497f
	I0916 11:00:48.567798  182212 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 0c987133-af33-4e26-8249-1f3d60a87a75
	I0916 11:00:48.567801  182212 round_trippers.go:580]     Date: Mon, 16 Sep 2024 11:00:48 GMT
	I0916 11:00:48.567805  182212 round_trippers.go:580]     Audit-Id: cf81479c-521b-4c98-9944-2d8c7eb761b2
	I0916 11:00:48.567815  182212 round_trippers.go:580]     Cache-Control: no-cache, private
	I0916 11:00:48.567978  182212 request.go:1351] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1084"},"items":[{"metadata":{"name":"multinode-079070","uid":"a412b266-57e6-4e6d-abfd-b5db18233ca9","resourceVersion":"914","creationTimestamp":"2024-09-16T10:56:23Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-079070","kubernetes.io/os":"linux","minikube.k8s.io/commit":"90d544f06ea0f69499271b003be64a9a224d57ed","minikube.k8s.io/name":"multinode-079070","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_09_16T10_56_26_0700","minikube.k8s.io/version":"v1.34.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///run/containerd/containerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1" [truncated 11305 chars]
	I0916 11:00:48.568432  182212 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:00:48.568448  182212 node_conditions.go:123] node cpu capacity is 8
	I0916 11:00:48.568460  182212 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:00:48.568464  182212 node_conditions.go:123] node cpu capacity is 8
	I0916 11:00:48.568470  182212 node_conditions.go:105] duration metric: took 189.71073ms to run NodePressure ...
	I0916 11:00:48.568480  182212 start.go:241] waiting for startup goroutines ...
	I0916 11:00:48.568505  182212 start.go:255] writing updated cluster config ...
	I0916 11:00:48.568796  182212 ssh_runner.go:195] Run: rm -f paused
	I0916 11:00:48.574873  182212 out.go:177] * Done! kubectl is now configured to use "multinode-079070" cluster and "default" namespace by default
	E0916 11:00:48.575955  182212 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fa4a14dd250a9       12968670680f4       41 seconds ago      Running             kindnet-cni               2                   259db18aea2e6       kindnet-flmdv
	3d413b7b1a8eb       8c811b4aec35f       41 seconds ago      Running             busybox                   2                   4c98ed70ef014       busybox-7dff88458-pjlvx
	6a47b9f2fa06e       c69fa2e9cbf5f       41 seconds ago      Running             coredns                   2                   e2106a1fbef85       coredns-7c65d6cfc9-ft9gh
	b98c46a0c30cd       6e38f40d628db       41 seconds ago      Exited              storage-provisioner       3                   76d4106a9719c       storage-provisioner
	8917f146f1620       60c005f310ff3       41 seconds ago      Running             kube-proxy                2                   e95b947005b5d       kube-proxy-2vhmt
	b5befb68baf51       175ffd71cce3d       45 seconds ago      Running             kube-controller-manager   2                   f8757a65895f3       kube-controller-manager-multinode-079070
	396666a5f33cc       6bab7719df100       45 seconds ago      Running             kube-apiserver            2                   4dc8c8f7df443       kube-apiserver-multinode-079070
	bd343c585217c       9aa1fad941575       45 seconds ago      Running             kube-scheduler            2                   a1e5b577dcd15       kube-scheduler-multinode-079070
	0b09fd36dc229       2e96e5913fc06       45 seconds ago      Running             etcd                      2                   6cccdd8798cbe       etcd-multinode-079070
	e7dd060f7494b       12968670680f4       2 minutes ago       Exited              kindnet-cni               1                   60b7e4d184cb1       kindnet-flmdv
	f11253e8ef61a       60c005f310ff3       2 minutes ago       Exited              kube-proxy                1                   bd07015878a2b       kube-proxy-2vhmt
	9f936546ae131       c69fa2e9cbf5f       2 minutes ago       Exited              coredns                   1                   9f22be20239e6       coredns-7c65d6cfc9-ft9gh
	fb80a77bac6e7       8c811b4aec35f       2 minutes ago       Exited              busybox                   1                   8cc3146f6064d       busybox-7dff88458-pjlvx
	ca0cc800d9c78       9aa1fad941575       2 minutes ago       Exited              kube-scheduler            1                   cfba435487c50       kube-scheduler-multinode-079070
	50645a9df44a5       2e96e5913fc06       2 minutes ago       Exited              etcd                      1                   f06f43a302aa5       etcd-multinode-079070
	f8c9dd99b83da       6bab7719df100       2 minutes ago       Exited              kube-apiserver            1                   b27c67f5a330d       kube-apiserver-multinode-079070
	224f3c76893fd       175ffd71cce3d       2 minutes ago       Exited              kube-controller-manager   1                   20b671abc1444       kube-controller-manager-multinode-079070
	
	
	==> containerd <==
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.555224184Z" level=info msg="CreateContainer within sandbox \"76d4106a9719cec07eef2dc4c5c97bacdbee08f0a97a93e02aae42ce418b2478\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:3,}"
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.621397142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ft9gh,Uid:8052b6a1-7257-44d4-a318-740afd039d2c,Namespace:kube-system,Attempt:2,} returns sandbox id \"e2106a1fbef85abac7432c15b098ca11be55ef5ea8f6c90741c040df094c44ca\""
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.624494821Z" level=info msg="CreateContainer within sandbox \"e2106a1fbef85abac7432c15b098ca11be55ef5ea8f6c90741c040df094c44ca\" for container &ContainerMetadata{Name:coredns,Attempt:2,}"
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.631457408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox-7dff88458-pjlvx,Uid:e697a697-12c1-405c-bc2e-fa881b5fd008,Namespace:default,Attempt:2,} returns sandbox id \"4c98ed70ef014ea0bc9e06cb617ab17fe9f2d01859b93cee7ebd9df5c9ecaeb7\""
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.637415855Z" level=info msg="CreateContainer within sandbox \"76d4106a9719cec07eef2dc4c5c97bacdbee08f0a97a93e02aae42ce418b2478\" for &ContainerMetadata{Name:storage-provisioner,Attempt:3,} returns container id \"b98c46a0c30cd2ae6e39a2406e8b9c533294175eff2437b7114b7d2bf84c145e\""
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.637789988Z" level=info msg="StartContainer for \"b98c46a0c30cd2ae6e39a2406e8b9c533294175eff2437b7114b7d2bf84c145e\""
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.638707081Z" level=info msg="CreateContainer within sandbox \"4c98ed70ef014ea0bc9e06cb617ab17fe9f2d01859b93cee7ebd9df5c9ecaeb7\" for container &ContainerMetadata{Name:busybox,Attempt:2,}"
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.640338917Z" level=info msg="CreateContainer within sandbox \"e2106a1fbef85abac7432c15b098ca11be55ef5ea8f6c90741c040df094c44ca\" for &ContainerMetadata{Name:coredns,Attempt:2,} returns container id \"6a47b9f2fa06e1166951c0acad47eb3669e477ce0d9c208153d9eaced12022d6\""
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.640831073Z" level=info msg="StartContainer for \"6a47b9f2fa06e1166951c0acad47eb3669e477ce0d9c208153d9eaced12022d6\""
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.651502113Z" level=info msg="CreateContainer within sandbox \"4c98ed70ef014ea0bc9e06cb617ab17fe9f2d01859b93cee7ebd9df5c9ecaeb7\" for &ContainerMetadata{Name:busybox,Attempt:2,} returns container id \"3d413b7b1a8eb2221c0e1b0d2fb16776044145149d5fdccac98ecd1f45c2b982\""
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.653611560Z" level=info msg="StartContainer for \"3d413b7b1a8eb2221c0e1b0d2fb16776044145149d5fdccac98ecd1f45c2b982\""
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.743438866Z" level=info msg="StartContainer for \"8917f146f1620dcaacf5849e72ba4e883325655ce5dd77d7d54dfb79b271f268\" returns successfully"
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.745085779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kindnet-flmdv,Uid:91449e63-0ca3-4dc6-92ef-e3c5ab102dae,Namespace:kube-system,Attempt:2,} returns sandbox id \"259db18aea2e6941fcd92d03288783cbba0e53b41eb23dcc45f8fc6f3fed5995\""
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.751437237Z" level=info msg="CreateContainer within sandbox \"259db18aea2e6941fcd92d03288783cbba0e53b41eb23dcc45f8fc6f3fed5995\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:2,}"
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.832571110Z" level=info msg="StartContainer for \"6a47b9f2fa06e1166951c0acad47eb3669e477ce0d9c208153d9eaced12022d6\" returns successfully"
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.832698260Z" level=info msg="StartContainer for \"b98c46a0c30cd2ae6e39a2406e8b9c533294175eff2437b7114b7d2bf84c145e\" returns successfully"
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.845616295Z" level=info msg="CreateContainer within sandbox \"259db18aea2e6941fcd92d03288783cbba0e53b41eb23dcc45f8fc6f3fed5995\" for &ContainerMetadata{Name:kindnet-cni,Attempt:2,} returns container id \"fa4a14dd250a9b449f007f261c67db560f38b6b3e45a6320b0ca1e09814fc9e8\""
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.848015854Z" level=info msg="StartContainer for \"fa4a14dd250a9b449f007f261c67db560f38b6b3e45a6320b0ca1e09814fc9e8\""
	Sep 16 11:00:08 multinode-079070 containerd[600]: time="2024-09-16T11:00:08.923960723Z" level=info msg="StartContainer for \"3d413b7b1a8eb2221c0e1b0d2fb16776044145149d5fdccac98ecd1f45c2b982\" returns successfully"
	Sep 16 11:00:09 multinode-079070 containerd[600]: time="2024-09-16T11:00:09.039830506Z" level=info msg="StartContainer for \"fa4a14dd250a9b449f007f261c67db560f38b6b3e45a6320b0ca1e09814fc9e8\" returns successfully"
	Sep 16 11:00:38 multinode-079070 containerd[600]: time="2024-09-16T11:00:38.944999145Z" level=info msg="shim disconnected" id=b98c46a0c30cd2ae6e39a2406e8b9c533294175eff2437b7114b7d2bf84c145e namespace=k8s.io
	Sep 16 11:00:38 multinode-079070 containerd[600]: time="2024-09-16T11:00:38.945069460Z" level=warning msg="cleaning up after shim disconnected" id=b98c46a0c30cd2ae6e39a2406e8b9c533294175eff2437b7114b7d2bf84c145e namespace=k8s.io
	Sep 16 11:00:38 multinode-079070 containerd[600]: time="2024-09-16T11:00:38.945080933Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 11:00:39 multinode-079070 containerd[600]: time="2024-09-16T11:00:39.217666088Z" level=info msg="RemoveContainer for \"b73ca772183b59baa5be8014c698ba6eb41bdef3f84e5083cb7a0313a0fb938d\""
	Sep 16 11:00:39 multinode-079070 containerd[600]: time="2024-09-16T11:00:39.222039396Z" level=info msg="RemoveContainer for \"b73ca772183b59baa5be8014c698ba6eb41bdef3f84e5083cb7a0313a0fb938d\" returns successfully"
	
	
	==> coredns [6a47b9f2fa06e1166951c0acad47eb3669e477ce0d9c208153d9eaced12022d6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55042 - 4614 "HINFO IN 3400725033068006219.4679708335615470129. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011865851s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1550661726]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:00:08.934) (total time: 30000ms):
	Trace[1550661726]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:00:38.935)
	Trace[1550661726]: [30.000636073s] [30.000636073s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1312392482]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:00:08.934) (total time: 30000ms):
	Trace[1312392482]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:00:38.935)
	Trace[1312392482]: [30.000597926s] [30.000597926s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[430856135]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:00:08.934) (total time: 30000ms):
	Trace[430856135]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:00:38.935)
	Trace[430856135]: [30.000800919s] [30.000800919s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [9f936546ae13163e90e47cc8dcec45a4a44eb6f873708c6deb509ebe216c4213] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:39434 - 23847 "HINFO IN 6529897643441096498.450809085404921830. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010818363s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1119269091]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:58:33.843) (total time: 30001ms):
	Trace[1119269091]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:59:03.845)
	Trace[1119269091]: [30.001853356s] [30.001853356s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1128850326]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:58:33.843) (total time: 30001ms):
	Trace[1128850326]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:59:03.845)
	Trace[1128850326]: [30.001976921s] [30.001976921s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[595338145]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 10:58:33.843) (total time: 30002ms):
	Trace[595338145]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:59:03.845)
	Trace[595338145]: [30.002100587s] [30.002100587s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               multinode-079070
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-079070
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-079070
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T10_56_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:56:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-079070
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:00:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:00:08 +0000   Mon, 16 Sep 2024 10:56:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:00:08 +0000   Mon, 16 Sep 2024 10:56:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:00:08 +0000   Mon, 16 Sep 2024 10:56:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:00:08 +0000   Mon, 16 Sep 2024 10:56:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-079070
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 849ae5d0e2574642941e645733bfe580
	  System UUID:                aacf5fc8-9d89-4df8-b6e3-7265bb86b554
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-pjlvx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 coredns-7c65d6cfc9-ft9gh                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m19s
	  kube-system                 etcd-multinode-079070                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m24s
	  kube-system                 kindnet-flmdv                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m19s
	  kube-system                 kube-apiserver-multinode-079070             250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-controller-manager-multinode-079070    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-proxy-2vhmt                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-scheduler-multinode-079070             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 41s                    kube-proxy       
	  Normal   Starting                 2m16s                  kube-proxy       
	  Normal   Starting                 4m18s                  kube-proxy       
	  Normal   Starting                 4m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m25s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  4m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m24s                  kubelet          Node multinode-079070 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m24s                  kubelet          Node multinode-079070 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m24s                  kubelet          Node multinode-079070 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m20s                  node-controller  Node multinode-079070 event: Registered Node multinode-079070 in Controller
	  Normal   NodeHasSufficientMemory  2m23s (x8 over 2m23s)  kubelet          Node multinode-079070 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m23s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m23s (x7 over 2m23s)  kubelet          Node multinode-079070 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m23s (x7 over 2m23s)  kubelet          Node multinode-079070 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m15s                  node-controller  Node multinode-079070 event: Registered Node multinode-079070 in Controller
	  Normal   Starting                 47s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 47s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)      kubelet          Node multinode-079070 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x7 over 46s)      kubelet          Node multinode-079070 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x7 over 46s)      kubelet          Node multinode-079070 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  46s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           39s                    node-controller  Node multinode-079070 event: Registered Node multinode-079070 in Controller
	
	
	Name:               multinode-079070-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-079070-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=multinode-079070
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_09_16T10_56_58_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 10:56:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-079070-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:00:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:00:31 +0000   Mon, 16 Sep 2024 10:56:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:00:31 +0000   Mon, 16 Sep 2024 10:56:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:00:31 +0000   Mon, 16 Sep 2024 10:56:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:00:31 +0000   Mon, 16 Sep 2024 10:56:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.3
	  Hostname:    multinode-079070-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 8bf28f26c74644d58f91772dd4f5408b
	  System UUID:                230f6bd5-a1b9-46e1-be41-9ec64c608739
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-x6h7b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kindnet-fs5x4              100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m53s
	  kube-system                 kube-proxy-xkr65           0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m50s                  kube-proxy       
	  Normal   Starting                 110s                   kube-proxy       
	  Normal   Starting                 3s                     kube-proxy       
	  Warning  CgroupV1                 3m53s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  3m53s (x2 over 3m53s)  kubelet          Node multinode-079070-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m53s (x2 over 3m53s)  kubelet          Node multinode-079070-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m53s (x2 over 3m53s)  kubelet          Node multinode-079070-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m53s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                3m52s                  kubelet          Node multinode-079070-m02 status is now: NodeReady
	  Normal   RegisteredNode           3m50s                  node-controller  Node multinode-079070-m02 event: Registered Node multinode-079070-m02 in Controller
	  Normal   RegisteredNode           2m15s                  node-controller  Node multinode-079070-m02 event: Registered Node multinode-079070-m02 in Controller
	  Normal   NodeAllocatableEnforced  2m                     kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 2m                     kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m                     kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  114s (x7 over 2m)      kubelet          Node multinode-079070-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     114s (x7 over 2m)      kubelet          Node multinode-079070-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    114s (x7 over 2m)      kubelet          Node multinode-079070-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           39s                    node-controller  Node multinode-079070-m02 event: Registered Node multinode-079070-m02 in Controller
	  Normal   Starting                 26s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 26s                    kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  26s                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  19s (x7 over 26s)      kubelet          Node multinode-079070-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19s (x7 over 26s)      kubelet          Node multinode-079070-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19s (x7 over 26s)      kubelet          Node multinode-079070-m02 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.000005] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000001] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000001] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[Sep16 11:00] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000002] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000040] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000004] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +1.028430] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.004229] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000004] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +2.011572] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000009] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +4.031652] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000018] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +8.195254] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000007] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	
	
	==> etcd [0b09fd36dc229f2a7b2391cdd98556eb763ad25cf15c7f2fcbd150608340f61b] <==
	{"level":"info","ts":"2024-09-16T11:00:05.042684Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-09-16T11:00:05.042801Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:00:05.042844Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:00:05.042959Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:00:05.045948Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:00:05.046147Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T11:00:05.046199Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T11:00:05.046237Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:00:05.046280Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:00:06.834073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2024-09-16T11:00:06.834126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-09-16T11:00:06.834187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-09-16T11:00:06.834210Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2024-09-16T11:00:06.834220Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-09-16T11:00:06.834244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2024-09-16T11:00:06.834259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-09-16T11:00:06.835348Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-079070 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:00:06.835396Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:00:06.835443Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:00:06.835647Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:00:06.835731Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:00:06.837071Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:00:06.837351Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:00:06.838362Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:00:06.838607Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	
	
	==> etcd [50645a9df44a5be5ef6705e3c8cc321dc230a8a742eff68356246f7fd9869b85] <==
	{"level":"info","ts":"2024-09-16T10:58:28.752744Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:58:28.752781Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:58:28.752793Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T10:58:28.753007Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:58:28.753018Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-09-16T10:58:28.754378Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2024-09-16T10:58:28.754448Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-09-16T10:58:28.754544Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:58:28.754573Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T10:58:30.638607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T10:58:30.638667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T10:58:30.638691Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-09-16T10:58:30.638704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T10:58:30.638709Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-09-16T10:58:30.638718Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T10:58:30.638725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-09-16T10:58:30.641363Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-079070 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T10:58:30.641415Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:58:30.641435Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T10:58:30.641822Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T10:58:30.641894Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T10:58:30.642596Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:58:30.642630Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T10:58:30.643443Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T10:58:30.643446Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	
	
	==> kernel <==
	 11:00:50 up 43 min,  0 users,  load average: 2.04, 1.43, 1.16
	Linux multinode-079070 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [e7dd060f7494bc9b42225cbca571b99a4eff363411d2e3c5d94b7fe635b2c5fc] <==
	I0916 10:59:04.724597       1 main.go:299] handling current node
	I0916 10:59:04.724612       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:59:04.724617       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:59:04.724755       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:59:04.724767       1 main.go:322] Node multinode-079070-m03 has CIDR [10.244.2.0/24] 
	I0916 10:59:14.722919       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:59:14.722968       1 main.go:299] handling current node
	I0916 10:59:14.722983       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:59:14.722988       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:59:14.723127       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:59:14.723143       1 main.go:322] Node multinode-079070-m03 has CIDR [10.244.2.0/24] 
	I0916 10:59:24.721022       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:59:24.721063       1 main.go:299] handling current node
	I0916 10:59:24.721079       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:59:24.721084       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:59:24.721270       1 main.go:295] Handling node with IPs: map[192.168.67.4:{}]
	I0916 10:59:24.721281       1 main.go:322] Node multinode-079070-m03 has CIDR [10.244.2.0/24] 
	I0916 10:59:34.720281       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:59:34.720326       1 main.go:299] handling current node
	I0916 10:59:34.720346       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:59:34.720351       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 10:59:44.727844       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 10:59:44.727884       1 main.go:299] handling current node
	I0916 10:59:44.727910       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 10:59:44.727918       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [fa4a14dd250a9b449f007f261c67db560f38b6b3e45a6320b0ca1e09814fc9e8] <==
	I0916 11:00:09.224740       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:00:09.224763       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:00:09.548960       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:00:09.548977       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:00:09.548983       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:00:09.920275       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:00:09.920310       1 metrics.go:61] Registering metrics
	I0916 11:00:09.920414       1 controller.go:374] Syncing nftables rules
	I0916 11:00:19.549317       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 11:00:19.549395       1 main.go:299] handling current node
	I0916 11:00:19.552154       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 11:00:19.552191       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 11:00:19.552400       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.67.3 Flags: [] Table: 0} 
	I0916 11:00:29.554899       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 11:00:29.554939       1 main.go:299] handling current node
	I0916 11:00:29.554957       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 11:00:29.554962       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 11:00:39.548928       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 11:00:39.548985       1 main.go:299] handling current node
	I0916 11:00:39.549005       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 11:00:39.549011       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 11:00:49.549333       1 main.go:295] Handling node with IPs: map[192.168.67.3:{}]
	I0916 11:00:49.549374       1 main.go:322] Node multinode-079070-m02 has CIDR [10.244.1.0/24] 
	I0916 11:00:49.549529       1 main.go:295] Handling node with IPs: map[192.168.67.2:{}]
	I0916 11:00:49.549548       1 main.go:299] handling current node
	
	
	==> kube-apiserver [396666a5f33ccb0a8b755c495fe8f7fb01450201377cc5e93215dc63fdd5471e] <==
	I0916 11:00:07.823267       1 apiservice_controller.go:100] Starting APIServiceRegistrationController
	I0916 11:00:07.823948       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0916 11:00:07.823400       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0916 11:00:07.823415       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0916 11:00:07.925516       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 11:00:08.019897       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 11:00:08.019945       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 11:00:08.019964       1 policy_source.go:224] refreshing policies
	I0916 11:00:08.020845       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 11:00:08.021509       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 11:00:08.021558       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 11:00:08.021598       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:00:08.021613       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:00:08.021620       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:00:08.021626       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:00:08.023437       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 11:00:08.023534       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 11:00:08.024025       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 11:00:08.024051       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E0916 11:00:08.030067       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 11:00:08.031061       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:00:08.031115       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 11:00:08.826020       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:00:11.483812       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:00:11.684200       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [f8c9dd99b83dacf4270ec16fb010b101dbdc6c7542deaf690a717fb265515d4a] <==
	I0916 10:58:31.622575       1 cache.go:32] Waiting for caches to sync for RemoteAvailability controller
	I0916 10:58:31.622741       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0916 10:58:31.621445       1 cluster_authentication_trust_controller.go:443] Starting cluster_authentication_trust_controller controller
	I0916 10:58:31.622951       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0916 10:58:31.724432       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 10:58:31.725774       1 policy_source.go:224] refreshing policies
	I0916 10:58:31.726045       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 10:58:31.724631       1 shared_informer.go:320] Caches are synced for configmaps
	I0916 10:58:31.724652       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0916 10:58:31.727817       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 10:58:31.727964       1 aggregator.go:171] initial CRD sync complete...
	I0916 10:58:31.728042       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 10:58:31.728080       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 10:58:31.728136       1 cache.go:39] Caches are synced for autoregister controller
	I0916 10:58:31.738090       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 10:58:31.740343       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 10:58:31.820145       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0916 10:58:31.823882       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0916 10:58:31.823988       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0916 10:58:31.824807       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0916 10:58:31.824824       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	E0916 10:58:31.844633       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0916 10:58:32.624625       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 10:58:35.346502       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 10:58:35.394006       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [224f3c76893fd9b065b89216b3facf9e0652faec36d68b791b48068b9f5cef50] <==
	I0916 10:58:35.295610       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 10:58:35.301765       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="250.90877ms"
	I0916 10:58:35.301873       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="65.447µs"
	I0916 10:58:35.710816       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:58:35.727216       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 10:58:35.727248       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 10:58:56.459155       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m02"
	I0916 10:58:59.100413       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.583041ms"
	I0916 10:58:59.100515       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="53.596µs"
	I0916 10:59:00.143017       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="8.083825ms"
	I0916 10:59:00.143132       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="65.535µs"
	I0916 10:59:11.803604       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="8.310689ms"
	I0916 10:59:11.803761       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="76.134µs"
	I0916 10:59:15.305747       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:15.305805       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-079070-m02"
	I0916 10:59:15.316502       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:20.349665       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:21.457175       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:21.457182       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-079070-m03"
	I0916 10:59:21.465445       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:25.333709       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:26.632146       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:26.641172       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	I0916 10:59:27.126032       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-079070-m02"
	I0916 10:59:27.126069       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m03"
	
	
	==> kube-controller-manager [b5befb68baf512b8ca6829a5fe38d52d9799f283ca53e481feab013a6f9e6678] <==
	I0916 11:00:11.286579       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m02"
	I0916 11:00:11.290703       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0916 11:00:11.368384       1 shared_informer.go:320] Caches are synced for disruption
	I0916 11:00:11.430235       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0916 11:00:11.430255       1 shared_informer.go:320] Caches are synced for PV protection
	I0916 11:00:11.431412       1 shared_informer.go:320] Caches are synced for attach detach
	I0916 11:00:11.438123       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0916 11:00:11.461319       1 shared_informer.go:320] Caches are synced for persistent volume
	I0916 11:00:11.471436       1 shared_informer.go:320] Caches are synced for cronjob
	I0916 11:00:11.484635       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:00:11.487395       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:00:11.532740       1 shared_informer.go:320] Caches are synced for job
	I0916 11:00:11.589033       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="358.17709ms"
	I0916 11:00:11.589361       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="76.15µs"
	I0916 11:00:11.901255       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:00:11.942831       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:00:11.942864       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:00:31.087536       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="multinode-079070-m02"
	I0916 11:00:33.742923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="5.167339ms"
	I0916 11:00:33.743001       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="38.374µs"
	I0916 11:00:34.802544       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="62.945µs"
	I0916 11:00:45.208053       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.928536ms"
	I0916 11:00:45.208165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.524µs"
	I0916 11:00:49.829932       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="6.327314ms"
	I0916 11:00:49.830030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="52.771µs"
	
	
	==> kube-proxy [8917f146f1620dcaacf5849e72ba4e883325655ce5dd77d7d54dfb79b271f268] <==
	I0916 11:00:08.928024       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:00:09.080320       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	E0916 11:00:09.080416       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:00:09.130032       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:00:09.130119       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:00:09.132511       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:00:09.132869       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:00:09.132888       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:00:09.134119       1 config.go:199] "Starting service config controller"
	I0916 11:00:09.134130       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:00:09.134138       1 config.go:328] "Starting node config controller"
	I0916 11:00:09.134152       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:00:09.134154       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:00:09.134177       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:00:09.235084       1 shared_informer.go:320] Caches are synced for node config
	I0916 11:00:09.235169       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:00:09.235253       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [f11253e8ef61a01c3740b24f1b74855531922a5c71ae0705b35472b9baa28a46] <==
	I0916 10:58:34.155147       1 server_linux.go:66] "Using iptables proxy"
	I0916 10:58:34.267226       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	E0916 10:58:34.267320       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 10:58:34.285784       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 10:58:34.285854       1 server_linux.go:169] "Using iptables Proxier"
	I0916 10:58:34.287753       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 10:58:34.288177       1 server.go:483] "Version info" version="v1.31.1"
	I0916 10:58:34.288207       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:58:34.289596       1 config.go:105] "Starting endpoint slice config controller"
	I0916 10:58:34.289816       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 10:58:34.289652       1 config.go:199] "Starting service config controller"
	I0916 10:58:34.289903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 10:58:34.289712       1 config.go:328] "Starting node config controller"
	I0916 10:58:34.289977       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 10:58:34.390019       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 10:58:34.390050       1 shared_informer.go:320] Caches are synced for node config
	I0916 10:58:34.390025       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [bd343c585217cad723a3c810f0ad7d360f8d52d99304cceba8fad478a482e7f1] <==
	I0916 11:00:05.471105       1 serving.go:386] Generated self-signed cert in-memory
	W0916 11:00:07.834905       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:00:07.834941       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:00:07.834954       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:00:07.834964       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:00:07.933561       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 11:00:07.933842       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:00:07.936595       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 11:00:07.936656       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:00:07.936791       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 11:00:07.936825       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 11:00:08.037166       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ca0cc800d9c7855a343484f3d2f0ffc35459a84c699a3c4d1a4f9fc511b1b850] <==
	I0916 10:58:29.733121       1 serving.go:386] Generated self-signed cert in-memory
	W0916 10:58:31.727169       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 10:58:31.727213       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 10:58:31.727225       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 10:58:31.727234       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 10:58:31.822085       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 10:58:31.822120       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 10:58:31.825101       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 10:58:31.825732       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 10:58:31.829644       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 10:58:31.829688       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 10:58:31.930692       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:00:08 multinode-079070 kubelet[733]: I0916 11:00:08.021132     733 apiserver.go:52] "Watching apiserver"
	Sep 16 11:00:08 multinode-079070 kubelet[733]: I0916 11:00:08.044815     733 kubelet_node_status.go:111] "Node was previously registered" node="multinode-079070"
	Sep 16 11:00:08 multinode-079070 kubelet[733]: I0916 11:00:08.044923     733 kubelet_node_status.go:75] "Successfully registered node" node="multinode-079070"
	Sep 16 11:00:08 multinode-079070 kubelet[733]: I0916 11:00:08.044969     733 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 11:00:08 multinode-079070 kubelet[733]: I0916 11:00:08.045844     733 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 11:00:08 multinode-079070 kubelet[733]: I0916 11:00:08.063489     733 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 11:00:08 multinode-079070 kubelet[733]: I0916 11:00:08.142268     733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/91449e63-0ca3-4dc6-92ef-e3c5ab102dae-cni-cfg\") pod \"kindnet-flmdv\" (UID: \"91449e63-0ca3-4dc6-92ef-e3c5ab102dae\") " pod="kube-system/kindnet-flmdv"
	Sep 16 11:00:08 multinode-079070 kubelet[733]: I0916 11:00:08.142356     733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f3faf85-04e9-4840-855d-dd1ef9d4e463-xtables-lock\") pod \"kube-proxy-2vhmt\" (UID: \"6f3faf85-04e9-4840-855d-dd1ef9d4e463\") " pod="kube-system/kube-proxy-2vhmt"
	Sep 16 11:00:08 multinode-079070 kubelet[733]: I0916 11:00:08.142437     733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91449e63-0ca3-4dc6-92ef-e3c5ab102dae-xtables-lock\") pod \"kindnet-flmdv\" (UID: \"91449e63-0ca3-4dc6-92ef-e3c5ab102dae\") " pod="kube-system/kindnet-flmdv"
	Sep 16 11:00:08 multinode-079070 kubelet[733]: I0916 11:00:08.142505     733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/43862f2e-c773-468d-ab03-8b0bc0633ad4-tmp\") pod \"storage-provisioner\" (UID: \"43862f2e-c773-468d-ab03-8b0bc0633ad4\") " pod="kube-system/storage-provisioner"
	Sep 16 11:00:08 multinode-079070 kubelet[733]: I0916 11:00:08.142572     733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f3faf85-04e9-4840-855d-dd1ef9d4e463-lib-modules\") pod \"kube-proxy-2vhmt\" (UID: \"6f3faf85-04e9-4840-855d-dd1ef9d4e463\") " pod="kube-system/kube-proxy-2vhmt"
	Sep 16 11:00:08 multinode-079070 kubelet[733]: I0916 11:00:08.142597     733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91449e63-0ca3-4dc6-92ef-e3c5ab102dae-lib-modules\") pod \"kindnet-flmdv\" (UID: \"91449e63-0ca3-4dc6-92ef-e3c5ab102dae\") " pod="kube-system/kindnet-flmdv"
	Sep 16 11:00:08 multinode-079070 kubelet[733]: I0916 11:00:08.148584     733 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 11:00:14 multinode-079070 kubelet[733]: E0916 11:00:14.066860     733 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 11:00:14 multinode-079070 kubelet[733]: E0916 11:00:14.066907     733 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 11:00:15 multinode-079070 kubelet[733]: I0916 11:00:15.187498     733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Sep 16 11:00:24 multinode-079070 kubelet[733]: E0916 11:00:24.088165     733 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 11:00:24 multinode-079070 kubelet[733]: E0916 11:00:24.088228     733 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 11:00:34 multinode-079070 kubelet[733]: E0916 11:00:34.106441     733 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 11:00:34 multinode-079070 kubelet[733]: E0916 11:00:34.106524     733 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	Sep 16 11:00:39 multinode-079070 kubelet[733]: I0916 11:00:39.216491     733 scope.go:117] "RemoveContainer" containerID="b73ca772183b59baa5be8014c698ba6eb41bdef3f84e5083cb7a0313a0fb938d"
	Sep 16 11:00:39 multinode-079070 kubelet[733]: I0916 11:00:39.216932     733 scope.go:117] "RemoveContainer" containerID="b98c46a0c30cd2ae6e39a2406e8b9c533294175eff2437b7114b7d2bf84c145e"
	Sep 16 11:00:39 multinode-079070 kubelet[733]: E0916 11:00:39.217127     733 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(43862f2e-c773-468d-ab03-8b0bc0633ad4)\"" pod="kube-system/storage-provisioner" podUID="43862f2e-c773-468d-ab03-8b0bc0633ad4"
	Sep 16 11:00:44 multinode-079070 kubelet[733]: E0916 11:00:44.122620     733 summary_sys_containers.go:51] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	Sep 16 11:00:44 multinode-079070 kubelet[733]: E0916 11:00:44.122663     733 helpers.go:854] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	

                                                
                                                
-- /stdout --
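The "Starting ... controller" / "Waiting for caches to sync" / "Caches are synced" triplets that dominate the kube-proxy and kube-scheduler logs above are the standard client-go shared-informer startup handshake: each component starts its informers, then blocks until the local caches have caught up with the apiserver before serving anything. A minimal sketch of that pattern, assuming client-go and a reachable kubeconfig (illustrative only, not kube-proxy's actual code):

package main

import (
	"fmt"
	"os"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at a valid kubeconfig file.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	svc := factory.Core().V1().Services().Informer() // cf. "Starting service config controller"

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// cf. "Waiting for caches to sync" ... "Caches are synced"
	if !cache.WaitForCacheSync(stop, svc.HasSynced) {
		fmt.Fprintln(os.Stderr, "caches did not sync")
		os.Exit(1)
	}
	fmt.Println("caches are synced; safe to handle events")
}

Until WaitForCacheSync returns true the component deliberately does no work, which is why both kube-proxy instances above log the wait/synced pair within roughly 100ms of startup before touching iptables.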
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-079070 -n multinode-079070
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-079070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context multinode-079070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (531.654µs)
helpers_test.go:263: kubectl --context multinode-079070 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestMultiNode/serial/RestartMultiNode (53.87s)
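The recurring "fork/exec /usr/local/bin/kubectl: exec format error" (above, and again in TestKubernetesUpgrade below) is the kernel refusing to execute a binary built for a different architecture, or a corrupted/truncated one; it is a test-agent problem, not a cluster fault. A minimal diagnostic sketch for checking the binary's ELF machine type, assuming a Go toolchain on the agent (hypothetical helper, not part of the test suite):

package main

import (
	"debug/elf"
	"fmt"
	"os"
)

func main() {
	// Path taken from the failing commands above.
	f, err := elf.Open("/usr/local/bin/kubectl")
	if err != nil {
		// A non-ELF file (e.g. a partially downloaded binary) also
		// yields "exec format error" when exec'd.
		fmt.Fprintln(os.Stderr, "open:", err)
		os.Exit(1)
	}
	defer f.Close()
	// An amd64 host can only exec EM_X86_64 ELF binaries; EM_AARCH64 or
	// any other machine type here would reproduce the failure exactly.
	fmt.Printf("class=%v machine=%v\n", f.Class, f.Machine)
}

On an amd64 agent such as ubuntu-20-agent-8, anything other than EM_X86_64 (or an open error on a non-ELF file) would explain every kubectl-driven failure in this report.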

                                                
                                    
x
+
TestKubernetesUpgrade (322.37s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-311911 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-311911 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (50.757510973s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-311911
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-311911: (1.206513257s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-311911 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-311911 status --format={{.Host}}: exit status 7 (75.858805ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-311911 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-311911 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m26.458749547s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-311911 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-311911 version --output=json: fork/exec /usr/local/bin/kubectl: exec format error (715.751µs)
version_upgrade_test.go:250: error running kubectl: fork/exec /usr/local/bin/kubectl: exec format error
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-09-16 11:12:29.091270335 +0000 UTC m=+3016.073527413
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-311911
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-311911:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b8f8a6f70a413dbd0579d5bd8dcff7b35160c9da8312d633e3432ac70b8af13f",
	        "Created": "2024-09-16T11:07:20.401497642Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255044,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:08:03.161622847Z",
	            "FinishedAt": "2024-09-16T11:08:02.252935486Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/b8f8a6f70a413dbd0579d5bd8dcff7b35160c9da8312d633e3432ac70b8af13f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b8f8a6f70a413dbd0579d5bd8dcff7b35160c9da8312d633e3432ac70b8af13f/hostname",
	        "HostsPath": "/var/lib/docker/containers/b8f8a6f70a413dbd0579d5bd8dcff7b35160c9da8312d633e3432ac70b8af13f/hosts",
	        "LogPath": "/var/lib/docker/containers/b8f8a6f70a413dbd0579d5bd8dcff7b35160c9da8312d633e3432ac70b8af13f/b8f8a6f70a413dbd0579d5bd8dcff7b35160c9da8312d633e3432ac70b8af13f-json.log",
	        "Name": "/kubernetes-upgrade-311911",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-311911:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "kubernetes-upgrade-311911",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/694ae68e5cfb023c4c49a4d7486b026f1b22e063bea90d333ce359c65d7e3943-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/694ae68e5cfb023c4c49a4d7486b026f1b22e063bea90d333ce359c65d7e3943/merged",
	                "UpperDir": "/var/lib/docker/overlay2/694ae68e5cfb023c4c49a4d7486b026f1b22e063bea90d333ce359c65d7e3943/diff",
	                "WorkDir": "/var/lib/docker/overlay2/694ae68e5cfb023c4c49a4d7486b026f1b22e063bea90d333ce359c65d7e3943/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-311911",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-311911/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-311911",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-311911",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-311911",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1eb43924747c53eeed9d5bef1797fd5e6bb3addde51bb1dd9bdb00c08931cb2a",
	            "SandboxKey": "/var/run/docker/netns/1eb43924747c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33048"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-311911": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "45dc384def2853094b8670cce1eaf1f5f5c011a5a66333894087b0793b1532be",
	                    "EndpointID": "e15da6552d61692200c46099feaf085964d048e6cf18a813e381ed8a46773725",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-311911",
	                        "b8f8a6f70a41"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
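The docker inspect dump above is what the post-mortem collects for each profile container; the same State and NetworkSettings.Ports fields can be read programmatically with the Docker Go SDK. A minimal sketch, assuming github.com/docker/docker/client is available (the container name comes from this report):

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Container name from the post-mortem above.
	info, err := cli.ContainerInspect(context.Background(), "kubernetes-upgrade-311911")
	if err != nil {
		panic(err)
	}
	// Mirrors the "State.Status" and "NetworkSettings.Ports" fields in the dump.
	fmt.Println("status:", info.State.Status)
	for port, bindings := range info.NetworkSettings.Ports {
		fmt.Println(port, bindings)
	}
}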
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-311911 -n kubernetes-upgrade-311911
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-311911 logs -n 25
E0916 11:12:29.777143   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p cert-options-840054                                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:12 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-917705                              | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-917705                           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:10 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| ssh     | cert-options-840054 ssh                                | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-840054 -- sudo                         | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-840054                                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:09 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-349453             | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-349453                  | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-371039        | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-371039                              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-371039             | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-021107                              | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-021107                              | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	| start   | -p embed-certs-679624                                  | embed-certs-679624        | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:12 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-679624            | embed-certs-679624        | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p embed-certs-679624                                  | embed-certs-679624        | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-679624                 | embed-certs-679624        | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p embed-certs-679624                                  | embed-certs-679624        | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:12:15
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:12:15.055319  298514 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:12:15.055551  298514 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:12:15.055559  298514 out.go:358] Setting ErrFile to fd 2...
	I0916 11:12:15.055563  298514 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:12:15.055849  298514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:12:15.056436  298514 out.go:352] Setting JSON to false
	I0916 11:12:15.057758  298514 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3279,"bootTime":1726481856,"procs":353,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:12:15.057884  298514 start.go:139] virtualization: kvm guest
	I0916 11:12:15.059795  298514 out.go:177] * [embed-certs-679624] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:12:15.060965  298514 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:12:15.061003  298514 notify.go:220] Checking for updates...
	I0916 11:12:15.062999  298514 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:12:15.064255  298514 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:12:15.065383  298514 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:12:15.066484  298514 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:12:15.067754  298514 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:12:15.069426  298514 config.go:182] Loaded profile config "embed-certs-679624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:15.069886  298514 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:12:15.092919  298514 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:12:15.093055  298514 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:12:15.141084  298514 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:12:15.131810425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:12:15.141191  298514 docker.go:318] overlay module found
	I0916 11:12:15.142973  298514 out.go:177] * Using the docker driver based on existing profile
	I0916 11:12:15.144180  298514 start.go:297] selected driver: docker
	I0916 11:12:15.144193  298514 start.go:901] validating driver "docker" against &{Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:12:15.144278  298514 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:12:15.145068  298514 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:12:15.193906  298514 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:12:15.183038799 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:12:15.194376  298514 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:12:15.194419  298514 cni.go:84] Creating CNI manager for ""
	I0916 11:12:15.194484  298514 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:12:15.194569  298514 start.go:340] cluster config:
	{Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:12:15.196953  298514 out.go:177] * Starting "embed-certs-679624" primary control-plane node in "embed-certs-679624" cluster
	I0916 11:12:15.198587  298514 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:12:15.199979  298514 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:12:15.201299  298514 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:12:15.201341  298514 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:12:15.201338  298514 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 11:12:15.201360  298514 cache.go:56] Caching tarball of preloaded images
	I0916 11:12:15.201458  298514 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 11:12:15.201474  298514 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 11:12:15.201605  298514 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/config.json ...
	W0916 11:12:15.221952  298514 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:12:15.221970  298514 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:12:15.222055  298514 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:12:15.222070  298514 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:12:15.222078  298514 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:12:15.222089  298514 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:12:15.222100  298514 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:12:15.279395  298514 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:12:15.279437  298514 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:12:15.279467  298514 start.go:360] acquireMachinesLock for embed-certs-679624: {Name:mk5c5a1695ab7bba9827e17eb437dd80adf4e091 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:12:15.279529  298514 start.go:364] duration metric: took 43.264µs to acquireMachinesLock for "embed-certs-679624"
	I0916 11:12:15.279547  298514 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:12:15.279554  298514 fix.go:54] fixHost starting: 
	I0916 11:12:15.279832  298514 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:12:15.297301  298514 fix.go:112] recreateIfNeeded on embed-certs-679624: state=Stopped err=<nil>
	W0916 11:12:15.297333  298514 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:12:15.299178  298514 out.go:177] * Restarting existing docker container for "embed-certs-679624" ...
	I0916 11:12:12.758511  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:15.258014  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:12.720332  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:12:12.720750  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:12:12.720799  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:12:12.720849  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:12:12.757932  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:12:12.757957  254463 cri.go:89] found id: ""
	I0916 11:12:12.757967  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:12:12.758021  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:12:12.761338  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:12:12.761395  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:12:12.793914  254463 cri.go:89] found id: ""
	I0916 11:12:12.793936  254463 logs.go:276] 0 containers: []
	W0916 11:12:12.793944  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:12:12.793949  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:12:12.794001  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:12:12.826163  254463 cri.go:89] found id: ""
	I0916 11:12:12.826189  254463 logs.go:276] 0 containers: []
	W0916 11:12:12.826201  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:12:12.826208  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:12:12.826264  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:12:12.860654  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:12:12.860676  254463 cri.go:89] found id: ""
	I0916 11:12:12.860685  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:12:12.860740  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:12:12.864103  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:12:12.864175  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:12:12.896248  254463 cri.go:89] found id: ""
	I0916 11:12:12.896274  254463 logs.go:276] 0 containers: []
	W0916 11:12:12.896284  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:12:12.896290  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:12:12.896341  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:12:12.928597  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:12:12.928621  254463 cri.go:89] found id: ""
	I0916 11:12:12.928630  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:12:12.928683  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:12:12.932108  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:12:12.932165  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:12:12.966650  254463 cri.go:89] found id: ""
	I0916 11:12:12.966677  254463 logs.go:276] 0 containers: []
	W0916 11:12:12.966688  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:12:12.966695  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:12:12.966754  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:12:13.000751  254463 cri.go:89] found id: ""
	I0916 11:12:13.000777  254463 logs.go:276] 0 containers: []
	W0916 11:12:13.000798  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:12:13.000807  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:12:13.000820  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:12:13.036172  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:12:13.036196  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:12:13.132563  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:12:13.132603  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:12:13.153447  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:12:13.153480  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:12:13.211404  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:12:13.211425  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:12:13.211441  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:12:13.247193  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:12:13.247223  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:12:13.319281  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:12:13.319320  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:12:13.354127  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:12:13.354162  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
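	
	The gathering cycle above follows a fixed pattern: list all CRI containers (running or exited) for each control-plane component, then tail the logs of whichever IDs turn up. A minimal sketch of the same scan, run by hand on the node, assuming crictl is installed and containerd uses its default socket; the component names mirror those in the log:
	
		for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
		  # List all containers, including exited ones, whose name matches the component.
		  for id in $(sudo crictl ps -a --quiet --name="$name"); do
		    echo "== $name ($id) =="
		    # Tail the last 400 lines, the same window minikube gathers above.
		    sudo crictl logs --tail 400 "$id"
		  done
		done
	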
	I0916 11:12:15.903791  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:12:15.904260  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:12:15.904314  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:12:15.904361  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:12:15.949950  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:12:15.949978  254463 cri.go:89] found id: ""
	I0916 11:12:15.949988  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:12:15.950045  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:12:15.954235  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:12:15.954312  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:12:15.989960  254463 cri.go:89] found id: ""
	I0916 11:12:15.989988  254463 logs.go:276] 0 containers: []
	W0916 11:12:15.989999  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:12:15.990010  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:12:15.990078  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:12:16.023671  254463 cri.go:89] found id: ""
	I0916 11:12:16.023700  254463 logs.go:276] 0 containers: []
	W0916 11:12:16.023712  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:12:16.023720  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:12:16.023829  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:12:16.056296  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:12:16.056322  254463 cri.go:89] found id: ""
	I0916 11:12:16.056331  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:12:16.056388  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:12:16.059966  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:12:16.060033  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:12:16.095038  254463 cri.go:89] found id: ""
	I0916 11:12:16.095062  254463 logs.go:276] 0 containers: []
	W0916 11:12:16.095071  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:12:16.095077  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:12:16.095120  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:12:16.128301  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:12:16.128322  254463 cri.go:89] found id: ""
	I0916 11:12:16.128329  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:12:16.128373  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:12:16.131661  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:12:16.131716  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:12:16.164179  254463 cri.go:89] found id: ""
	I0916 11:12:16.164207  254463 logs.go:276] 0 containers: []
	W0916 11:12:16.164238  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:12:16.164247  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:12:16.164313  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:12:16.196968  254463 cri.go:89] found id: ""
	I0916 11:12:16.196993  254463 logs.go:276] 0 containers: []
	W0916 11:12:16.197004  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:12:16.197018  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:12:16.197037  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:12:16.271794  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:12:16.271835  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:12:16.304961  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:12:16.304985  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:12:16.352148  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:12:16.352189  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:12:16.389167  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:12:16.389198  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:12:16.479865  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:12:16.479902  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:12:16.502377  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:12:16.502414  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:12:16.561845  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:12:16.561881  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:12:16.561896  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:12:14.237920  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:16.238725  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:18.738144  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:19.097559  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:12:19.097947  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:12:19.098005  254463 kubeadm.go:597] duration metric: took 4m5.204947837s to restartPrimaryControlPlane
	W0916 11:12:19.098066  254463 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0916 11:12:19.098093  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0916 11:12:19.813698  254463 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:12:19.825917  254463 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:12:19.834908  254463 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:12:19.834974  254463 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:12:19.844370  254463 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:12:19.844394  254463 kubeadm.go:157] found existing configuration files:
	
	I0916 11:12:19.844446  254463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:12:19.853417  254463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:12:19.853486  254463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:12:19.862063  254463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:12:19.871425  254463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:12:19.871483  254463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:12:19.880008  254463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:12:19.888875  254463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:12:19.888936  254463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:12:19.897656  254463 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:12:19.906214  254463 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:12:19.906302  254463 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
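	
	The cleanup above applies one rule per file: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it. A sketch of the equivalent loop; in this run every grep fails because the files are already gone, so each rm -f is a no-op:
	
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  # grep exits non-zero when the endpoint is absent or the file is missing.
		  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
		    || sudo rm -f "/etc/kubernetes/$f"
		done
	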
	I0916 11:12:19.914529  254463 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:12:19.953797  254463 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:12:19.953870  254463 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:12:19.971822  254463 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:12:19.971942  254463 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:12:19.971986  254463 kubeadm.go:310] OS: Linux
	I0916 11:12:19.972042  254463 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:12:19.972085  254463 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:12:19.972143  254463 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:12:19.972193  254463 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:12:19.972255  254463 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:12:19.972348  254463 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:12:19.972410  254463 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:12:19.972473  254463 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:12:19.972521  254463 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:12:20.030099  254463 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:12:20.030276  254463 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:12:20.030442  254463 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
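	
	As the preflight hint says, the image pull can be done ahead of time. For this cluster's Kubernetes version and containerd socket that would look roughly like the following, run on the node (flags taken from values appearing elsewhere in this log):
	
		sudo kubeadm config images pull \
		  --kubernetes-version v1.31.1 \
		  --cri-socket unix:///run/containerd/containerd.sock
	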
	I0916 11:12:20.035426  254463 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:12:15.300323  298514 cli_runner.go:164] Run: docker start embed-certs-679624
	I0916 11:12:15.571704  298514 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:12:15.590955  298514 kic.go:430] container "embed-certs-679624" state is running.
	I0916 11:12:15.591417  298514 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-679624
	I0916 11:12:15.610450  298514 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/config.json ...
	I0916 11:12:15.610688  298514 machine.go:93] provisionDockerMachine start ...
	I0916 11:12:15.610781  298514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:12:15.629717  298514 main.go:141] libmachine: Using SSH client type: native
	I0916 11:12:15.629971  298514 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I0916 11:12:15.629988  298514 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:12:15.630715  298514 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50524->127.0.0.1:33083: read: connection reset by peer
	I0916 11:12:18.763248  298514 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-679624
	
	I0916 11:12:18.763275  298514 ubuntu.go:169] provisioning hostname "embed-certs-679624"
	I0916 11:12:18.763339  298514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:12:18.780898  298514 main.go:141] libmachine: Using SSH client type: native
	I0916 11:12:18.781106  298514 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I0916 11:12:18.781124  298514 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-679624 && echo "embed-certs-679624" | sudo tee /etc/hostname
	I0916 11:12:18.922862  298514 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-679624
	
	I0916 11:12:18.922950  298514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:12:18.942642  298514 main.go:141] libmachine: Using SSH client type: native
	I0916 11:12:18.942871  298514 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33083 <nil> <nil>}
	I0916 11:12:18.942896  298514 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-679624' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-679624/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-679624' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:12:19.076027  298514 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:12:19.076059  298514 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:12:19.076107  298514 ubuntu.go:177] setting up certificates
	I0916 11:12:19.076125  298514 provision.go:84] configureAuth start
	I0916 11:12:19.076196  298514 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-679624
	I0916 11:12:19.094193  298514 provision.go:143] copyHostCerts
	I0916 11:12:19.094259  298514 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:12:19.094268  298514 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:12:19.094346  298514 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:12:19.094428  298514 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:12:19.094436  298514 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:12:19.094461  298514 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:12:19.094510  298514 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:12:19.094517  298514 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:12:19.094540  298514 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:12:19.094589  298514 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.embed-certs-679624 san=[127.0.0.1 192.168.85.2 embed-certs-679624 localhost minikube]
	I0916 11:12:19.244186  298514 provision.go:177] copyRemoteCerts
	I0916 11:12:19.244265  298514 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:12:19.244307  298514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:12:19.262824  298514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:12:19.361975  298514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:12:19.385449  298514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0916 11:12:19.408688  298514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 11:12:19.431903  298514 provision.go:87] duration metric: took 355.762593ms to configureAuth
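	
	The server certificate generated above carries the SANs [127.0.0.1 192.168.85.2 embed-certs-679624 localhost minikube] and is copied to /etc/docker/server.pem on the machine. One way to double-check the SANs after provisioning, assuming openssl is available inside the node:
	
		# Run on the node (e.g. via `minikube ssh -p embed-certs-679624`).
		sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 "Subject Alternative Name"
	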
	I0916 11:12:19.431940  298514 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:12:19.432111  298514 config.go:182] Loaded profile config "embed-certs-679624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:19.432122  298514 machine.go:96] duration metric: took 3.821420726s to provisionDockerMachine
	I0916 11:12:19.432130  298514 start.go:293] postStartSetup for "embed-certs-679624" (driver="docker")
	I0916 11:12:19.432139  298514 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:12:19.432183  298514 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:12:19.432217  298514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:12:19.450092  298514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:12:19.545146  298514 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:12:19.548477  298514 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:12:19.548525  298514 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:12:19.548539  298514 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:12:19.548548  298514 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:12:19.548557  298514 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:12:19.548621  298514 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:12:19.548709  298514 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:12:19.548793  298514 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:12:19.556787  298514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:12:19.579869  298514 start.go:296] duration metric: took 147.724296ms for postStartSetup
	I0916 11:12:19.579946  298514 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:12:19.579981  298514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:12:19.597320  298514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:12:19.693105  298514 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:12:19.698096  298514 fix.go:56] duration metric: took 4.41853357s for fixHost
	I0916 11:12:19.698124  298514 start.go:83] releasing machines lock for "embed-certs-679624", held for 4.418584686s
	I0916 11:12:19.698224  298514 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-679624
	I0916 11:12:19.719827  298514 ssh_runner.go:195] Run: cat /version.json
	I0916 11:12:19.719874  298514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:12:19.719913  298514 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:12:19.719985  298514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:12:19.741140  298514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:12:19.742662  298514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:12:19.926158  298514 ssh_runner.go:195] Run: systemctl --version
	I0916 11:12:19.930700  298514 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:12:19.935362  298514 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:12:19.955414  298514 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:12:19.955493  298514 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:12:19.965415  298514 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
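	
	The find/sed pair above rewrites the stock loopback CNI config so it carries an explicit name and a CNI 1.0.0 version, which newer CNI validation requires. After patching, the file would look roughly like this; the exact file name and any extra keys are assumptions, only the two patched fields come from the commands in the log:
	
		{
		    "cniVersion": "1.0.0",
		    "name": "loopback",
		    "type": "loopback"
		}
	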
	I0916 11:12:19.965447  298514 start.go:495] detecting cgroup driver to use...
	I0916 11:12:19.965496  298514 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:12:19.965544  298514 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:12:19.981694  298514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:12:19.994377  298514 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:12:19.994440  298514 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:12:20.009405  298514 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:12:20.020873  298514 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:12:20.098593  298514 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:12:20.186194  298514 docker.go:233] disabling docker service ...
	I0916 11:12:20.186266  298514 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:12:20.197984  298514 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:12:20.209150  298514 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:12:20.296095  298514 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:12:20.371492  298514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:12:20.382992  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:12:20.399340  298514 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:12:20.409447  298514 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:12:20.419238  298514 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:12:20.419320  298514 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:12:20.429907  298514 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:12:20.441322  298514 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:12:20.452132  298514 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:12:20.463263  298514 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:12:20.472894  298514 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:12:20.483220  298514 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:12:20.492949  298514 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
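	
	Taken together, the sed edits above leave /etc/containerd/config.toml with roughly the following CRI settings. This is a sketch only: surrounding keys are elided and the exact table layout varies with the containerd version, but each value matches one of the substitutions in the log:
	
		[plugins."io.containerd.grpc.v1.cri"]
		  enable_unprivileged_ports = true
		  sandbox_image = "registry.k8s.io/pause:3.10"
		  restrict_oom_score_adj = false
		  [plugins."io.containerd.grpc.v1.cri".cni]
		    conf_dir = "/etc/cni/net.d"
		  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
		    runtime_type = "io.containerd.runc.v2"
		    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
		      SystemdCgroup = false
	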
	I0916 11:12:20.503287  298514 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:12:20.511540  298514 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:12:20.519864  298514 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:20.595352  298514 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 11:12:20.709340  298514 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:12:20.709409  298514 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:12:20.713184  298514 start.go:563] Will wait 60s for crictl version
	I0916 11:12:20.713236  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:12:20.716777  298514 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:12:20.755786  298514 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:12:20.755852  298514 ssh_runner.go:195] Run: containerd --version
	I0916 11:12:20.781847  298514 ssh_runner.go:195] Run: containerd --version
	I0916 11:12:20.809009  298514 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:12:20.039053  254463 out.go:235]   - Generating certificates and keys ...
	I0916 11:12:20.039156  254463 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:12:20.039244  254463 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:12:20.039360  254463 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0916 11:12:20.039457  254463 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0916 11:12:20.039539  254463 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0916 11:12:20.039627  254463 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0916 11:12:20.039698  254463 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0916 11:12:20.039823  254463 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0916 11:12:20.039902  254463 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0916 11:12:20.039997  254463 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0916 11:12:20.040033  254463 kubeadm.go:310] [certs] Using the existing "sa" key
	I0916 11:12:20.040081  254463 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:12:20.198599  254463 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:12:20.336170  254463 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:12:20.604876  254463 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:12:20.978184  254463 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:12:21.111606  254463 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:12:21.112047  254463 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:12:21.114361  254463 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:12:20.810577  298514 cli_runner.go:164] Run: docker network inspect embed-certs-679624 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:12:20.828560  298514 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0916 11:12:20.832235  298514 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
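	
	That one-liner updates /etc/hosts without editing it in place: it filters out any stale host.minikube.internal entry, appends the current gateway mapping, writes the result to a temp file, and copies it back with sudo (cp rather than mv, so the file keeps its owner and any SELinux label). Spelled out step by step:
	
		grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$   # drop the old entry, if any
		printf '192.168.85.1\thost.minikube.internal\n' >> /tmp/h.$$  # append the gateway mapping
		sudo cp /tmp/h.$$ /etc/hosts                                  # copy back, preserving ownership
	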
	I0916 11:12:20.843224  298514 kubeadm.go:883] updating cluster {Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:12:20.843397  298514 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:12:20.843473  298514 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:12:20.883308  298514 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:12:20.883335  298514 containerd.go:534] Images already preloaded, skipping extraction
	I0916 11:12:20.883408  298514 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:12:20.916241  298514 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:12:20.916264  298514 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:12:20.916274  298514 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I0916 11:12:20.916388  298514 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-679624 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
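The drop-in above uses a standard systemd override idiom: the first, empty ExecStart= clears the ExecStart inherited from kubelet.service so the second one fully replaces it instead of adding a duplicate command. A small sketch that renders such a drop-in (the destination path matches the scp target a few lines below; the flag string here is truncated for brevity):

	package main

	import (
		"fmt"
		"os"
	)

	// writeKubeletDropIn renders a systemd drop-in like the one above.
	// The bare "ExecStart=" line resets the unit's original command so
	// the override replaces it rather than appending a second ExecStart.
	func writeKubeletDropIn(path, execStart string) error {
		unit := "[Unit]\nWants=containerd.service\n\n" +
			"[Service]\nExecStart=\nExecStart=" + execStart + "\n\n[Install]\n"
		return os.WriteFile(path, []byte(unit), 0644)
	}

	func main() {
		err := writeKubeletDropIn(
			"/etc/systemd/system/kubelet.service.d/10-kubeadm.conf",
			// Truncated; the real command carries the full flag set above.
			"/var/lib/minikube/binaries/v1.31.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf")
		fmt.Println(err)
	}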
	I0916 11:12:20.916449  298514 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:12:20.960379  298514 cni.go:84] Creating CNI manager for ""
	I0916 11:12:20.960404  298514 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:12:20.960413  298514 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:12:20.960437  298514 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-679624 NodeName:embed-certs-679624 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:12:20.960568  298514 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-679624"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
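The rendered kubeadm config above is a single file holding four YAML documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch that enumerates the documents with a streaming decoder (gopkg.in/yaml.v3 is an assumed dependency for the sketch; minikube renders this file from templates rather than parsing it back):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3" // assumed dependency
	)

	// kinds lists the apiVersion/kind of every document in a multi-doc
	// YAML file such as /var/tmp/minikube/kubeadm.yaml.new.
	func kinds(path string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		var out []string
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				return out, nil
			} else if err != nil {
				return nil, err
			}
			out = append(out, doc.APIVersion+"/"+doc.Kind)
		}
	}

	func main() {
		ks, _ := kinds("/var/tmp/minikube/kubeadm.yaml.new")
		fmt.Println(ks) // expect the four kinds listed above
	}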
	I0916 11:12:20.960625  298514 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:12:20.969929  298514 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:12:20.970013  298514 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:12:20.978654  298514 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0916 11:12:20.996158  298514 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:12:21.013100  298514 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0916 11:12:21.030051  298514 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:12:21.033503  298514 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:12:21.043688  298514 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:21.121009  298514 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:12:21.135281  298514 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624 for IP: 192.168.85.2
	I0916 11:12:21.135304  298514 certs.go:194] generating shared ca certs ...
	I0916 11:12:21.135323  298514 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:21.135485  298514 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:12:21.135567  298514 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:12:21.135586  298514 certs.go:256] generating profile certs ...
	I0916 11:12:21.135704  298514 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.key
	I0916 11:12:21.135820  298514 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key.67e55e90
	I0916 11:12:21.135876  298514 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.key
	I0916 11:12:21.136070  298514 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:12:21.136136  298514 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:12:21.136151  298514 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:12:21.136187  298514 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:12:21.136223  298514 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:12:21.136257  298514 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:12:21.136316  298514 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:12:21.137074  298514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:12:21.163986  298514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:12:21.194957  298514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:12:21.233365  298514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:12:21.267143  298514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0916 11:12:21.338915  298514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:12:21.363665  298514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:12:21.386988  298514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:12:21.408643  298514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:12:21.432145  298514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:12:21.454258  298514 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:12:21.478050  298514 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:12:21.494824  298514 ssh_runner.go:195] Run: openssl version
	I0916 11:12:21.500255  298514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:12:21.509357  298514 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:21.512630  298514 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:21.512684  298514 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:21.518944  298514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:12:21.527256  298514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:12:21.536217  298514 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:12:21.539702  298514 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:12:21.539792  298514 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:12:21.546128  298514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 11:12:21.554535  298514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:12:21.563193  298514 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:12:21.566906  298514 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:12:21.566963  298514 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:12:21.573381  298514 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
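Each CA installed under /usr/share/ca-certificates above is then linked into /etc/ssl/certs under the name OpenSSL's certificate lookup expects: the subject hash printed by openssl x509 -hash -noout plus a .0 suffix (b5213941.0, 51391683.0, and 3ec20f2e.0 in these runs). A sketch of that install step, shelling out to openssl exactly as the log does:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCA links certPath into /etc/ssl/certs under the
	// <subject-hash>.0 name that OpenSSL's lookup directory expects.
	func installCA(certPath string) error {
		out, err := exec.Command("openssl", "x509",
			"-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // replace a stale link, if any
		return os.Symlink(certPath, link)
	}

	func main() {
		// minikubeCA.pem hashes to b5213941 in the log above.
		fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
	}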
	I0916 11:12:21.581993  298514 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:12:21.585333  298514 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:12:21.591454  298514 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:12:21.597730  298514 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:12:21.604147  298514 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:12:21.610263  298514 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:12:21.616359  298514 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
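The -checkend 86400 probes above make openssl exit non-zero when a certificate expires within the next 24 hours, which is what triggers regeneration. A stdlib-only Go equivalent for one of the certificates checked:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// in the next d (openssl x509 -checkend <seconds> fails in that case).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(soon, err)
	}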
	I0916 11:12:21.622452  298514 kubeadm.go:392] StartCluster: {Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:12:21.622567  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:12:21.622611  298514 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:12:21.666138  298514 cri.go:89] found id: "3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d"
	I0916 11:12:21.666163  298514 cri.go:89] found id: "f590d121c5d6d1937cefe41166b62f9f511f34157bec2b92d38767e94a5b8ba8"
	I0916 11:12:21.666169  298514 cri.go:89] found id: "2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6"
	I0916 11:12:21.666178  298514 cri.go:89] found id: "c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae"
	I0916 11:12:21.666182  298514 cri.go:89] found id: "debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a"
	I0916 11:12:21.666188  298514 cri.go:89] found id: "7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10"
	I0916 11:12:21.666193  298514 cri.go:89] found id: "98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32"
	I0916 11:12:21.666198  298514 cri.go:89] found id: "e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0"
	I0916 11:12:21.666202  298514 cri.go:89] found id: ""
	I0916 11:12:21.666251  298514 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0916 11:12:21.679275  298514 cri.go:116] JSON = null
	W0916 11:12:21.679333  298514 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
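The warning above comes from a consistency check: crictl reported 8 kube-system containers while runc list -f json printed null (zero entries), so the unpause step is skipped. A sketch of that cross-check using the same two commands the log runs:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	// pausedMismatch reproduces the check behind the warning above:
	// count the containers crictl sees versus the entries runc reports.
	func pausedMismatch() (crictlN, runcN int, err error) {
		out, err := exec.Command("crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return 0, 0, err
		}
		ids := strings.Fields(string(out))

		raw, err := exec.Command("runc",
			"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
		if err != nil {
			return 0, 0, err
		}
		var list []json.RawMessage // JSON null decodes to a nil slice, as in the log
		if err := json.Unmarshal(raw, &list); err != nil {
			return 0, 0, err
		}
		return len(ids), len(list), nil
	}

	func main() {
		c, r, err := pausedMismatch()
		fmt.Printf("crictl=%d runc=%d err=%v\n", c, r, err)
	}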
	I0916 11:12:21.679383  298514 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:12:21.689488  298514 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 11:12:21.689507  298514 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 11:12:21.689554  298514 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 11:12:21.701264  298514 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:12:21.702298  298514 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-679624" does not appear in /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:12:21.702999  298514 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3687/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-679624" cluster setting kubeconfig missing "embed-certs-679624" context setting]
	I0916 11:12:21.704108  298514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
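The repair path adds the missing cluster and context stanzas for embed-certs-679624 to the shared kubeconfig before rewriting it under the file lock acquired above. A rough sketch with client-go's clientcmd helpers (an assumed dependency; field handling is pared down to the two missing entries):

	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)

	// repairKubeconfig inserts cluster and context entries for name if
	// they are missing, which is what the "needs updating" path does.
	func repairKubeconfig(path, name, server string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		if _, ok := cfg.Clusters[name]; !ok {
			c := clientcmdapi.NewCluster()
			c.Server = server
			cfg.Clusters[name] = c
		}
		if _, ok := cfg.Contexts[name]; !ok {
			ctx := clientcmdapi.NewContext()
			ctx.Cluster = name
			cfg.Contexts[name] = ctx
		}
		return clientcmd.WriteToFile(*cfg, path)
	}

	func main() {
		_ = repairKubeconfig(
			"/home/jenkins/minikube-integration/19651-3687/kubeconfig",
			"embed-certs-679624", "https://192.168.85.2:8443")
	}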
	I0916 11:12:21.706093  298514 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 11:12:21.731632  298514 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0916 11:12:21.731672  298514 kubeadm.go:597] duration metric: took 42.159543ms to restartPrimaryControlPlane
	I0916 11:12:21.731683  298514 kubeadm.go:394] duration metric: took 109.240751ms to StartCluster
	I0916 11:12:21.731701  298514 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:21.731827  298514 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:12:21.734198  298514 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:21.734515  298514 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:12:21.734738  298514 config.go:182] Loaded profile config "embed-certs-679624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:21.734796  298514 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:12:21.734876  298514 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-679624"
	I0916 11:12:21.734899  298514 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-679624"
	W0916 11:12:21.734911  298514 addons.go:243] addon storage-provisioner should already be in state true
	I0916 11:12:21.734946  298514 host.go:66] Checking if "embed-certs-679624" exists ...
	I0916 11:12:21.735437  298514 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:12:21.735521  298514 addons.go:69] Setting default-storageclass=true in profile "embed-certs-679624"
	I0916 11:12:21.735551  298514 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-679624"
	I0916 11:12:21.735680  298514 addons.go:69] Setting metrics-server=true in profile "embed-certs-679624"
	I0916 11:12:21.735719  298514 addons.go:234] Setting addon metrics-server=true in "embed-certs-679624"
	W0916 11:12:21.735768  298514 addons.go:243] addon metrics-server should already be in state true
	I0916 11:12:21.735733  298514 addons.go:69] Setting dashboard=true in profile "embed-certs-679624"
	I0916 11:12:21.735805  298514 addons.go:234] Setting addon dashboard=true in "embed-certs-679624"
	I0916 11:12:21.735808  298514 host.go:66] Checking if "embed-certs-679624" exists ...
	W0916 11:12:21.735813  298514 addons.go:243] addon dashboard should already be in state true
	I0916 11:12:21.735846  298514 host.go:66] Checking if "embed-certs-679624" exists ...
	I0916 11:12:21.735851  298514 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:12:21.736367  298514 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:12:21.736386  298514 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:12:21.739055  298514 out.go:177] * Verifying Kubernetes components...
	I0916 11:12:21.740893  298514 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:21.771808  298514 addons.go:234] Setting addon default-storageclass=true in "embed-certs-679624"
	W0916 11:12:21.771833  298514 addons.go:243] addon default-storageclass should already be in state true
	I0916 11:12:21.771862  298514 host.go:66] Checking if "embed-certs-679624" exists ...
	I0916 11:12:21.772580  298514 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:12:21.774038  298514 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:12:21.775402  298514 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0916 11:12:21.775468  298514 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:12:21.775486  298514 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:12:21.775543  298514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:12:21.778209  298514 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0916 11:12:21.779443  298514 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0916 11:12:17.757779  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:19.758822  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:21.765067  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:21.118206  254463 out.go:235]   - Booting up control plane ...
	I0916 11:12:21.118341  254463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:12:21.118439  254463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:12:21.118529  254463 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:12:21.131841  254463 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:12:21.138510  254463 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:12:21.138574  254463 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:12:21.239463  254463 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:12:21.239627  254463 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:12:22.241040  254463 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001515279s
	I0916 11:12:22.241165  254463 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:12:20.739010  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:22.739672  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:21.779521  298514 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0916 11:12:21.779534  298514 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0916 11:12:21.779596  298514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:12:21.780589  298514 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 11:12:21.780610  298514 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 11:12:21.780666  298514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:12:21.799696  298514 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:12:21.799690  298514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:12:21.799718  298514 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:12:21.799807  298514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:12:21.805824  298514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:12:21.807818  298514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:12:21.818551  298514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
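The container-inspect template used above, (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort, resolves the host port Docker published for the guest's SSH daemon, which is the port the sshutil clients then dial as 127.0.0.1:33083. A sketch of that lookup:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort returns the host port Docker published for the
	// container's 22/tcp, using the same --format template as the log.
	func hostSSHPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect",
			"-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("embed-certs-679624")
		fmt.Println(port, err) // e.g. "33083" per the sshutil lines above
	}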
	I0916 11:12:22.045003  298514 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:12:22.059050  298514 node_ready.go:35] waiting up to 6m0s for node "embed-certs-679624" to be "Ready" ...
	I0916 11:12:22.220339  298514 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 11:12:22.220369  298514 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0916 11:12:22.223370  298514 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:12:22.225036  298514 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0916 11:12:22.225056  298514 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0916 11:12:22.231210  298514 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:12:22.334916  298514 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 11:12:22.334948  298514 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 11:12:22.338418  298514 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0916 11:12:22.338444  298514 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0916 11:12:22.443289  298514 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:12:22.443322  298514 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 11:12:22.530988  298514 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0916 11:12:22.531024  298514 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0916 11:12:22.630056  298514 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 11:12:22.641921  298514 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:12:22.641959  298514 retry.go:31] will retry after 300.10952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0916 11:12:22.642030  298514 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:12:22.642043  298514 retry.go:31] will retry after 184.56131ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
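These early applies fail because the apiserver on localhost:8443 is not answering yet, so each manifest is retried after a short randomized delay and, below, re-applied with --force once the server responds. A generic sketch of the retry loop (passing --kubeconfig instead of the sudo KUBECONFIG=... wrapper is a simplification):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryApply runs kubectl apply until it succeeds or attempts run out,
	// sleeping a short randomized interval between tries, as retry.go does.
	func retryApply(kubectl, manifest string, attempts int) error {
		var err error
		for i := 0; i < attempts; i++ {
			err = exec.Command(kubectl,
				"--kubeconfig", "/var/lib/minikube/kubeconfig",
				"apply", "-f", manifest).Run()
			if err == nil {
				return nil
			}
			// e.g. "will retry after 300.10952ms" in the log above.
			time.Sleep(time.Duration(100+rand.Intn(400)) * time.Millisecond)
		}
		return fmt.Errorf("apply %s: %w", manifest, err)
	}

	func main() {
		err := retryApply("/var/lib/minikube/binaries/v1.31.1/kubectl",
			"/etc/kubernetes/addons/storageclass.yaml", 5)
		fmt.Println(err)
	}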
	I0916 11:12:22.651081  298514 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0916 11:12:22.651110  298514 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0916 11:12:22.827631  298514 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:12:22.833294  298514 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0916 11:12:22.833328  298514 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0916 11:12:22.942868  298514 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:12:23.065016  298514 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0916 11:12:23.065045  298514 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0916 11:12:23.146837  298514 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0916 11:12:23.146862  298514 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0916 11:12:23.255613  298514 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0916 11:12:23.255653  298514 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0916 11:12:23.344343  298514 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:12:23.344371  298514 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0916 11:12:23.426109  298514 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:12:26.743167  254463 kubeadm.go:310] [api-check] The API server is healthy after 4.502134553s
	I0916 11:12:26.759183  254463 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:12:26.772171  254463 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:12:26.792507  254463 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:12:26.792749  254463 kubeadm.go:310] [mark-control-plane] Marking the node kubernetes-upgrade-311911 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:12:26.800581  254463 kubeadm.go:310] [bootstrap-token] Using token: g34269.45j9r3eiiza32z5r
	I0916 11:12:24.258200  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:26.259384  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:26.758238  283294 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:26.758261  283294 pod_ready.go:82] duration metric: took 1m11.006865105s for pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:26.758271  283294 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w2kp4" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:26.763092  283294 pod_ready.go:93] pod "kube-proxy-w2kp4" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:26.763116  283294 pod_ready.go:82] duration metric: took 4.838602ms for pod "kube-proxy-w2kp4" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:26.763128  283294 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:26.802084  254463 out.go:235]   - Configuring RBAC rules ...
	I0916 11:12:26.802258  254463 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:12:26.805621  254463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:12:26.811293  254463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:12:26.813987  254463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:12:26.816540  254463 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:12:26.819598  254463 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:12:27.150114  254463 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:12:27.579963  254463 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:12:25.539200  298514 node_ready.go:49] node "embed-certs-679624" has status "Ready":"True"
	I0916 11:12:25.539234  298514 node_ready.go:38] duration metric: took 3.480144261s for node "embed-certs-679624" to be "Ready" ...
	I0916 11:12:25.539247  298514 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:12:25.555161  298514 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:25.625435  298514 pod_ready.go:93] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:25.625465  298514 pod_ready.go:82] duration metric: took 70.268031ms for pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:25.625491  298514 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:25.633067  298514 pod_ready.go:93] pod "etcd-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:25.633092  298514 pod_ready.go:82] duration metric: took 7.591654ms for pod "etcd-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:25.633107  298514 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:25.643244  298514 pod_ready.go:93] pod "kube-apiserver-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:25.643277  298514 pod_ready.go:82] duration metric: took 10.161229ms for pod "kube-apiserver-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:25.643290  298514 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:25.728311  298514 pod_ready.go:93] pod "kube-controller-manager-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:25.728341  298514 pod_ready.go:82] duration metric: took 85.041923ms for pod "kube-controller-manager-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:25.728355  298514 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bt6k2" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:25.742462  298514 pod_ready.go:93] pod "kube-proxy-bt6k2" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:25.742493  298514 pod_ready.go:82] duration metric: took 14.128304ms for pod "kube-proxy-bt6k2" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:25.742503  298514 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:26.143085  298514 pod_ready.go:93] pod "kube-scheduler-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:26.143114  298514 pod_ready.go:82] duration metric: took 400.60243ms for pod "kube-scheduler-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:26.143128  298514 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:27.839854  298514 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.209743599s)
	I0916 11:12:27.839906  298514 addons.go:475] Verifying addon metrics-server=true in "embed-certs-679624"
	I0916 11:12:28.039689  298514 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.212011823s)
	I0916 11:12:28.039767  298514 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (5.096867537s)
	I0916 11:12:28.126834  298514 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.70065563s)
	I0916 11:12:28.128882  298514 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-679624 addons enable metrics-server
	
	I0916 11:12:28.130484  298514 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0916 11:12:28.150212  254463 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:12:28.151092  254463 kubeadm.go:310] 
	I0916 11:12:28.151154  254463 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:12:28.151159  254463 kubeadm.go:310] 
	I0916 11:12:28.151218  254463 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:12:28.151222  254463 kubeadm.go:310] 
	I0916 11:12:28.151269  254463 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:12:28.151335  254463 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:12:28.151380  254463 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:12:28.151384  254463 kubeadm.go:310] 
	I0916 11:12:28.151429  254463 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:12:28.151437  254463 kubeadm.go:310] 
	I0916 11:12:28.151477  254463 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:12:28.151483  254463 kubeadm.go:310] 
	I0916 11:12:28.151565  254463 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:12:28.151683  254463 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:12:28.151797  254463 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:12:28.151806  254463 kubeadm.go:310] 
	I0916 11:12:28.151888  254463 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:12:28.151971  254463 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:12:28.151980  254463 kubeadm.go:310] 
	I0916 11:12:28.152070  254463 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token g34269.45j9r3eiiza32z5r \
	I0916 11:12:28.152198  254463 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:12:28.152228  254463 kubeadm.go:310] 	--control-plane 
	I0916 11:12:28.152238  254463 kubeadm.go:310] 
	I0916 11:12:28.152340  254463 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:12:28.152349  254463 kubeadm.go:310] 
	I0916 11:12:28.152444  254463 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token g34269.45j9r3eiiza32z5r \
	I0916 11:12:28.152588  254463 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 11:12:28.156094  254463 kubeadm.go:310] W0916 11:12:19.950540    9248 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:12:28.156387  254463 kubeadm.go:310] W0916 11:12:19.951167    9248 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:12:28.156627  254463 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:12:28.156759  254463 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:12:28.156797  254463 cni.go:84] Creating CNI manager for ""
	I0916 11:12:28.156809  254463 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:12:28.158708  254463 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:12:28.159954  254463 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:12:28.164065  254463 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:12:28.164083  254463 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:12:28.183685  254463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:12:28.429780  254463 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:12:28.429834  254463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:28.429883  254463 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kubernetes-upgrade-311911 minikube.k8s.io/updated_at=2024_09_16T11_12_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=kubernetes-upgrade-311911 minikube.k8s.io/primary=true
	I0916 11:12:28.437830  254463 ops.go:34] apiserver oom_adj: -16
	I0916 11:12:28.544271  254463 kubeadm.go:1113] duration metric: took 114.485399ms to wait for elevateKubeSystemPrivileges
	I0916 11:12:28.544327  254463 kubeadm.go:394] duration metric: took 4m14.696261115s to StartCluster
	I0916 11:12:28.544360  254463 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:28.544442  254463 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:12:28.546999  254463 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:28.547303  254463 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:12:28.547529  254463 config.go:182] Loaded profile config "kubernetes-upgrade-311911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:28.547599  254463 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:12:28.547688  254463 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-311911"
	I0916 11:12:28.547707  254463 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-311911"
	W0916 11:12:28.547715  254463 addons.go:243] addon storage-provisioner should already be in state true
	I0916 11:12:28.547795  254463 host.go:66] Checking if "kubernetes-upgrade-311911" exists ...
	I0916 11:12:28.548307  254463 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-311911 --format={{.State.Status}}
	I0916 11:12:28.548388  254463 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-311911"
	I0916 11:12:28.548426  254463 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-311911"
	I0916 11:12:28.548728  254463 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-311911 --format={{.State.Status}}
	I0916 11:12:28.549587  254463 out.go:177] * Verifying Kubernetes components...
	I0916 11:12:28.551168  254463 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:28.578461  254463 kapi.go:59] client config for kubernetes-upgrade-311911: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kubernetes-upgrade-311911/client.crt", KeyFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kubernetes-upgrade-311911/client.key", CAFile:"/home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1f6fca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0916 11:12:28.578809  254463 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-311911"
	W0916 11:12:28.578822  254463 addons.go:243] addon default-storageclass should already be in state true
	I0916 11:12:28.578844  254463 host.go:66] Checking if "kubernetes-upgrade-311911" exists ...
	I0916 11:12:28.579146  254463 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:12:25.240156  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:27.739921  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:28.579213  254463 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-311911 --format={{.State.Status}}
	I0916 11:12:28.581024  254463 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:12:28.581042  254463 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:12:28.581086  254463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-311911
	I0916 11:12:28.606690  254463 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:12:28.606713  254463 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:12:28.606772  254463 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-311911
	I0916 11:12:28.608379  254463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/kubernetes-upgrade-311911/id_rsa Username:docker}
	I0916 11:12:28.630283  254463 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33048 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/kubernetes-upgrade-311911/id_rsa Username:docker}
	I0916 11:12:28.681044  254463 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:12:28.692424  254463 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:12:28.692507  254463 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:12:28.703590  254463 api_server.go:72] duration metric: took 156.246274ms to wait for apiserver process to appear ...
	I0916 11:12:28.703616  254463 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:12:28.703637  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:12:28.708836  254463 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0916 11:12:28.714525  254463 api_server.go:141] control plane version: v1.31.1
	I0916 11:12:28.714550  254463 api_server.go:131] duration metric: took 10.926614ms to wait for apiserver health ...
	I0916 11:12:28.714558  254463 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:12:28.714609  254463 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0916 11:12:28.714622  254463 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0916 11:12:28.719796  254463 system_pods.go:59] 4 kube-system pods found
	I0916 11:12:28.719833  254463 system_pods.go:61] "etcd-kubernetes-upgrade-311911" [b404bfeb-d2b5-4235-82cf-55ee5aaeab3b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0916 11:12:28.719846  254463 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-311911" [40889aea-2e9a-47ba-9fcb-a2d75c92be81] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0916 11:12:28.719856  254463 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-311911" [a8392605-29c3-4e70-be09-1ebab57f425b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0916 11:12:28.719880  254463 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-311911" [8ad3fd40-a487-462f-871c-bc9b71175a72] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0916 11:12:28.719893  254463 system_pods.go:74] duration metric: took 5.328305ms to wait for pod list to return data ...
	I0916 11:12:28.719906  254463 kubeadm.go:582] duration metric: took 172.567297ms to wait for: map[apiserver:true system_pods:true]
	I0916 11:12:28.719924  254463 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:12:28.723284  254463 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:12:28.723309  254463 node_conditions.go:123] node cpu capacity is 8
	I0916 11:12:28.723323  254463 node_conditions.go:105] duration metric: took 3.393267ms to run NodePressure ...
	I0916 11:12:28.723337  254463 start.go:241] waiting for startup goroutines ...
	I0916 11:12:28.740772  254463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:12:28.747951  254463 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:12:29.064862  254463 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:12:29.066331  254463 addons.go:510] duration metric: took 518.73081ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:12:29.066383  254463 start.go:246] waiting for cluster config update ...
	I0916 11:12:29.066398  254463 start.go:255] writing updated cluster config ...
	I0916 11:12:29.066695  254463 ssh_runner.go:195] Run: rm -f paused
	I0916 11:12:29.072958  254463 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-311911" cluster and "default" namespace by default
	E0916 11:12:29.074458  254463 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	13dbfcc15e3dc       6bab7719df100       7 seconds ago       Running             kube-apiserver            5                   2c77617d724c4       kube-apiserver-kubernetes-upgrade-311911
	baffe1bbc3c9a       175ffd71cce3d       7 seconds ago       Running             kube-controller-manager   5                   e6f0ba33f888d       kube-controller-manager-kubernetes-upgrade-311911
	9a64b67821c94       2e96e5913fc06       7 seconds ago       Running             etcd                      0                   34b19684f622d       etcd-kubernetes-upgrade-311911
	14b5dbba5de3e       9aa1fad941575       7 seconds ago       Running             kube-scheduler            1                   da8db23b0d703       kube-scheduler-kubernetes-upgrade-311911
	
	
	==> containerd <==
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.474592512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.478151031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.478237374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.478257562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.478460596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.577795729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-311911,Uid:90c9c0bc67516c1cd0bbd0ea868b188f,Namespace:kube-system,Attempt:0,} returns sandbox id \"da8db23b0d7035ebbeaae7e9be047a05394e5bb0c2f9cc7a6c79ae5e2d49bb7d\""
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.625127420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-kubernetes-upgrade-311911,Uid:12ba5ab16ea139ac3323b4d1aadb4553,Namespace:kube-system,Attempt:0,} returns sandbox id \"34b19684f622dba7fbd99c1c19f883a8572bab93d75e729dec7c3900fcb03026\""
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.626161451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-311911,Uid:d6e2db60e88c92cf4eff726f6bf0101c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6f0ba33f888de18504dd12d196daf32aa30e1d14408c76c1aa98ef0be27b9ad\""
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.630520828Z" level=info msg="CreateContainer within sandbox \"da8db23b0d7035ebbeaae7e9be047a05394e5bb0c2f9cc7a6c79ae5e2d49bb7d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.634693257Z" level=info msg="CreateContainer within sandbox \"e6f0ba33f888de18504dd12d196daf32aa30e1d14408c76c1aa98ef0be27b9ad\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:5,}"
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.634800027Z" level=info msg="CreateContainer within sandbox \"34b19684f622dba7fbd99c1c19f883a8572bab93d75e729dec7c3900fcb03026\" for container &ContainerMetadata{Name:etcd,Attempt:0,}"
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.651123928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-311911,Uid:98b93c0a122e4221ebb4df1c8bcda29c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c77617d724c4807bd7e04abd1c64054f4194c7105e8ffe6143f7af55378e6f0\""
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.657857968Z" level=info msg="CreateContainer within sandbox \"2c77617d724c4807bd7e04abd1c64054f4194c7105e8ffe6143f7af55378e6f0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:5,}"
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.668597522Z" level=info msg="CreateContainer within sandbox \"da8db23b0d7035ebbeaae7e9be047a05394e5bb0c2f9cc7a6c79ae5e2d49bb7d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"14b5dbba5de3e50152d185229f54bdfbaabee87645035b1070bb7c873877e4cd\""
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.669416828Z" level=info msg="StartContainer for \"14b5dbba5de3e50152d185229f54bdfbaabee87645035b1070bb7c873877e4cd\""
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.673447514Z" level=info msg="CreateContainer within sandbox \"34b19684f622dba7fbd99c1c19f883a8572bab93d75e729dec7c3900fcb03026\" for &ContainerMetadata{Name:etcd,Attempt:0,} returns container id \"9a64b67821c94a7b5c9c8de0667b1b0be7a15004e31705d905d4518fcf6557eb\""
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.673914027Z" level=info msg="CreateContainer within sandbox \"e6f0ba33f888de18504dd12d196daf32aa30e1d14408c76c1aa98ef0be27b9ad\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:5,} returns container id \"baffe1bbc3c9a537b5b6e16d84a94fda3b6967141f22f24a17084a38933d5b02\""
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.674259375Z" level=info msg="StartContainer for \"9a64b67821c94a7b5c9c8de0667b1b0be7a15004e31705d905d4518fcf6557eb\""
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.674652011Z" level=info msg="StartContainer for \"baffe1bbc3c9a537b5b6e16d84a94fda3b6967141f22f24a17084a38933d5b02\""
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.706029037Z" level=info msg="CreateContainer within sandbox \"2c77617d724c4807bd7e04abd1c64054f4194c7105e8ffe6143f7af55378e6f0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:5,} returns container id \"13dbfcc15e3dc181328ce4e2227324bf24d93b87c80d7cf9e58db0aeaa278619\""
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.706768268Z" level=info msg="StartContainer for \"13dbfcc15e3dc181328ce4e2227324bf24d93b87c80d7cf9e58db0aeaa278619\""
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.781146727Z" level=info msg="StartContainer for \"14b5dbba5de3e50152d185229f54bdfbaabee87645035b1070bb7c873877e4cd\" returns successfully"
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.860278707Z" level=info msg="StartContainer for \"baffe1bbc3c9a537b5b6e16d84a94fda3b6967141f22f24a17084a38933d5b02\" returns successfully"
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.863346155Z" level=info msg="StartContainer for \"9a64b67821c94a7b5c9c8de0667b1b0be7a15004e31705d905d4518fcf6557eb\" returns successfully"
	Sep 16 11:12:22 kubernetes-upgrade-311911 containerd[643]: time="2024-09-16T11:12:22.928703943Z" level=info msg="StartContainer for \"13dbfcc15e3dc181328ce4e2227324bf24d93b87c80d7cf9e58db0aeaa278619\" returns successfully"
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-311911
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-311911
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=kubernetes-upgrade-311911
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_12_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:12:25 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-311911
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:12:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:12:27 +0000   Mon, 16 Sep 2024 11:12:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:12:27 +0000   Mon, 16 Sep 2024 11:12:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:12:27 +0000   Mon, 16 Sep 2024 11:12:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:12:27 +0000   Mon, 16 Sep 2024 11:12:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    kubernetes-upgrade-311911
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5df8a8c934f41dab0bbad0d61124ce0
	  System UUID:                362e922e-0ef3-4134-8937-6cede3a03b01
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-311911                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3s
	  kube-system                 kube-apiserver-kubernetes-upgrade-311911             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-311911    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-kubernetes-upgrade-311911             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (8%)   0 (0%)
	  memory             100Mi (0%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age              From     Message
	  ----     ------                   ----             ----     -------
	  Normal   NodeHasSufficientMemory  8s (x8 over 8s)  kubelet  Node kubernetes-upgrade-311911 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x7 over 8s)  kubelet  Node kubernetes-upgrade-311911 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x7 over 8s)  kubelet  Node kubernetes-upgrade-311911 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  8s               kubelet  Updated Node Allocatable limit across pods
	  Normal   Starting                 3s               kubelet  Starting kubelet.
	  Warning  CgroupV1                 3s               kubelet  Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  3s               kubelet  Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3s               kubelet  Node kubernetes-upgrade-311911 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3s               kubelet  Node kubernetes-upgrade-311911 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3s               kubelet  Node kubernetes-upgrade-311911 status is now: NodeHasSufficientPID
	
	
	==> dmesg <==
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +4.063628] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000008] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000030] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000007] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003992] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +8.187268] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000063] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000005] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003939] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[Sep16 11:12] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000007] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000229] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000004] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000387] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000003] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +1.007060] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000007] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	
	
	==> etcd [9a64b67821c94a7b5c9c8de0667b1b0be7a15004e31705d905d4518fcf6557eb] <==
	{"level":"info","ts":"2024-09-16T11:12:22.969225Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:12:22.969563Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-09-16T11:12:22.969614Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-09-16T11:12:22.970831Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:12:22.970959Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:12:23.421352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:12:23.421432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:12:23.421479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2024-09-16T11:12:23.421498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:23.421507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:23.421520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:23.421530Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:23.432024Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:kubernetes-upgrade-311911 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:12:23.432055Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:23.432304Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:23.432613Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:23.432747Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:23.433655Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:23.434706Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:12:23.432112Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:23.435819Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:23.435911Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:23.435944Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:23.436957Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:23.441688Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 11:12:30 up 54 min,  0 users,  load average: 2.75, 3.16, 2.25
	Linux kubernetes-upgrade-311911 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [13dbfcc15e3dc181328ce4e2227324bf24d93b87c80d7cf9e58db0aeaa278619] <==
	I0916 11:12:25.120374       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:12:25.120659       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:12:25.120695       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:12:25.121425       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:12:25.120855       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 11:12:25.121475       1 shared_informer.go:320] Caches are synced for configmaps
	E0916 11:12:25.125718       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0916 11:12:25.143436       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 11:12:25.143466       1 policy_source.go:224] refreshing policies
	E0916 11:12:25.180837       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0916 11:12:25.225150       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 11:12:25.335239       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:12:25.971859       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:12:25.975726       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:12:25.975794       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:12:26.430717       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:12:26.472275       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:12:26.529933       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:12:26.538603       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0916 11:12:26.540126       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:12:26.545335       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:12:27.047693       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:12:27.568283       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:12:27.578474       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:12:27.586808       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [baffe1bbc3c9a537b5b6e16d84a94fda3b6967141f22f24a17084a38933d5b02] <==
	I0916 11:12:29.396452       1 controllermanager.go:797] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0916 11:12:29.396488       1 certificate_controller.go:120] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0916 11:12:29.396506       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0916 11:12:29.396521       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0916 11:12:29.396460       1 dynamic_serving_content.go:135] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0916 11:12:29.546679       1 controllermanager.go:797] "Started controller" controller="persistentvolume-binder-controller"
	I0916 11:12:29.546809       1 pv_controller_base.go:308] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0916 11:12:29.546828       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0916 11:12:29.696066       1 controllermanager.go:797] "Started controller" controller="persistentvolume-protection-controller"
	I0916 11:12:29.696124       1 pv_protection_controller.go:81] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0916 11:12:29.696134       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0916 11:12:29.846598       1 controllermanager.go:797] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0916 11:12:29.846619       1 controllermanager.go:749] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0916 11:12:29.846670       1 pvc_protection_controller.go:105] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0916 11:12:29.846682       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0916 11:12:29.996202       1 controllermanager.go:797] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0916 11:12:29.996275       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0916 11:12:29.996303       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0916 11:12:30.146624       1 controllermanager.go:797] "Started controller" controller="ttl-controller"
	I0916 11:12:30.146702       1 ttl_controller.go:127] "Starting TTL controller" logger="ttl-controller"
	I0916 11:12:30.146715       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0916 11:12:30.296027       1 controllermanager.go:797] "Started controller" controller="token-cleaner-controller"
	I0916 11:12:30.296105       1 tokencleaner.go:117] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0916 11:12:30.296118       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0916 11:12:30.296131       1 shared_informer.go:320] Caches are synced for token_cleaner
	
	
	==> kube-scheduler [14b5dbba5de3e50152d185229f54bdfbaabee87645035b1070bb7c873877e4cd] <==
	E0916 11:12:25.134427       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0916 11:12:25.134486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:25.134586       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:25.134615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:25.134644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:12:25.134667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:25.134706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:12:25.134729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:25.941001       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:12:25.941053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:25.956927       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:25.956980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:25.986393       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:12:25.986445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:25.990813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:12:25.990853       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:26.035207       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:12:26.035261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:26.223562       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:12:26.223609       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:26.269278       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:26.269340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:26.468848       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:12:26.468916       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:12:28.431347       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.742534    9671 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.752811    9671 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.752928    9671 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.890630    9671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/90c9c0bc67516c1cd0bbd0ea868b188f-kubeconfig\") pod \"kube-scheduler-kubernetes-upgrade-311911\" (UID: \"90c9c0bc67516c1cd0bbd0ea868b188f\") " pod="kube-system/kube-scheduler-kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.890674    9671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98b93c0a122e4221ebb4df1c8bcda29c-k8s-certs\") pod \"kube-apiserver-kubernetes-upgrade-311911\" (UID: \"98b93c0a122e4221ebb4df1c8bcda29c\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.890697    9671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98b93c0a122e4221ebb4df1c8bcda29c-usr-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-311911\" (UID: \"98b93c0a122e4221ebb4df1c8bcda29c\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.890716    9671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d6e2db60e88c92cf4eff726f6bf0101c-flexvolume-dir\") pod \"kube-controller-manager-kubernetes-upgrade-311911\" (UID: \"d6e2db60e88c92cf4eff726f6bf0101c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.890736    9671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d6e2db60e88c92cf4eff726f6bf0101c-k8s-certs\") pod \"kube-controller-manager-kubernetes-upgrade-311911\" (UID: \"d6e2db60e88c92cf4eff726f6bf0101c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.890749    9671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/12ba5ab16ea139ac3323b4d1aadb4553-etcd-data\") pod \"etcd-kubernetes-upgrade-311911\" (UID: \"12ba5ab16ea139ac3323b4d1aadb4553\") " pod="kube-system/etcd-kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.890762    9671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98b93c0a122e4221ebb4df1c8bcda29c-etc-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-311911\" (UID: \"98b93c0a122e4221ebb4df1c8bcda29c\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.890777    9671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98b93c0a122e4221ebb4df1c8bcda29c-usr-local-share-ca-certificates\") pod \"kube-apiserver-kubernetes-upgrade-311911\" (UID: \"98b93c0a122e4221ebb4df1c8bcda29c\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.890794    9671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6e2db60e88c92cf4eff726f6bf0101c-etc-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-311911\" (UID: \"d6e2db60e88c92cf4eff726f6bf0101c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.890807    9671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/12ba5ab16ea139ac3323b4d1aadb4553-etcd-certs\") pod \"etcd-kubernetes-upgrade-311911\" (UID: \"12ba5ab16ea139ac3323b4d1aadb4553\") " pod="kube-system/etcd-kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.890820    9671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d6e2db60e88c92cf4eff726f6bf0101c-ca-certs\") pod \"kube-controller-manager-kubernetes-upgrade-311911\" (UID: \"d6e2db60e88c92cf4eff726f6bf0101c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.890838    9671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d6e2db60e88c92cf4eff726f6bf0101c-kubeconfig\") pod \"kube-controller-manager-kubernetes-upgrade-311911\" (UID: \"d6e2db60e88c92cf4eff726f6bf0101c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.890852    9671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6e2db60e88c92cf4eff726f6bf0101c-usr-local-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-311911\" (UID: \"d6e2db60e88c92cf4eff726f6bf0101c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.890872    9671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98b93c0a122e4221ebb4df1c8bcda29c-ca-certs\") pod \"kube-apiserver-kubernetes-upgrade-311911\" (UID: \"98b93c0a122e4221ebb4df1c8bcda29c\") " pod="kube-system/kube-apiserver-kubernetes-upgrade-311911"
	Sep 16 11:12:27 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:27.890885    9671 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6e2db60e88c92cf4eff726f6bf0101c-usr-share-ca-certificates\") pod \"kube-controller-manager-kubernetes-upgrade-311911\" (UID: \"d6e2db60e88c92cf4eff726f6bf0101c\") " pod="kube-system/kube-controller-manager-kubernetes-upgrade-311911"
	Sep 16 11:12:28 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:28.477708    9671 apiserver.go:52] "Watching apiserver"
	Sep 16 11:12:28 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:28.489010    9671 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Sep 16 11:12:28 kubernetes-upgrade-311911 kubelet[9671]: E0916 11:12:28.562206    9671 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-311911\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-311911"
	Sep 16 11:12:28 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:28.584825    9671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-kubernetes-upgrade-311911" podStartSLOduration=1.584802984 podStartE2EDuration="1.584802984s" podCreationTimestamp="2024-09-16 11:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:12:28.584679589 +0000 UTC m=+1.184757443" watchObservedRunningTime="2024-09-16 11:12:28.584802984 +0000 UTC m=+1.184880837"
	Sep 16 11:12:28 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:28.622194    9671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-kubernetes-upgrade-311911" podStartSLOduration=1.6221672649999999 podStartE2EDuration="1.622167265s" podCreationTimestamp="2024-09-16 11:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:12:28.59643688 +0000 UTC m=+1.196514734" watchObservedRunningTime="2024-09-16 11:12:28.622167265 +0000 UTC m=+1.222245118"
	Sep 16 11:12:28 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:28.638848    9671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-kubernetes-upgrade-311911" podStartSLOduration=1.638827022 podStartE2EDuration="1.638827022s" podCreationTimestamp="2024-09-16 11:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:12:28.622453473 +0000 UTC m=+1.222531329" watchObservedRunningTime="2024-09-16 11:12:28.638827022 +0000 UTC m=+1.238904876"
	Sep 16 11:12:28 kubernetes-upgrade-311911 kubelet[9671]: I0916 11:12:28.638993    9671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-kubernetes-upgrade-311911" podStartSLOduration=1.638981383 podStartE2EDuration="1.638981383s" podCreationTimestamp="2024-09-16 11:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:12:28.638808693 +0000 UTC m=+1.238886547" watchObservedRunningTime="2024-09-16 11:12:28.638981383 +0000 UTC m=+1.239059237"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-311911 -n kubernetes-upgrade-311911
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-311911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-311911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (523.346µs)
helpers_test.go:263: kubectl --context kubernetes-upgrade-311911 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:175: Cleaning up "kubernetes-upgrade-311911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-311911
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-311911: (2.209262797s)
--- FAIL: TestKubernetesUpgrade (322.37s)
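
The recurring failure in this run, "fork/exec /usr/local/bin/kubectl: exec format error", means execve(2) rejected the kubectl binary before it ever started; on Linux this typically points to a truncated download or a binary built for a different architecture than this amd64 host. A minimal diagnostic sketch, not part of the test suite (the path is copied from the failures above; everything else is illustrative), that inspects the binary's ELF header with Go's debug/elf package:

	package main

	import (
		"debug/elf"
		"fmt"
		"os"
	)

	func main() {
		// Path copied from the failing invocations above; adjust for other hosts.
		const path = "/usr/local/bin/kubectl"

		f, err := elf.Open(path)
		if err != nil {
			// A truncated file, an HTML error page saved as "kubectl", or a
			// non-ELF (e.g. Mach-O) binary all fail here, the same conditions
			// under which execve(2) returns ENOEXEC ("exec format error").
			fmt.Fprintf(os.Stderr, "%s is not a valid ELF binary: %v\n", path, err)
			os.Exit(1)
		}
		defer f.Close()

		// Expect ELFCLASS64 / EM_X86_64 on this amd64 test host.
		fmt.Printf("class=%s machine=%s type=%s\n", f.Class, f.Machine, f.Type)
	}

Against a healthy amd64 kubectl this prints class=ELFCLASS64 machine=EM_X86_64 type=ET_EXEC (or ET_DYN for a PIE build); an open error or any other machine value is consistent with the exec format error reported by every kubectl invocation in this run, including the DeployApp failures below.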

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (3.67s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-349453 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-349453 create -f testdata/busybox.yaml: fork/exec /usr/local/bin/kubectl: exec format error (701.855µs)
start_stop_delete_test.go:196: kubectl --context no-preload-349453 create -f testdata/busybox.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-349453
helpers_test.go:235: (dbg) docker inspect no-preload-349453:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3",
	        "Created": "2024-09-16T11:08:35.617729941Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265359,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:08:35.76202248Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/hostname",
	        "HostsPath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/hosts",
	        "LogPath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3-json.log",
	        "Name": "/no-preload-349453",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-349453:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-349453",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-349453",
	                "Source": "/var/lib/docker/volumes/no-preload-349453/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-349453",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-349453",
	                "name.minikube.sigs.k8s.io": "no-preload-349453",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "de544e1372d8cb8fd0e1807ad2b8bb665590a19816c7b2adbc56336e3321ad31",
	            "SandboxKey": "/var/run/docker/netns/de544e1372d8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-349453": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2cc59d4eff808c995119ae607628ad9854df9618b8c5cd5213cb8d98e98ab4f4",
	                    "EndpointID": "afac10d13376be205fe178b7e126e3c65a6479a99b3db779bc1b7fa1828380a8",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-349453",
	                        "d44e8cc5581d"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
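Note on the inspect dump above: HostConfig.PortBindings requests an empty HostPort for each exposed port (22, 2376, 5000, 8443, 32443), and NetworkSettings.Ports records the ephemeral host ports Docker actually assigned (33063-33067, bound to 127.0.0.1). A minimal sketch of reading those assignments with the Docker Go SDK (github.com/docker/docker/client); this is illustrative, not minikube's own code:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	// Connect using the standard environment (DOCKER_HOST etc.).
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// "no-preload-349453" is the container name from the inspect dump above.
	info, err := cli.ContainerInspect(context.Background(), "no-preload-349453")
	if err != nil {
		panic(err)
	}

	// PortBindings asked for HostPort "" (pick an ephemeral port);
	// NetworkSettings.Ports holds what Docker actually assigned.
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}

Against the container above this would print, among others, "8443/tcp -> 127.0.0.1:33066", matching the API-server mapping shown in the dump.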
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-349453 -n no-preload-349453
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-349453 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-349453 logs -n 25: (1.099943595s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-771611 sudo                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo cat              | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo cat              | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo find             | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo crio             | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-771611                       | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| delete  | -p missing-upgrade-327796              | missing-upgrade-327796    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| start   | -p cert-expiration-021107              | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=docker                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd         |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-917705           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-846070               | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-846070            | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| stop    | -p kubernetes-upgrade-311911           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p cert-options-840054                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=docker                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-311911           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-917705              | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-917705           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p old-k8s-version-371039              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=docker                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	| ssh     | cert-options-840054 ssh                | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-840054 -- sudo         | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-840054                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p no-preload-349453                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:09 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false            |                           |         |         |                     |                     |
	|         | --driver=docker                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1           |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:08:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
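The "Log line format" header above is klog's standard prefix. A small sketch, assuming exactly the format string printed above, that splits one such line into its fields (the field names are mine, not klog's):

package main

import (
	"fmt"
	"regexp"
)

// One capture group per field of the documented prefix:
// severity, month, day, time, thread id, file, line, message.
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I0916 11:08:30.290580  264436 out.go:345] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s/%s time=%s thread=%s at %s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
}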
	I0916 11:08:30.290580  264436 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:08:30.290727  264436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:08:30.290740  264436 out.go:358] Setting ErrFile to fd 2...
	I0916 11:08:30.290747  264436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:08:30.291070  264436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:08:30.291765  264436 out.go:352] Setting JSON to false
	I0916 11:08:30.293115  264436 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3054,"bootTime":1726481856,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:08:30.293251  264436 start.go:139] virtualization: kvm guest
	I0916 11:08:30.295658  264436 out.go:177] * [no-preload-349453] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:08:30.297158  264436 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:08:30.297181  264436 notify.go:220] Checking for updates...
	I0916 11:08:30.299671  264436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:08:30.301189  264436 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:08:30.302491  264436 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:08:30.303773  264436 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:08:30.305030  264436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:08:30.306912  264436 config.go:182] Loaded profile config "cert-expiration-021107": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:08:30.307059  264436 config.go:182] Loaded profile config "kubernetes-upgrade-311911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:08:30.307222  264436 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:08:30.307352  264436 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:08:30.342404  264436 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:08:30.342617  264436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:08:30.412580  264436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-16 11:08:30.399549033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:08:30.412784  264436 docker.go:318] overlay module found
	I0916 11:08:30.414974  264436 out.go:177] * Using the docker driver based on user configuration
	I0916 11:08:30.416257  264436 start.go:297] selected driver: docker
	I0916 11:08:30.416276  264436 start.go:901] validating driver "docker" against <nil>
	I0916 11:08:30.416296  264436 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:08:30.417426  264436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:08:30.481659  264436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-16 11:08:30.467819434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:08:30.481930  264436 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:08:30.482367  264436 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:08:30.484332  264436 out.go:177] * Using Docker driver with root privileges
	I0916 11:08:30.485686  264436 cni.go:84] Creating CNI manager for ""
	I0916 11:08:30.485767  264436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:08:30.485786  264436 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:08:30.485897  264436 start.go:340] cluster config:
	{Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:08:30.487638  264436 out.go:177] * Starting "no-preload-349453" primary control-plane node in "no-preload-349453" cluster
	I0916 11:08:30.489182  264436 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:08:30.490994  264436 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:08:30.492484  264436 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:08:30.492588  264436 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:08:30.492646  264436 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/config.json ...
	I0916 11:08:30.492678  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/config.json: {Name:mk7f1330c6b2d92e29945227c336833ff6ffb7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:30.492798  264436 cache.go:107] acquiring lock: {Name:mk505f3dd823c459cfb83f2d2a39affe63c4c40e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.492789  264436 cache.go:107] acquiring lock: {Name:mk0f2d9e0670c46fe9eb165a8119acf30531a2e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.492888  264436 cache.go:107] acquiring lock: {Name:mk0b25b3ebef8c92ed85c693112bf4f2b400d9b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.492912  264436 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 11:08:30.492874  264436 cache.go:107] acquiring lock: {Name:mkd9c658f7569779b8a27d53e97cc0f70f55a845 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.492875  264436 cache.go:107] acquiring lock: {Name:mkb7cb231873e7918d3e306be4ec4f6091d91485 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.492929  264436 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 136.837µs
	I0916 11:08:30.492947  264436 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:30.492963  264436 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0916 11:08:30.492986  264436 cache.go:107] acquiring lock: {Name:mk8275b1fd51b04034df297d05c3d74274567a6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.493018  264436 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:30.493066  264436 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:30.493091  264436 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:30.493102  264436 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:30.493234  264436 cache.go:107] acquiring lock: {Name:mkd90d764df5e26e345f1c24540d37a0e89a5b18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.493259  264436 cache.go:107] acquiring lock: {Name:mk612053845ede903900e7b583df14a07089be08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.493328  264436 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:30.493343  264436 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0916 11:08:30.494117  264436 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:30.494618  264436 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:30.494682  264436 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:30.494622  264436 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:30.494909  264436 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:30.494695  264436 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0916 11:08:30.496479  264436 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	W0916 11:08:30.521360  264436 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:08:30.521384  264436 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:08:30.521484  264436 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:08:30.521512  264436 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:08:30.521521  264436 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:08:30.521530  264436 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:08:30.521538  264436 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:08:30.581569  264436 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:08:30.581616  264436 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:08:30.581661  264436 start.go:360] acquireMachinesLock for no-preload-349453: {Name:mk8558ad422c1a28af392329b5800e6b7ec410a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.581784  264436 start.go:364] duration metric: took 104.124µs to acquireMachinesLock for "no-preload-349453"
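The lock entries above (the lock.go WriteFile acquisition and acquireMachinesLock) both carry Delay:500ms and a Timeout. An illustrative poll-until-acquired sketch of that shape; tryLock is a hypothetical stand-in for whatever primitive backs the named lock, not a real minikube function:

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var mu sync.Mutex // stand-in for the named machine lock

func tryLock() bool { return mu.TryLock() }

// acquire polls tryLock every delay until success or timeout,
// mirroring the {Delay:500ms Timeout:10m0s} parameters in the log.
func acquire(name string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !tryLock() {
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring lock for " + name)
		}
		time.Sleep(delay)
	}
	return nil
}

func main() {
	start := time.Now()
	if err := acquire("no-preload-349453", 500*time.Millisecond, 10*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("duration metric: took %s to acquire lock\n", time.Since(start))
}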
	I0916 11:08:30.581916  264436 start.go:93] Provisioning new machine with config: &{Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:08:30.582030  264436 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:08:32.243803  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:32.243852  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:31.292696  260870 containerd.go:563] duration metric: took 1.167769285s to copy over tarball
	I0916 11:08:31.292764  260870 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 11:08:33.986408  260870 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.693618841s)
	I0916 11:08:33.986435  260870 containerd.go:570] duration metric: took 2.693711801s to extract the tarball
	I0916 11:08:33.986442  260870 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 11:08:34.058024  260870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:08:34.129814  260870 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 11:08:34.239782  260870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:08:34.273790  260870 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:08:34.273814  260870 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:08:34.273863  260870 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:34.273888  260870 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.273911  260870 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.273925  260870 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.273939  260870 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.273984  260870 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.273983  260870 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0916 11:08:34.273894  260870 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.275457  260870 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.275470  260870 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.275487  260870 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0916 11:08:34.275487  260870 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.275498  260870 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.275465  260870 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:34.275780  260870 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.275781  260870 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.466060  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.4.13-0" and sha "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934"
	I0916 11:08:34.466124  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.488460  260870 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0916 11:08:34.488504  260870 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.488539  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.492122  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.498533  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.20.0" and sha "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080"
	I0916 11:08:34.498612  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.502891  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	I0916 11:08:34.502966  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.2
	I0916 11:08:34.507568  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.20.0" and sha "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	I0916 11:08:34.507620  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.528734  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.20.0" and sha "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899"
	I0916 11:08:34.528802  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.532124  260870 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0916 11:08:34.532165  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.532165  260870 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.532250  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.533288  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns:1.7.0" and sha "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"
	I0916 11:08:34.533345  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.533812  260870 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0916 11:08:34.533878  260870 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0916 11:08:34.533919  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.537025  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.20.0" and sha "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
	I0916 11:08:34.537100  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.557448  260870 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0916 11:08:34.557464  260870 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0916 11:08:34.557501  260870 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.557501  260870 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.557547  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.557547  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.568864  260870 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0916 11:08:34.568898  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.568915  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:08:34.568916  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.568924  260870 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.568944  260870 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0916 11:08:34.568958  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.568969  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.568978  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.568978  260870 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.569018  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.729417  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.729479  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0916 11:08:34.729539  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.729542  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:08:34.729639  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.729679  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.729692  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.846706  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.849695  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.849746  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:08:34.849751  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.849830  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.849855  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:35.032207  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:35.032853  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0916 11:08:35.037891  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0916 11:08:35.037932  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0916 11:08:35.038023  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0916 11:08:35.038051  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:08:35.068211  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0916 11:08:35.124935  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
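The long run of ctr/crictl calls above repeats one pattern per image: query containerd for the image at a pinned sha, mark it "needs transfer" if absent, remove any stale tag, and load it from the local cache. A hypothetical condensation of that loop; the command strings mirror the log, but the final import step is an assumption, since the log only shows "Loading image from: <cache path>":

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage mirrors the per-image pattern in the log: list matching images
// in containerd's k8s.io namespace, and if the pinned sha is absent, remove
// the stale tag and reload the image from the local cache tarball.
func ensureImage(name, sha, cachedTar string) error {
	out, _ := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "ls",
		"name=="+name).CombinedOutput()
	if strings.Contains(string(out), sha) {
		return nil // already present at the expected hash
	}
	fmt.Printf("%q needs transfer: does not exist at hash %q in container runtime\n", name, sha)
	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", name).Run() // ignore "not found"
	// Assumed load step; not taken verbatim from this log.
	return exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", cachedTar).Run()
}

func main() {
	// Values taken from the etcd lines above; the tarball path is illustrative.
	_ = ensureImage("registry.k8s.io/etcd:3.4.13-0",
		"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934",
		"/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0")
}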
	I0916 11:08:30.584062  264436 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:08:30.584349  264436 start.go:159] libmachine.API.Create for "no-preload-349453" (driver="docker")
	I0916 11:08:30.584376  264436 client.go:168] LocalClient.Create starting
	I0916 11:08:30.584454  264436 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 11:08:30.584501  264436 main.go:141] libmachine: Decoding PEM data...
	I0916 11:08:30.584522  264436 main.go:141] libmachine: Parsing certificate...
	I0916 11:08:30.584586  264436 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 11:08:30.584611  264436 main.go:141] libmachine: Decoding PEM data...
	I0916 11:08:30.584626  264436 main.go:141] libmachine: Parsing certificate...
	I0916 11:08:30.585045  264436 cli_runner.go:164] Run: docker network inspect no-preload-349453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:08:30.610640  264436 cli_runner.go:211] docker network inspect no-preload-349453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:08:30.610749  264436 network_create.go:284] running [docker network inspect no-preload-349453] to gather additional debugging logs...
	I0916 11:08:30.610897  264436 cli_runner.go:164] Run: docker network inspect no-preload-349453
	W0916 11:08:30.633247  264436 cli_runner.go:211] docker network inspect no-preload-349453 returned with exit code 1
	I0916 11:08:30.633283  264436 network_create.go:287] error running [docker network inspect no-preload-349453]: docker network inspect no-preload-349453: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-349453 not found
	I0916 11:08:30.633310  264436 network_create.go:289] output of [docker network inspect no-preload-349453]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-349453 not found
	
	** /stderr **
	I0916 11:08:30.633427  264436 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:08:30.661732  264436 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c95c64bb41bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bd:76:ab:c4} reservation:<nil>}
	I0916 11:08:30.663027  264436 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fad43aa9929b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:3f:61:fd} reservation:<nil>}
	I0916 11:08:30.664348  264436 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-49585fce923a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ad:45:94:54} reservation:<nil>}
	I0916 11:08:30.665251  264436 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-45dc384def28 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:95:3e:48:c3} reservation:<nil>}
	I0916 11:08:30.666118  264436 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b7c76f2e9a1f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:4a:59:5d:75} reservation:<nil>}
	I0916 11:08:30.667352  264436 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014118f0}
	I0916 11:08:30.667386  264436 network_create.go:124] attempt to create docker network no-preload-349453 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0916 11:08:30.667448  264436 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-349453 no-preload-349453
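[Note] The subnet probe above steps through private /24 candidates (192.168.49.0 → .58 → .67 → …, i.e. the third octet advancing by 9) until it reaches one that no existing bridge occupies, then creates the network there. A minimal Go sketch of that stepping logic, assuming "taken" is simply the list of subnets already bound to docker bridges; the helper name is illustrative, not minikube's actual API:

package main

import (
	"fmt"
	"net"
)

// freePrivateSubnet mimics the probing seen above: start at 192.168.49.0/24
// and advance the third octet by 9 until a subnet not in `taken` is found.
func freePrivateSubnet(taken []*net.IPNet) *net.IPNet {
	for octet := 49; octet <= 254; octet += 9 {
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
		inUse := false
		for _, t := range taken {
			if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
				inUse = true
				break
			}
		}
		if !inUse {
			return candidate
		}
	}
	return nil
}

func main() {
	var taken []*net.IPNet
	for _, cidr := range []string{"192.168.49.0/24", "192.168.58.0/24",
		"192.168.67.0/24", "192.168.76.0/24", "192.168.85.0/24"} {
		_, n, _ := net.ParseCIDR(cidr)
		taken = append(taken, n)
	}
	fmt.Println(freePrivateSubnet(taken)) // prints 192.168.94.0/24, matching the log
}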
	I0916 11:08:30.736241  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0916 11:08:30.758180  264436 network_create.go:108] docker network no-preload-349453 192.168.94.0/24 created
	I0916 11:08:30.758216  264436 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-349453" container
	I0916 11:08:30.758297  264436 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:08:30.767506  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0916 11:08:30.770224  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0916 11:08:30.784652  264436 cli_runner.go:164] Run: docker volume create no-preload-349453 --label name.minikube.sigs.k8s.io=no-preload-349453 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:08:30.787645  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0916 11:08:30.789687  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0916 11:08:30.791298  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0916 11:08:30.809926  264436 oci.go:103] Successfully created a docker volume no-preload-349453
	I0916 11:08:30.810088  264436 cli_runner.go:164] Run: docker run --rm --name no-preload-349453-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-349453 --entrypoint /usr/bin/test -v no-preload-349453:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:08:30.986670  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0916 11:08:30.986704  264436 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 493.451965ms
	I0916 11:08:30.986721  264436 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0916 11:08:30.992662  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0916 11:08:31.459004  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0916 11:08:31.459044  264436 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 966.158295ms
	I0916 11:08:31.459071  264436 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0916 11:08:32.902149  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0916 11:08:32.902263  264436 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 2.409439664s
	I0916 11:08:32.902288  264436 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0916 11:08:32.954934  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0916 11:08:32.955019  264436 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 2.462197691s
	I0916 11:08:32.955043  264436 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0916 11:08:32.982491  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0916 11:08:32.982539  264436 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 2.489760683s
	I0916 11:08:32.982557  264436 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0916 11:08:33.008590  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0916 11:08:33.008619  264436 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 2.515390278s
	I0916 11:08:33.008636  264436 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0916 11:08:33.364029  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0916 11:08:33.364061  264436 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 2.871077786s
	I0916 11:08:33.364074  264436 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0916 11:08:33.364098  264436 cache.go:87] Successfully saved all images to host disk.
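[Note] As the paths above show, each cached image tar is named after its reference, with the ':' before the tag replaced by '_' (registry.k8s.io/kube-proxy:v1.31.1 → .../images/amd64/registry.k8s.io/kube-proxy_v1.31.1). A small sketch of that mapping; the helper is hypothetical and the cache root hard-coded for illustration:

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// cacheTarPath maps an image reference to its on-disk cache tar,
// following the layout visible in the log above.
func cacheTarPath(cacheRoot, arch, image string) string {
	// "registry.k8s.io/etcd:3.5.15-0" -> "registry.k8s.io/etcd_3.5.15-0"
	rel := strings.ReplaceAll(image, ":", "_")
	return filepath.Join(cacheRoot, "images", arch, rel)
}

func main() {
	fmt.Println(cacheTarPath(
		"/home/jenkins/minikube-integration/19651-3687/.minikube/cache",
		"amd64", "registry.k8s.io/etcd:3.5.15-0"))
}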
	I0916 11:08:35.392285  260870 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0916 11:08:35.392370  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:35.438527  260870 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0916 11:08:35.438576  260870 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:35.438615  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:35.442067  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:35.527055  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 11:08:35.527210  260870 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:35.531022  260870 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0916 11:08:35.531056  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0916 11:08:35.609317  260870 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:35.609393  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:36.042074  260870 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0916 11:08:36.042130  260870 cache_images.go:92] duration metric: took 1.768300894s to LoadCachedImages
	W0916 11:08:36.042205  260870 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0916 11:08:36.042220  260870 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.20.0 containerd true true} ...
	I0916 11:08:36.042328  260870 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-371039 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-371039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:08:36.042388  260870 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:08:36.087682  260870 cni.go:84] Creating CNI manager for ""
	I0916 11:08:36.087706  260870 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:08:36.087715  260870 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:08:36.087732  260870 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-371039 NodeName:old-k8s-version-371039 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 11:08:36.087889  260870 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-371039"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:08:36.087956  260870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 11:08:36.096824  260870 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:08:36.096888  260870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:08:36.105501  260870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (443 bytes)
	I0916 11:08:36.123886  260870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:08:36.142412  260870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2128 bytes)
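[Note] The "scp memory" steps above stream an in-memory buffer straight to a file on the node rather than copying from disk. A rough equivalent using golang.org/x/crypto/ssh that pipes the payload into sudo tee; the host, user, and password auth here are placeholders (the run above authenticates with the generated id_rsa), not minikube's actual runner:

package main

import (
	"bytes"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

// writeRemoteFile is a minimal stand-in for the "scp memory --> path" step:
// it streams an in-memory buffer into `sudo tee` on the target node.
func writeRemoteFile(client *ssh.Client, path string, data []byte) error {
	session, err := client.NewSession()
	if err != nil {
		return err
	}
	defer session.Close()
	session.Stdin = bytes.NewReader(data)
	// tee writes the payload to the destination; its stdout is discarded.
	return session.Run(fmt.Sprintf("sudo tee %q >/dev/null", path))
}

func main() {
	config := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.Password("example")}, // placeholder auth only
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:22", config)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	if err := writeRemoteFile(client, "/var/tmp/minikube/kubeadm.yaml.new",
		[]byte("# kubeadm config payload\n")); err != nil {
		log.Fatal(err)
	}
}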
	I0916 11:08:36.160845  260870 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:08:36.164496  260870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:08:36.175171  260870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:08:36.270265  260870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:08:36.288432  260870 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039 for IP: 192.168.103.2
	I0916 11:08:36.288456  260870 certs.go:194] generating shared ca certs ...
	I0916 11:08:36.288476  260870 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.288648  260870 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:08:36.288704  260870 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:08:36.288714  260870 certs.go:256] generating profile certs ...
	I0916 11:08:36.288781  260870 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.key
	I0916 11:08:36.288802  260870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt with IP's: []
	I0916 11:08:36.405455  260870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt ...
	I0916 11:08:36.405492  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: {Name:mk82ea8fcc0c34a14f2e7e173fd4907cf9b8e3c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.405667  260870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.key ...
	I0916 11:08:36.405681  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.key: {Name:mkae0b2fcb25419f4a74135b55a637382d7b9ec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.405759  260870 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key.2be0dd44
	I0916 11:08:36.405776  260870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt.2be0dd44 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 11:08:36.459262  260870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt.2be0dd44 ...
	I0916 11:08:36.459292  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt.2be0dd44: {Name:mk62a33feea446132b32229b845b6bb967faebe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.459439  260870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key.2be0dd44 ...
	I0916 11:08:36.459453  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key.2be0dd44: {Name:mka88753a9e7441e98fdbaa3acff880db3ae57f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.459521  260870 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt.2be0dd44 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt
	I0916 11:08:36.459592  260870 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key.2be0dd44 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key
	I0916 11:08:36.459649  260870 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.key
	I0916 11:08:36.459664  260870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.crt with IP's: []
	I0916 11:08:36.713401  260870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.crt ...
	I0916 11:08:36.713429  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.crt: {Name:mk0c69e2fe4df3505f52bc05b74e3cc3c5f14ccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.713612  260870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.key ...
	I0916 11:08:36.713633  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.key: {Name:mk505306792a7323c50fbaa6bfa6d39fd8ceb8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
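[Note] Each "generating signed profile cert" step above boils down to minting a key pair and signing its certificate with the shared minikube CA, embedding the IP SANs listed in the log. A trimmed crypto/x509 sketch of that flow; the subject and key usages are illustrative assumptions, error handling is elided, and caCert/caKey stand for the already-loaded CA:

package certsketch

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// signProfileCert mints a fresh key and a certificate signed by the cluster
// CA, carrying the apiserver IP SANs seen above (10.96.0.1, 127.0.0.1, ...).
func signProfileCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"}, // illustrative subject
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}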
	I0916 11:08:36.713831  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:08:36.713869  260870 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:08:36.713876  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:08:36.713896  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:08:36.713920  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:08:36.713946  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:08:36.713982  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:08:36.714511  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:08:36.739372  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:08:36.765128  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:08:36.793852  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:08:36.818818  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 11:08:36.842012  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:08:36.865358  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:08:36.889258  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:08:36.913024  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:08:36.939986  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:08:36.963336  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:08:36.986859  260870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:08:37.003708  260870 ssh_runner.go:195] Run: openssl version
	I0916 11:08:37.009148  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:08:37.018295  260870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:08:37.021964  260870 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:08:37.022022  260870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:08:37.029281  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 11:08:37.038624  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:08:37.048291  260870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:08:37.052395  260870 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:08:37.052464  260870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:08:37.060420  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:08:37.071458  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:08:37.082693  260870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:37.086499  260870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:37.086575  260870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:37.093458  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
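[Note] The test -L / ln -fs dance above exists because OpenSSL looks certificates up by subject hash: the link name (e.g. b5213941.0 for minikubeCA.pem) is whatever openssl x509 -hash prints for the cert. A small sketch of the same pattern in Go; needs root to write /etc/ssl/certs, and the helper name is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// subjectHashLink reproduces the pattern above: OpenSSL's subject hash
// names the /etc/ssl/certs/<hash>.0 symlink the TLS stack resolves.
func subjectHashLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // mirror `ln -fs`: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}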
	I0916 11:08:37.103273  260870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:08:37.106445  260870 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:08:37.106492  260870 kubeadm.go:392] StartCluster: {Name:old-k8s-version-371039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-371039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:08:37.106586  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:08:37.106636  260870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:08:37.155847  260870 cri.go:89] found id: ""
	I0916 11:08:37.155918  260870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:08:37.164683  260870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:08:37.173264  260870 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:08:37.173334  260870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:08:37.181678  260870 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:08:37.181704  260870 kubeadm.go:157] found existing configuration files:
	
	I0916 11:08:37.181753  260870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:08:37.190209  260870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:08:37.190268  260870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:08:37.198604  260870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:08:37.207009  260870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:08:37.207069  260870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:08:37.215349  260870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:08:37.224252  260870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:08:37.224316  260870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:08:37.233091  260870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:08:37.241423  260870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:08:37.241484  260870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:08:37.249898  260870 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:08:37.306344  260870 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0916 11:08:37.306396  260870 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:08:37.343524  260870 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:08:37.343631  260870 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:08:37.343685  260870 kubeadm.go:310] OS: Linux
	I0916 11:08:37.343789  260870 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:08:37.343874  260870 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:08:37.343965  260870 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:08:37.344046  260870 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:08:37.344122  260870 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:08:37.344202  260870 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:08:37.344274  260870 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:08:37.344353  260870 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:08:37.433846  260870 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:08:37.434024  260870 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:08:37.434226  260870 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:08:37.627977  260870 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:08:37.244785  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:37.244822  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:37.548910  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:53692->192.168.76.2:8443: read: connection reset by peer
	I0916 11:08:35.539780  264436 cli_runner.go:217] Completed: docker run --rm --name no-preload-349453-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-349453 --entrypoint /usr/bin/test -v no-preload-349453:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (4.729567672s)
	I0916 11:08:35.539815  264436 oci.go:107] Successfully prepared a docker volume no-preload-349453
	I0916 11:08:35.539835  264436 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	W0916 11:08:35.539966  264436 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:08:35.540080  264436 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:08:35.601426  264436 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-349453 --name no-preload-349453 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-349453 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-349453 --network no-preload-349453 --ip 192.168.94.2 --volume no-preload-349453:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:08:35.950506  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Running}}
	I0916 11:08:35.975787  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:08:35.997694  264436 cli_runner.go:164] Run: docker exec no-preload-349453 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:08:36.047229  264436 oci.go:144] the created container "no-preload-349453" has a running status.
	I0916 11:08:36.047269  264436 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa...
	I0916 11:08:36.201725  264436 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:08:36.232588  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:08:36.251268  264436 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:08:36.251296  264436 kic_runner.go:114] Args: [docker exec --privileged no-preload-349453 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:08:36.308796  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:08:36.359437  264436 machine.go:93] provisionDockerMachine start ...
	I0916 11:08:36.359543  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:36.385658  264436 main.go:141] libmachine: Using SSH client type: native
	I0916 11:08:36.385896  264436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0916 11:08:36.385910  264436 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:08:36.568192  264436 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-349453
	
	I0916 11:08:36.568220  264436 ubuntu.go:169] provisioning hostname "no-preload-349453"
	I0916 11:08:36.568291  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:36.590804  264436 main.go:141] libmachine: Using SSH client type: native
	I0916 11:08:36.591032  264436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0916 11:08:36.591049  264436 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-349453 && echo "no-preload-349453" | sudo tee /etc/hostname
	I0916 11:08:36.756044  264436 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-349453
	
	I0916 11:08:36.756141  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:36.777822  264436 main.go:141] libmachine: Using SSH client type: native
	I0916 11:08:36.778002  264436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0916 11:08:36.778020  264436 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-349453' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-349453/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-349453' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:08:36.911965  264436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:08:36.911996  264436 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:08:36.912019  264436 ubuntu.go:177] setting up certificates
	I0916 11:08:36.912033  264436 provision.go:84] configureAuth start
	I0916 11:08:36.912089  264436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:08:36.932315  264436 provision.go:143] copyHostCerts
	I0916 11:08:36.932386  264436 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:08:36.932399  264436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:08:36.932471  264436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:08:36.932569  264436 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:08:36.932580  264436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:08:36.932621  264436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:08:36.932706  264436 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:08:36.932717  264436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:08:36.932753  264436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:08:36.932828  264436 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.no-preload-349453 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-349453]
	I0916 11:08:37.209883  264436 provision.go:177] copyRemoteCerts
	I0916 11:08:37.209938  264436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:08:37.209969  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:37.228662  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:08:37.329001  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:08:37.353063  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 11:08:37.377321  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 11:08:37.402804  264436 provision.go:87] duration metric: took 490.759265ms to configureAuth
	I0916 11:08:37.402834  264436 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:08:37.403023  264436 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:08:37.403037  264436 machine.go:96] duration metric: took 1.043574485s to provisionDockerMachine
	I0916 11:08:37.403043  264436 client.go:171] duration metric: took 6.81866199s to LocalClient.Create
	I0916 11:08:37.403064  264436 start.go:167] duration metric: took 6.818716316s to libmachine.API.Create "no-preload-349453"
	I0916 11:08:37.403076  264436 start.go:293] postStartSetup for "no-preload-349453" (driver="docker")
	I0916 11:08:37.403088  264436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:08:37.403140  264436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:08:37.403174  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:37.422611  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:08:37.517150  264436 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:08:37.520935  264436 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:08:37.520967  264436 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:08:37.520979  264436 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:08:37.520988  264436 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:08:37.520999  264436 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:08:37.521061  264436 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:08:37.521153  264436 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:08:37.521276  264436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:08:37.530028  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:08:37.556224  264436 start.go:296] duration metric: took 153.132782ms for postStartSetup
	I0916 11:08:37.556638  264436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:08:37.580790  264436 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/config.json ...
	I0916 11:08:37.581157  264436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:08:37.581227  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:37.603557  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:08:37.696690  264436 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:08:37.700950  264436 start.go:128] duration metric: took 7.118902099s to createHost
	I0916 11:08:37.700981  264436 start.go:83] releasing machines lock for "no-preload-349453", held for 7.119184519s
	I0916 11:08:37.701048  264436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:08:37.719562  264436 ssh_runner.go:195] Run: cat /version.json
	I0916 11:08:37.719628  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:37.719633  264436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:08:37.719749  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:37.738079  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:08:37.739424  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:08:37.834189  264436 ssh_runner.go:195] Run: systemctl --version
	I0916 11:08:37.922817  264436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:08:37.927917  264436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:08:37.952584  264436 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:08:37.952658  264436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:08:37.983959  264436 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
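[Note] Disabling the bridge and podman CNI configs is just a rename to a .mk_disabled suffix so the runtime ignores them and the CNI minikube selects owns pod networking. A sketch of the equivalent operation run locally; the real flow shells out to find/mv on the node, as shown above:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableBridgeCNIs mirrors the find/mv above: any bridge or podman CNI
// config in the directory is renamed with a .mk_disabled suffix.
func disableBridgeCNIs(dir string) error {
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", m)
		}
	}
	return nil
}

func main() {
	if err := disableBridgeCNIs("/etc/cni/net.d"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}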
	I0916 11:08:37.983991  264436 start.go:495] detecting cgroup driver to use...
	I0916 11:08:37.984035  264436 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:08:37.984084  264436 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:08:37.996632  264436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:08:38.008687  264436 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:08:38.008749  264436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:08:38.022160  264436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:08:38.035383  264436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:08:38.121722  264436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:08:38.206523  264436 docker.go:233] disabling docker service ...
	I0916 11:08:38.206610  264436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:08:38.227941  264436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:08:38.240500  264436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:08:38.314496  264436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:08:38.393479  264436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:08:38.405005  264436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:08:38.420776  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:08:38.431358  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:08:38.441360  264436 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:08:38.441418  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:08:38.451477  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:08:38.461117  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:08:38.470893  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:08:38.481242  264436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:08:38.490694  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:08:38.500709  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:08:38.510200  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
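Taken together, the sed edits above drive /etc/containerd/config.toml toward a CRI section like the following (an excerpt of the intended end state, reconstructed from the commands rather than dumped from the node):

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      sandbox_image = "registry.k8s.io/pause:3.10"

      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false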
	I0916 11:08:38.519856  264436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:08:38.530496  264436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:08:38.539419  264436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:08:38.617864  264436 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 11:08:38.714406  264436 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:08:38.714480  264436 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:08:38.718630  264436 start.go:563] Will wait 60s for crictl version
	I0916 11:08:38.718678  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:38.722108  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:08:38.756823  264436 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:08:38.756917  264436 ssh_runner.go:195] Run: containerd --version
	I0916 11:08:38.780335  264436 ssh_runner.go:195] Run: containerd --version
	I0916 11:08:38.807827  264436 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:08:37.630791  260870 out.go:235]   - Generating certificates and keys ...
	I0916 11:08:37.630901  260870 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:08:37.630988  260870 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:08:37.916130  260870 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:08:38.019360  260870 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:08:38.158112  260870 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:08:38.636583  260870 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:08:39.235249  260870 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:08:39.235559  260870 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-371039] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:08:39.445341  260870 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:08:39.445561  260870 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-371039] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:08:39.651806  260870 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:08:39.784722  260870 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:08:39.962483  260870 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:08:39.962681  260870 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:08:38.809241  264436 cli_runner.go:164] Run: docker network inspect no-preload-349453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:08:38.826659  264436 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0916 11:08:38.830468  264436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
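The grep-then-rewrite above is an idempotent hosts update: any stale host.minikube.internal entry is filtered out before the current one is appended, so repeated starts never stack duplicates. The net effect is a single line in the node's /etc/hosts:

    192.168.94.1	host.minikube.internal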
	I0916 11:08:38.840961  264436 kubeadm.go:883] updating cluster {Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:08:38.841074  264436 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:08:38.841123  264436 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:08:38.880915  264436 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 11:08:38.880944  264436 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:08:38.881004  264436 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:38.881044  264436 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:38.881075  264436 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:38.881092  264436 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0916 11:08:38.881101  264436 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:38.881114  264436 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:38.881057  264436 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:38.881079  264436 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:38.882295  264436 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:38.882294  264436 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:38.882392  264436 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0916 11:08:38.882555  264436 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:38.882579  264436 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:38.882584  264436 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:38.882604  264436 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:38.882640  264436 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.057574  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.11.3" and sha "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6"
	I0916 11:08:39.057644  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:39.079273  264436 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0916 11:08:39.079331  264436 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:39.079378  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.082866  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:39.087405  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.31.1" and sha "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561"
	I0916 11:08:39.087451  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10" and sha "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"
	I0916 11:08:39.087473  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.087504  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10
	I0916 11:08:39.098221  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.31.1" and sha "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee"
	I0916 11:08:39.098303  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:39.099842  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.31.1" and sha "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b"
	I0916 11:08:39.099923  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:39.104576  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.31.1" and sha "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1"
	I0916 11:08:39.104653  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:39.112051  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.5.15-0" and sha "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4"
	I0916 11:08:39.112113  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:39.134734  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:39.134733  264436 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I0916 11:08:39.134813  264436 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0916 11:08:39.134858  264436 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0916 11:08:39.134908  264436 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0916 11:08:39.134931  264436 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:39.134948  264436 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0916 11:08:39.134970  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.134979  264436 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:39.134864  264436 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.135036  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.135077  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.134913  264436 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:39.135127  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.134827  264436 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I0916 11:08:39.135203  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.143907  264436 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0916 11:08:39.143963  264436 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:39.144023  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.169982  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:39.170019  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:39.170040  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:39.170093  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:08:39.170098  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.170142  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:39.170202  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:39.354583  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0916 11:08:39.354683  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:08:39.354784  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:08:39.354865  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:39.354955  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:39.355274  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:39.355389  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.355478  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:39.541651  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.3': No such file or directory
	I0916 11:08:39.541683  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:39.541688  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 --> /var/lib/minikube/images/coredns_v1.11.3 (18571264 bytes)
	I0916 11:08:39.541724  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:08:39.541800  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:39.541868  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:39.541804  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:39.541947  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.775749  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0916 11:08:39.775784  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0916 11:08:39.775871  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:08:39.775871  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I0916 11:08:39.775955  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0916 11:08:39.775968  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0916 11:08:39.775918  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0916 11:08:39.776028  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0916 11:08:39.776041  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0916 11:08:39.776053  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:08:39.776071  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:08:39.776108  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:08:39.802405  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.31.1': No such file or directory
	I0916 11:08:39.802441  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 --> /var/lib/minikube/images/kube-apiserver_v1.31.1 (28057088 bytes)
	I0916 11:08:39.802507  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.31.1': No such file or directory
	I0916 11:08:39.802523  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 --> /var/lib/minikube/images/kube-scheduler_v1.31.1 (20187136 bytes)
	I0916 11:08:39.803116  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.31.1': No such file or directory
	I0916 11:08:39.803143  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 --> /var/lib/minikube/images/kube-proxy_v1.31.1 (30214144 bytes)
	I0916 11:08:39.824892  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I0916 11:08:39.824933  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I0916 11:08:39.825041  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.31.1': No such file or directory
	I0916 11:08:39.825061  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 --> /var/lib/minikube/images/kube-controller-manager_v1.31.1 (26231808 bytes)
	I0916 11:08:39.825117  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.15-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.15-0': No such file or directory
	I0916 11:08:39.825133  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 --> /var/lib/minikube/images/etcd_3.5.15-0 (56918528 bytes)
	I0916 11:08:39.959272  264436 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10
	I0916 11:08:39.959408  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10
	I0916 11:08:40.023367  264436 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0916 11:08:40.023457  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:40.164705  264436 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0916 11:08:40.164748  264436 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:40.164791  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:40.164996  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I0916 11:08:40.165039  264436 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:08:40.165080  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:08:40.197926  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:40.241204  260870 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:08:40.317576  260870 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:08:40.426492  260870 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:08:40.596293  260870 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:08:40.608073  260870 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:08:40.609253  260870 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:08:40.609315  260870 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:08:40.694187  260870 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:08:37.733427  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:37.733912  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:08:38.232918  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:40.696082  260870 out.go:235]   - Booting up control plane ...
	I0916 11:08:40.696191  260870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:08:40.702656  260870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:08:40.704099  260870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:08:40.705275  260870 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:08:40.708468  260870 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 11:08:41.423354  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.11.3: (1.25824846s)
	I0916 11:08:41.423382  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0916 11:08:41.423399  264436 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.225442266s)
	I0916 11:08:41.423474  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:41.423406  264436 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:08:41.423554  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:08:41.458101  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:42.482721  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.059134257s)
	I0916 11:08:42.482753  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0916 11:08:42.482774  264436 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0916 11:08:42.482776  264436 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.024643374s)
	I0916 11:08:42.482817  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 11:08:42.482820  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0916 11:08:42.482894  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:43.495795  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.012950946s)
	I0916 11:08:43.495827  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0916 11:08:43.495859  264436 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:08:43.495876  264436 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.01296017s)
	I0916 11:08:43.495905  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0916 11:08:43.495919  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:08:43.495923  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0916 11:08:44.472580  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0916 11:08:44.472626  264436 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:08:44.472679  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:08:43.233973  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:43.234020  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:45.540795  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.31.1: (1.068091792s)
	I0916 11:08:45.540818  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0916 11:08:45.540840  264436 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:08:45.540887  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:08:47.901181  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.15-0: (2.360264084s)
	I0916 11:08:47.901218  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0916 11:08:47.901243  264436 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:47.901300  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:48.984630  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.083298899s)
	I0916 11:08:48.984663  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0916 11:08:48.984689  264436 cache_images.go:123] Successfully loaded all cached images
	I0916 11:08:48.984695  264436 cache_images.go:92] duration metric: took 10.103732508s to LoadCachedImages
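Every transfer above follows the same check-copy-import round trip: stat the tarball on the node, copy it from the local image cache only if missing, then hand it to containerd under the k8s.io namespace. A minimal shell sketch of one image (coredns; paths adapted from the log, NODE standing in for the minikube container's SSH endpoint):

    img=/var/lib/minikube/images/coredns_v1.11.3
    ssh NODE stat -c '%s %y' "$img" 2>/dev/null ||
      scp "$HOME/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" NODE:"$img"
    ssh NODE sudo ctr -n=k8s.io images import "$img"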
	I0916 11:08:48.984709  264436 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.31.1 containerd true true} ...
	I0916 11:08:48.984835  264436 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-349453 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:08:48.984901  264436 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:08:49.032116  264436 cni.go:84] Creating CNI manager for ""
	I0916 11:08:49.032193  264436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:08:49.032211  264436 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:08:49.032240  264436 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-349453 NodeName:no-preload-349453 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:08:49.032400  264436 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-349453"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
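The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what the kubeadm init call below consumes. On a kubeadm recent enough to ship the subcommand (v1.26+, so the v1.31.1 binary here qualifies), the rendered file can be sanity-checked before init:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml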
	I0916 11:08:49.032472  264436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:08:49.044890  264436 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 11:08:49.045024  264436 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 11:08:49.056347  264436 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 11:08:49.056466  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 11:08:49.056673  264436 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0916 11:08:49.057166  264436 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0916 11:08:49.066816  264436 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 11:08:49.066853  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 11:08:49.943393  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 11:08:49.947835  264436 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 11:08:49.947869  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 11:08:50.181687  264436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:08:50.194184  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 11:08:50.197931  264436 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 11:08:50.197959  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
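The checksum=file:... query in the download URLs tells minikube's downloader to verify each binary against its published .sha256 before installing it. The equivalent manual check, using the same dl.k8s.io release URLs:

    curl -fsSLO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet
    curl -fsSLO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check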
	I0916 11:08:50.395973  264436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:08:50.404517  264436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0916 11:08:50.422561  264436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:08:50.445036  264436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0916 11:08:50.465483  264436 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:08:50.470084  264436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:08:50.482485  264436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:08:50.547958  264436 ssh_runner.go:195] Run: sudo systemctl start kubelet
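The 321-byte 10-kubeadm.conf drop-in is not printed verbatim, but from the unit snippet logged earlier it plausibly reduces to an ExecStart override (a reconstruction, not a dump):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-349453 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2

The empty ExecStart= line clears the ExecStart inherited from kubelet.service, so the override replaces it instead of appending a second command.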
	I0916 11:08:50.563251  264436 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453 for IP: 192.168.94.2
	I0916 11:08:50.563273  264436 certs.go:194] generating shared ca certs ...
	I0916 11:08:50.563298  264436 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:50.563456  264436 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:08:50.563505  264436 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:08:50.563517  264436 certs.go:256] generating profile certs ...
	I0916 11:08:50.563627  264436 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.key
	I0916 11:08:50.563648  264436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt with IP's: []
	I0916 11:08:50.618540  264436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt ...
	I0916 11:08:50.618569  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: {Name:mk337746002b2836356861444fb583afa57b1d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:50.618748  264436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.key ...
	I0916 11:08:50.618771  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.key: {Name:mk9c5aa9e774198cfcb02ec0058188ab8edfaed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:50.618845  264436 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key.85f7849d
	I0916 11:08:50.618860  264436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt.85f7849d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0916 11:08:50.875559  264436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt.85f7849d ...
	I0916 11:08:50.875598  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt.85f7849d: {Name:mk481f9ec5bc5101be906a4ddce3a071783b2c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:50.875829  264436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key.85f7849d ...
	I0916 11:08:50.875849  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key.85f7849d: {Name:mk4e723f8d9625ad4b4558240421f0210105e957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:50.875954  264436 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt.85f7849d -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt
	I0916 11:08:50.876051  264436 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key.85f7849d -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key
	I0916 11:08:50.876127  264436 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key
	I0916 11:08:50.876147  264436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.crt with IP's: []
	I0916 11:08:51.303691  264436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.crt ...
	I0916 11:08:51.303759  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.crt: {Name:mk2a9791d1a10304f96ba7678b9c3811d30b3fb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:51.303945  264436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key ...
	I0916 11:08:51.303961  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key: {Name:mka24f10f8b232c8b84bdf799b45958f97693ca9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
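minikube generates these certificates in Go (crypto.go), but an equivalent apiserver cert, signed by the shared CA with the same IP SANs, can be produced with openssl; a minimal bash sketch, assuming ca.crt/ca.key are the minikubeCA pair:

    openssl req -new -newkey rsa:2048 -nodes \
      -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -days 365 -out apiserver.crt \
      -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.94.2')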
	I0916 11:08:51.304131  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:08:51.304175  264436 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:08:51.304185  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:08:51.304215  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:08:51.304238  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:08:51.304268  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:08:51.304303  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:08:51.304858  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:08:51.329508  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:08:51.353418  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:08:51.377163  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:08:51.401708  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:08:51.428477  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:08:51.452154  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:08:51.475382  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:08:51.498240  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:08:51.521123  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:08:51.543771  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:08:51.574546  264436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:08:51.591543  264436 ssh_runner.go:195] Run: openssl version
	I0916 11:08:51.597060  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:08:51.606641  264436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:51.610471  264436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:51.610524  264436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:51.617457  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:08:51.626770  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:08:51.636218  264436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:08:51.640059  264436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:08:51.640119  264436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:08:51.646939  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 11:08:51.657727  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:08:51.667722  264436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:08:51.671519  264436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:08:51.671587  264436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:08:51.678428  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
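The symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: the library resolves trust anchors in /etc/ssl/certs as <hash>.0, where the hash is derived from the certificate's subject. Reproducing the minikubeCA case from this log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching the b5213941.0 link created above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0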
	I0916 11:08:51.687852  264436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:08:51.691310  264436 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:08:51.691367  264436 kubeadm.go:392] StartCluster: {Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:08:51.691439  264436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:08:51.691486  264436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:08:51.724621  264436 cri.go:89] found id: ""
	I0916 11:08:51.724695  264436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:08:51.734987  264436 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:08:51.744004  264436 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:08:51.744075  264436 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:08:51.755258  264436 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:08:51.755283  264436 kubeadm.go:157] found existing configuration files:
	
	I0916 11:08:51.755333  264436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:08:51.768412  264436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:08:51.768474  264436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:08:51.777349  264436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:08:51.785929  264436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:08:51.786003  264436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:08:51.794532  264436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:08:51.803220  264436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:08:51.803342  264436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:08:51.812093  264436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:08:51.820809  264436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:08:51.820873  264436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:08:51.829429  264436 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:08:51.865931  264436 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:08:51.865989  264436 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:08:51.885115  264436 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:08:51.885236  264436 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:08:51.885298  264436 kubeadm.go:310] OS: Linux
	I0916 11:08:51.885387  264436 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:08:51.885459  264436 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:08:51.885534  264436 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:08:51.885607  264436 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:08:51.885679  264436 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:08:51.885763  264436 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:08:51.885838  264436 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:08:51.885903  264436 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:08:51.885972  264436 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:08:51.941753  264436 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:08:51.941901  264436 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:08:51.942020  264436 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:08:51.947090  264436 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:08:48.234717  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:48.234765  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:51.949776  264436 out.go:235]   - Generating certificates and keys ...
	I0916 11:08:51.949877  264436 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:08:51.949940  264436 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:08:52.122699  264436 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:08:52.249550  264436 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:08:52.352028  264436 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:08:52.445139  264436 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:08:52.652691  264436 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:08:52.652923  264436 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-349453] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0916 11:08:52.751947  264436 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:08:52.752095  264436 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-349453] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0916 11:08:52.932640  264436 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:08:53.294351  264436 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:08:53.505338  264436 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:08:53.505405  264436 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:08:53.576935  264436 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:08:53.665445  264436 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:08:53.781881  264436 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:08:54.142742  264436 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:08:54.452184  264436 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:08:54.452959  264436 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:08:54.456552  264436 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:08:55.210981  260870 kubeadm.go:310] [apiclient] All control plane components are healthy after 14.502545 seconds
	I0916 11:08:55.211125  260870 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:08:54.460019  264436 out.go:235]   - Booting up control plane ...
	I0916 11:08:54.460188  264436 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:08:54.460277  264436 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:08:54.460605  264436 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:08:54.473017  264436 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:08:54.480142  264436 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:08:54.480269  264436 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:08:54.584649  264436 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:08:54.584816  264436 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:08:55.085943  264436 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.441739ms
	I0916 11:08:55.086058  264436 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:08:55.222604  260870 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:08:55.747349  260870 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:08:55.747575  260870 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-371039 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0916 11:08:56.255515  260870 kubeadm.go:310] [bootstrap-token] Using token: 7575lv.7anw6bs48k43jhje
	I0916 11:08:56.257005  260870 out.go:235]   - Configuring RBAC rules ...
	I0916 11:08:56.257190  260870 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:08:56.261944  260870 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:08:56.268917  260870 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:08:56.271036  260870 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:08:56.273203  260870 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:08:56.275371  260870 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:08:56.282938  260870 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:08:56.505496  260870 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:08:56.674523  260870 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:08:56.675435  260870 kubeadm.go:310] 
	I0916 11:08:56.675511  260870 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:08:56.675550  260870 kubeadm.go:310] 
	I0916 11:08:56.675666  260870 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:08:56.675679  260870 kubeadm.go:310] 
	I0916 11:08:56.675769  260870 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:08:56.675860  260870 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:08:56.675953  260870 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:08:56.675963  260870 kubeadm.go:310] 
	I0916 11:08:56.676057  260870 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:08:56.676076  260870 kubeadm.go:310] 
	I0916 11:08:56.676146  260870 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:08:56.676157  260870 kubeadm.go:310] 
	I0916 11:08:56.676232  260870 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:08:56.676346  260870 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:08:56.676449  260870 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:08:56.676459  260870 kubeadm.go:310] 
	I0916 11:08:56.676577  260870 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:08:56.676690  260870 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:08:56.676702  260870 kubeadm.go:310] 
	I0916 11:08:56.676805  260870 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7575lv.7anw6bs48k43jhje \
	I0916 11:08:56.676964  260870 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:08:56.676999  260870 kubeadm.go:310]     --control-plane 
	I0916 11:08:56.677008  260870 kubeadm.go:310] 
	I0916 11:08:56.677141  260870 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:08:56.677154  260870 kubeadm.go:310] 
	I0916 11:08:56.677267  260870 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7575lv.7anw6bs48k43jhje \
	I0916 11:08:56.677407  260870 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 11:08:56.679220  260870 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:08:56.679366  260870 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:08:56.679403  260870 cni.go:84] Creating CNI manager for ""
	I0916 11:08:56.679418  260870 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:08:56.681153  260870 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:08:53.235786  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:53.235837  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:00.087505  264436 kubeadm.go:310] [api-check] The API server is healthy after 5.001488031s
	I0916 11:09:00.098392  264436 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:09:00.110362  264436 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:09:00.128932  264436 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:09:00.129187  264436 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-349453 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:09:00.137036  264436 kubeadm.go:310] [bootstrap-token] Using token: 7hha87.1fmccqtk5mel1d08
	I0916 11:08:56.682324  260870 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:08:56.686207  260870 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.20.0/kubectl ...
	I0916 11:08:56.686225  260870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:08:56.703974  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:08:57.087171  260870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:08:57.087286  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:57.087327  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-371039 minikube.k8s.io/updated_at=2024_09_16T11_08_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=old-k8s-version-371039 minikube.k8s.io/primary=true
	I0916 11:08:57.094951  260870 ops.go:34] apiserver oom_adj: -16
	I0916 11:08:57.203677  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:57.703899  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:58.204371  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:58.703936  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:59.203918  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:59.704356  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:00.204155  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:00.138673  264436 out.go:235]   - Configuring RBAC rules ...
	I0916 11:09:00.138843  264436 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:09:00.143189  264436 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:09:00.149188  264436 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:09:00.151958  264436 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:09:00.154792  264436 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:09:00.158528  264436 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:09:00.493607  264436 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:09:00.933899  264436 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:09:01.494256  264436 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:09:01.495468  264436 kubeadm.go:310] 
	I0916 11:09:01.495563  264436 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:09:01.495578  264436 kubeadm.go:310] 
	I0916 11:09:01.495691  264436 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:09:01.495707  264436 kubeadm.go:310] 
	I0916 11:09:01.495784  264436 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:09:01.495872  264436 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:09:01.495955  264436 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:09:01.495973  264436 kubeadm.go:310] 
	I0916 11:09:01.496023  264436 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:09:01.496031  264436 kubeadm.go:310] 
	I0916 11:09:01.496072  264436 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:09:01.496104  264436 kubeadm.go:310] 
	I0916 11:09:01.496187  264436 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:09:01.496302  264436 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:09:01.496394  264436 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:09:01.496403  264436 kubeadm.go:310] 
	I0916 11:09:01.496503  264436 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:09:01.496612  264436 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:09:01.496625  264436 kubeadm.go:310] 
	I0916 11:09:01.496698  264436 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7hha87.1fmccqtk5mel1d08 \
	I0916 11:09:01.496843  264436 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:09:01.496894  264436 kubeadm.go:310] 	--control-plane 
	I0916 11:09:01.496904  264436 kubeadm.go:310] 
	I0916 11:09:01.497000  264436 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:09:01.497009  264436 kubeadm.go:310] 
	I0916 11:09:01.497108  264436 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7hha87.1fmccqtk5mel1d08 \
	I0916 11:09:01.497239  264436 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 11:09:01.499128  264436 kubeadm.go:310] W0916 11:08:51.862879    1806 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:09:01.499457  264436 kubeadm.go:310] W0916 11:08:51.863553    1806 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:09:01.499768  264436 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:09:01.499953  264436 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:09:01.499988  264436 cni.go:84] Creating CNI manager for ""
	I0916 11:09:01.500000  264436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:09:01.501798  264436 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:08:58.236473  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:58.236522  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:58.646312  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:53610->192.168.76.2:8443: read: connection reset by peer
	I0916 11:08:58.733440  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:58.733905  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:08:59.233571  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:59.234025  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:08:59.733738  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:59.734160  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:00.232786  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:00.233148  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:00.732769  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:00.733156  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:01.232818  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:01.233245  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:01.732791  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:01.733233  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:02.232778  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:02.233205  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:00.704323  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:01.204668  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:01.703878  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:02.204580  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:02.704540  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:03.203853  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:03.703804  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:04.204076  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:04.703894  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:05.204018  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:01.503075  264436 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:09:01.507256  264436 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:09:01.507277  264436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:09:01.524545  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:09:01.727673  264436 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:09:01.727825  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-349453 minikube.k8s.io/updated_at=2024_09_16T11_09_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=no-preload-349453 minikube.k8s.io/primary=true
	I0916 11:09:01.728021  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:01.738577  264436 ops.go:34] apiserver oom_adj: -16
	I0916 11:09:01.822449  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:02.323484  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:02.822776  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:03.322974  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:03.823263  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:04.323195  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:04.822824  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:05.323453  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:05.822962  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:05.891978  264436 kubeadm.go:1113] duration metric: took 4.164004406s to wait for elevateKubeSystemPrivileges
	I0916 11:09:05.892013  264436 kubeadm.go:394] duration metric: took 14.200646498s to StartCluster
	I0916 11:09:05.892048  264436 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:05.892129  264436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:09:05.895884  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:05.896177  264436 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:09:05.896353  264436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:09:05.896448  264436 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:09:05.896535  264436 addons.go:69] Setting storage-provisioner=true in profile "no-preload-349453"
	I0916 11:09:05.896553  264436 addons.go:69] Setting default-storageclass=true in profile "no-preload-349453"
	I0916 11:09:05.896597  264436 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:09:05.896617  264436 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-349453"
	I0916 11:09:05.896562  264436 addons.go:234] Setting addon storage-provisioner=true in "no-preload-349453"
	I0916 11:09:05.896721  264436 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:05.896991  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:05.897173  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:05.899195  264436 out.go:177] * Verifying Kubernetes components...
	I0916 11:09:05.900632  264436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:05.920822  264436 addons.go:234] Setting addon default-storageclass=true in "no-preload-349453"
	I0916 11:09:05.920872  264436 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:05.921227  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:05.922853  264436 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:09:05.924578  264436 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:05.924598  264436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:09:05.924661  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:05.953061  264436 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:05.953083  264436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:09:05.953143  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:05.957772  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:05.975394  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:06.034923  264436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:09:06.040755  264436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:09:06.143479  264436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:06.240584  264436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:06.536048  264436 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0916 11:09:06.539111  264436 node_ready.go:35] waiting up to 6m0s for node "no-preload-349453" to be "Ready" ...
	I0916 11:09:06.547015  264436 node_ready.go:49] node "no-preload-349453" has status "Ready":"True"
	I0916 11:09:06.547042  264436 node_ready.go:38] duration metric: took 7.901547ms for node "no-preload-349453" to be "Ready" ...
	I0916 11:09:06.547095  264436 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:09:06.555838  264436 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:06.932212  264436 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:09:02.733678  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:02.734077  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:03.233262  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:03.233718  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:03.733114  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:03.733576  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:04.233410  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:04.233881  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:04.733574  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:04.733949  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:05.233532  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:05.233933  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:05.733512  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:05.733953  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:06.233584  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:06.234044  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:06.733637  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:06.734106  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:07.233844  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:07.234332  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:05.703927  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:06.204762  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:06.704365  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:07.204577  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:07.704243  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:08.204636  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:08.703906  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:09.204497  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:09.704711  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:10.204447  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:06.933504  264436 addons.go:510] duration metric: took 1.037058154s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:09:07.040392  264436 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-349453" context rescaled to 1 replicas
	I0916 11:09:08.563999  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:10.703866  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:11.204455  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:11.703883  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:12.204359  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:12.332820  260870 kubeadm.go:1113] duration metric: took 15.245596472s to wait for elevateKubeSystemPrivileges
	I0916 11:09:12.332850  260870 kubeadm.go:394] duration metric: took 35.226361301s to StartCluster
	I0916 11:09:12.332867  260870 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:12.332941  260870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:09:12.334200  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:12.334409  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:09:12.334422  260870 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:09:12.334489  260870 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:09:12.334595  260870 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-371039"
	I0916 11:09:12.334614  260870 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-371039"
	I0916 11:09:12.334633  260870 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:09:12.334646  260870 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-371039"
	I0916 11:09:12.334621  260870 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-371039"
	I0916 11:09:12.334766  260870 host.go:66] Checking if "old-k8s-version-371039" exists ...
	I0916 11:09:12.335022  260870 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:09:12.335157  260870 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:09:12.336297  260870 out.go:177] * Verifying Kubernetes components...
	I0916 11:09:12.337718  260870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:12.357086  260870 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:09:07.733738  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:07.734147  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:08.233742  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:08.234148  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:08.733771  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:08.734260  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:09.233808  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:12.358665  260870 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:12.358689  260870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:09:12.358754  260870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:09:12.359729  260870 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-371039"
	I0916 11:09:12.359827  260870 host.go:66] Checking if "old-k8s-version-371039" exists ...
	I0916 11:09:12.360343  260870 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:09:12.383998  260870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/old-k8s-version-371039/id_rsa Username:docker}
	I0916 11:09:12.389783  260870 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:12.389805  260870 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:09:12.389868  260870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:09:12.408070  260870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/old-k8s-version-371039/id_rsa Username:docker}
	I0916 11:09:12.546475  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:09:12.553288  260870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:09:12.647594  260870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:12.648622  260870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:13.259944  260870 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0916 11:09:13.261675  260870 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-371039" to be "Ready" ...
	I0916 11:09:13.325576  260870 node_ready.go:49] node "old-k8s-version-371039" has status "Ready":"True"
	I0916 11:09:13.325600  260870 node_ready.go:38] duration metric: took 63.887515ms for node "old-k8s-version-371039" to be "Ready" ...
	I0916 11:09:13.325612  260870 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:09:13.335290  260870 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-78djj" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:13.528295  260870 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:09:13.530325  260870 addons.go:510] duration metric: took 1.195834763s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:09:13.764048  260870 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-371039" context rescaled to 1 replicas
	I0916 11:09:11.062167  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:13.062678  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:15.063223  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:14.234494  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:09:14.234540  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:15.342274  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:17.841129  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:17.560912  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:19.562845  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:19.235598  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:09:19.235680  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:09:19.235754  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:09:19.269692  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:19.269715  254463 cri.go:89] found id: "78f1386658cccbd9f65d9408afecb31c6db52bb84d72237168313ed1e03541f7"
	I0916 11:09:19.269720  254463 cri.go:89] found id: ""
	I0916 11:09:19.269729  254463 logs.go:276] 2 containers: [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd 78f1386658cccbd9f65d9408afecb31c6db52bb84d72237168313ed1e03541f7]
	I0916 11:09:19.269789  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:19.273402  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:19.276885  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:09:19.276963  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:09:19.308719  254463 cri.go:89] found id: ""
	I0916 11:09:19.308746  254463 logs.go:276] 0 containers: []
	W0916 11:09:19.308755  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:09:19.308771  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:09:19.308830  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:09:19.342334  254463 cri.go:89] found id: ""
	I0916 11:09:19.342361  254463 logs.go:276] 0 containers: []
	W0916 11:09:19.342372  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:09:19.342379  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:09:19.342437  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:09:19.375316  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:19.375337  254463 cri.go:89] found id: ""
	I0916 11:09:19.375343  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:09:19.375391  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:19.378835  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:09:19.378904  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:09:19.411345  254463 cri.go:89] found id: ""
	I0916 11:09:19.411370  254463 logs.go:276] 0 containers: []
	W0916 11:09:19.411378  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:09:19.411384  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:09:19.411441  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:09:19.445048  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:19.445068  254463 cri.go:89] found id: "d134862346ef7847a5bec871679eb22f4238875f0baf507bd5b7da3db1391339"
	I0916 11:09:19.445072  254463 cri.go:89] found id: ""
	I0916 11:09:19.445079  254463 logs.go:276] 2 containers: [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9 d134862346ef7847a5bec871679eb22f4238875f0baf507bd5b7da3db1391339]
	I0916 11:09:19.445131  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:19.448637  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:19.451955  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:09:19.452028  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:09:19.485223  254463 cri.go:89] found id: ""
	I0916 11:09:19.485248  254463 logs.go:276] 0 containers: []
	W0916 11:09:19.485257  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:09:19.485263  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:09:19.485337  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:09:19.517574  254463 cri.go:89] found id: ""
	I0916 11:09:19.517608  254463 logs.go:276] 0 containers: []
	W0916 11:09:19.517618  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:09:19.517650  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:09:19.517669  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:09:19.557222  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:09:19.557264  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:09:19.594969  254463 logs.go:123] Gathering logs for kube-apiserver [78f1386658cccbd9f65d9408afecb31c6db52bb84d72237168313ed1e03541f7] ...
	I0916 11:09:19.595000  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78f1386658cccbd9f65d9408afecb31c6db52bb84d72237168313ed1e03541f7"
	I0916 11:09:19.630078  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:09:19.630121  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:19.681369  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:09:19.681400  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:09:20.341144  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:22.840781  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:24.840812  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:22.060907  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:24.062298  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:26.841099  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:29.341391  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:29.841004  260870 pod_ready.go:93] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:29.841028  260870 pod_ready.go:82] duration metric: took 16.505708515s for pod "coredns-74ff55c5b-78djj" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:29.841039  260870 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-lgf42" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:29.842812  260870 pod_ready.go:98] error getting pod "coredns-74ff55c5b-lgf42" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-lgf42" not found
	I0916 11:09:29.842836  260870 pod_ready.go:82] duration metric: took 1.790096ms for pod "coredns-74ff55c5b-lgf42" in "kube-system" namespace to be "Ready" ...
	E0916 11:09:29.842848  260870 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-lgf42" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-lgf42" not found
	I0916 11:09:29.842857  260870 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:26.562286  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:29.061948  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:30.186175  254463 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.504756872s)
	W0916 11:09:30.186209  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:33322->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:33322->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I0916 11:09:30.186217  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:09:30.186233  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:30.223830  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:09:30.223863  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:30.256977  254463 logs.go:123] Gathering logs for kube-controller-manager [d134862346ef7847a5bec871679eb22f4238875f0baf507bd5b7da3db1391339] ...
	I0916 11:09:30.257004  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d134862346ef7847a5bec871679eb22f4238875f0baf507bd5b7da3db1391339"
	I0916 11:09:30.292614  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:09:30.292649  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:09:30.353308  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:09:30.353345  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:09:31.848871  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:33.849476  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:31.561693  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:34.061654  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:35.061879  264436 pod_ready.go:93] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.061902  264436 pod_ready.go:82] duration metric: took 28.506020354s for pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.061911  264436 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mvlrh" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.063656  264436 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-mvlrh" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mvlrh" not found
	I0916 11:09:35.063679  264436 pod_ready.go:82] duration metric: took 1.762521ms for pod "coredns-7c65d6cfc9-mvlrh" in "kube-system" namespace to be "Ready" ...
	E0916 11:09:35.063692  264436 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-mvlrh" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mvlrh" not found
	I0916 11:09:35.063701  264436 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.068205  264436 pod_ready.go:93] pod "etcd-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.068227  264436 pod_ready.go:82] duration metric: took 4.517527ms for pod "etcd-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.068239  264436 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.072552  264436 pod_ready.go:93] pod "kube-apiserver-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.072576  264436 pod_ready.go:82] duration metric: took 4.327352ms for pod "kube-apiserver-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.072586  264436 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.076783  264436 pod_ready.go:93] pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.076810  264436 pod_ready.go:82] duration metric: took 4.217917ms for pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.076820  264436 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n7m28" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.260357  264436 pod_ready.go:93] pod "kube-proxy-n7m28" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.260383  264436 pod_ready.go:82] duration metric: took 183.557365ms for pod "kube-proxy-n7m28" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.260393  264436 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.660221  264436 pod_ready.go:93] pod "kube-scheduler-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.660246  264436 pod_ready.go:82] duration metric: took 399.846457ms for pod "kube-scheduler-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.660257  264436 pod_ready.go:39] duration metric: took 29.113141917s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:09:35.660274  264436 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:09:35.660348  264436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:09:35.673043  264436 api_server.go:72] duration metric: took 29.776823258s to wait for apiserver process to appear ...
	I0916 11:09:35.673068  264436 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:09:35.673092  264436 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0916 11:09:35.676860  264436 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0916 11:09:35.677763  264436 api_server.go:141] control plane version: v1.31.1
	I0916 11:09:35.677787  264436 api_server.go:131] duration metric: took 4.712796ms to wait for apiserver health ...
	I0916 11:09:35.677800  264436 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:09:35.862606  264436 system_pods.go:59] 8 kube-system pods found
	I0916 11:09:35.862640  264436 system_pods.go:61] "coredns-7c65d6cfc9-9zbwk" [427a37dd-9a56-455f-bd9e-3ee604164481] Running
	I0916 11:09:35.862646  264436 system_pods.go:61] "etcd-no-preload-349453" [11377b15-3f0d-463f-8b6f-8eae19108040] Running
	I0916 11:09:35.862651  264436 system_pods.go:61] "kindnet-qbh58" [b43e889b-3179-48c6-b6c4-e6c42131c50c] Running
	I0916 11:09:35.862655  264436 system_pods.go:61] "kube-apiserver-no-preload-349453" [06a64195-a484-412f-930b-310199ce4d80] Running
	I0916 11:09:35.862660  264436 system_pods.go:61] "kube-controller-manager-no-preload-349453" [f31864d9-a8cc-4d04-8f94-497ae381d9ca] Running
	I0916 11:09:35.862664  264436 system_pods.go:61] "kube-proxy-n7m28" [a0580caf-fdc3-483b-a5b9-f29db25b8ef6] Running
	I0916 11:09:35.862667  264436 system_pods.go:61] "kube-scheduler-no-preload-349453" [94505c46-f9fd-4bc6-8287-27c30e4696f0] Running
	I0916 11:09:35.862672  264436 system_pods.go:61] "storage-provisioner" [2f218f7f-9232-4d85-bd8d-6cdc6516c83f] Running
	I0916 11:09:35.862678  264436 system_pods.go:74] duration metric: took 184.872639ms to wait for pod list to return data ...
	I0916 11:09:35.862685  264436 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:09:36.061081  264436 default_sa.go:45] found service account: "default"
	I0916 11:09:36.061114  264436 default_sa.go:55] duration metric: took 198.421124ms for default service account to be created ...
	I0916 11:09:36.061127  264436 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:09:36.262420  264436 system_pods.go:86] 8 kube-system pods found
	I0916 11:09:36.262457  264436 system_pods.go:89] "coredns-7c65d6cfc9-9zbwk" [427a37dd-9a56-455f-bd9e-3ee604164481] Running
	I0916 11:09:36.262466  264436 system_pods.go:89] "etcd-no-preload-349453" [11377b15-3f0d-463f-8b6f-8eae19108040] Running
	I0916 11:09:36.262471  264436 system_pods.go:89] "kindnet-qbh58" [b43e889b-3179-48c6-b6c4-e6c42131c50c] Running
	I0916 11:09:36.262477  264436 system_pods.go:89] "kube-apiserver-no-preload-349453" [06a64195-a484-412f-930b-310199ce4d80] Running
	I0916 11:09:36.262483  264436 system_pods.go:89] "kube-controller-manager-no-preload-349453" [f31864d9-a8cc-4d04-8f94-497ae381d9ca] Running
	I0916 11:09:36.262489  264436 system_pods.go:89] "kube-proxy-n7m28" [a0580caf-fdc3-483b-a5b9-f29db25b8ef6] Running
	I0916 11:09:36.262494  264436 system_pods.go:89] "kube-scheduler-no-preload-349453" [94505c46-f9fd-4bc6-8287-27c30e4696f0] Running
	I0916 11:09:36.262500  264436 system_pods.go:89] "storage-provisioner" [2f218f7f-9232-4d85-bd8d-6cdc6516c83f] Running
	I0916 11:09:36.262508  264436 system_pods.go:126] duration metric: took 201.374457ms to wait for k8s-apps to be running ...
	I0916 11:09:36.262526  264436 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:09:36.262581  264436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:09:36.276938  264436 system_svc.go:56] duration metric: took 14.399242ms WaitForService to wait for kubelet
	I0916 11:09:36.276973  264436 kubeadm.go:582] duration metric: took 30.380758589s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:09:36.277002  264436 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:09:36.460520  264436 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:09:36.460557  264436 node_conditions.go:123] node cpu capacity is 8
	I0916 11:09:36.460575  264436 node_conditions.go:105] duration metric: took 183.566872ms to run NodePressure ...
	I0916 11:09:36.460589  264436 start.go:241] waiting for startup goroutines ...
	I0916 11:09:36.460599  264436 start.go:246] waiting for cluster config update ...
	I0916 11:09:36.460617  264436 start.go:255] writing updated cluster config ...
	I0916 11:09:36.460929  264436 ssh_runner.go:195] Run: rm -f paused
	I0916 11:09:36.468132  264436 out.go:177] * Done! kubectl is now configured to use "no-preload-349453" cluster and "default" namespace by default
	E0916 11:09:36.469497  264436 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	30acbc7b45e29       c69fa2e9cbf5f       3 seconds ago       Running             coredns                   0                   290db8b125607       coredns-7c65d6cfc9-9zbwk
	b30641ccb64e3       12968670680f4       27 seconds ago      Running             kindnet-cni               0                   06502caa119d4       kindnet-qbh58
	6fe6dedc21740       6e38f40d628db       30 seconds ago      Running             storage-provisioner       0                   0e0c238d616bc       storage-provisioner
	49542fa155836       60c005f310ff3       30 seconds ago      Running             kube-proxy                0                   0072787e29726       kube-proxy-n7m28
	a4b95a39232c2       175ffd71cce3d       41 seconds ago      Running             kube-controller-manager   0                   8aeec0e766fdb       kube-controller-manager-no-preload-349453
	5c82d38a57c77       9aa1fad941575       41 seconds ago      Running             kube-scheduler            0                   8200d83c8723c       kube-scheduler-no-preload-349453
	0b8b34459e371       2e96e5913fc06       41 seconds ago      Running             etcd                      0                   151cda393a927       etcd-no-preload-349453
	5d35346ecb3ed       6bab7719df100       41 seconds ago      Running             kube-apiserver            0                   4db1422602ab8       kube-apiserver-no-preload-349453
	
	
	==> containerd <==
	Sep 16 11:09:07 no-preload-349453 containerd[860]: time="2024-09-16T11:09:07.362357609Z" level=info msg="StartContainer for \"6fe6dedc217401e73f5795b6cd5cfdd5d65a4314df29b9b5be8775dc661cbffa\" returns successfully"
	Sep 16 11:09:07 no-preload-349453 containerd[860]: time="2024-09-16T11:09:07.545781963Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.755702767Z" level=info msg="ImageCreate event name:\"docker.io/kindest/kindnetd:v20240813-c6f155d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.756499468Z" level=info msg="stop pulling image docker.io/kindest/kindnetd:v20240813-c6f155d6: active requests=0, bytes read=36804223"
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.757807119Z" level=info msg="ImageCreate event name:\"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.760234610Z" level=info msg="ImageCreate event name:\"docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.760712943Z" level=info msg="Pulled image \"docker.io/kindest/kindnetd:v20240813-c6f155d6\" with image id \"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f\", repo tag \"docker.io/kindest/kindnetd:v20240813-c6f155d6\", repo digest \"docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166\", size \"36793393\" in 2.901543409s"
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.760776822Z" level=info msg="PullImage \"docker.io/kindest/kindnetd:v20240813-c6f155d6\" returns image reference \"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f\""
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.764672858Z" level=info msg="CreateContainer within sandbox \"06502caa119d42a5346554004e633bc20fb46b393d2a00987f03e1f4604bb0cb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.778359852Z" level=info msg="CreateContainer within sandbox \"06502caa119d42a5346554004e633bc20fb46b393d2a00987f03e1f4604bb0cb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a\""
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.779047237Z" level=info msg="StartContainer for \"b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a\""
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.837438306Z" level=info msg="StartContainer for \"b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a\" returns successfully"
	Sep 16 11:09:11 no-preload-349453 containerd[860]: time="2024-09-16T11:09:11.254668504Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Sep 16 11:09:19 no-preload-349453 containerd[860]: time="2024-09-16T11:09:19.751075294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9zbwk,Uid:427a37dd-9a56-455f-bd9e-3ee604164481,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:09:19 no-preload-349453 containerd[860]: time="2024-09-16T11:09:19.776142115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9zbwk,Uid:427a37dd-9a56-455f-bd9e-3ee604164481,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\": failed to find network info for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\""
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.751187315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9zbwk,Uid:427a37dd-9a56-455f-bd9e-3ee604164481,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.785477296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.785552025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.785563820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.785650565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.833330346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9zbwk,Uid:427a37dd-9a56-455f-bd9e-3ee604164481,Namespace:kube-system,Attempt:0,} returns sandbox id \"290db8b125607d520b2543935c4df4e547514fdab3734aeba6c5c7a161992240\""
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.836077045Z" level=info msg="CreateContainer within sandbox \"290db8b125607d520b2543935c4df4e547514fdab3734aeba6c5c7a161992240\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.851638309Z" level=info msg="CreateContainer within sandbox \"290db8b125607d520b2543935c4df4e547514fdab3734aeba6c5c7a161992240\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d\""
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.852272763Z" level=info msg="StartContainer for \"30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d\""
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.896573615Z" level=info msg="StartContainer for \"30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d\" returns successfully"
	
	
	==> coredns [30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57592 - 13339 "HINFO IN 8962497822399797364.2477591037072266195. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011748401s
	
	
	==> describe nodes <==
	Name:               no-preload-349453
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-349453
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=no-preload-349453
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_09_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:08:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-349453
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:09:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-349453
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ac769ff9aa04aaf92b2dd2bf68f2f82
	  System UUID:                28dd4bdd-2700-4b67-8389-386a38b68a64
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9zbwk                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     31s
	  kube-system                 etcd-no-preload-349453                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-qbh58                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-no-preload-349453             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-349453    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-n7m28                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-no-preload-349453             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 30s   kube-proxy       
	  Normal   Starting                 37s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 37s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  37s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  37s   kubelet          Node no-preload-349453 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    37s   kubelet          Node no-preload-349453 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     37s   kubelet          Node no-preload-349453 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           33s   node-controller  Node no-preload-349453 event: Registered Node no-preload-349453 in Controller
	
	
	==> dmesg <==
	[Sep16 11:00] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000002] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000040] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000004] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +1.028430] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.004229] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000004] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +2.011572] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000009] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +4.031652] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000018] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +8.195254] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000007] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[Sep16 11:03] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-26a107bdc9bf
	[  +0.000006] ll header: 00000000: 02 42 7f 04 0c 59 02 42 c0 a8 4c 02 08 00
	[  +1.005595] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-26a107bdc9bf
	[  +0.000005] ll header: 00000000: 02 42 7f 04 0c 59 02 42 c0 a8 4c 02 08 00
	[Sep16 11:07] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d] <==
	{"level":"info","ts":"2024-09-16T11:08:56.341999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:08:56.342053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:08:56.342083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2024-09-16T11:08:56.342100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.342109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.342122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.342153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.343021Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.343585Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:08:56.343582Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-349453 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:08:56.343760Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:08:56.343891Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:08:56.343954Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:08:56.344739Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.344836Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.344861Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.344977Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:08:56.345568Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:08:56.346072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:08:56.346688Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2024-09-16T11:08:59.251819Z","caller":"traceutil/trace.go:171","msg":"trace[909223504] linearizableReadLoop","detail":"{readStateIndex:78; appliedIndex:77; }","duration":"124.299534ms","start":"2024-09-16T11:08:59.127499Z","end":"2024-09-16T11:08:59.251798Z","steps":["trace[909223504] 'read index received'  (duration: 61.163504ms)","trace[909223504] 'applied index is now lower than readState.Index'  (duration: 63.13541ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:08:59.251872Z","caller":"traceutil/trace.go:171","msg":"trace[1280881910] transaction","detail":"{read_only:false; response_revision:74; number_of_response:1; }","duration":"128.600617ms","start":"2024-09-16T11:08:59.123247Z","end":"2024-09-16T11:08:59.251847Z","steps":["trace[1280881910] 'process raft request'  (duration: 65.397729ms)","trace[1280881910] 'compare'  (duration: 63.021346ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T11:08:59.251948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.433124ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-09-16T11:08:59.252009Z","caller":"traceutil/trace.go:171","msg":"trace[1202054448] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:74; }","duration":"124.508287ms","start":"2024-09-16T11:08:59.127491Z","end":"2024-09-16T11:08:59.251999Z","steps":["trace[1202054448] 'agreement among raft nodes before linearized reading'  (duration: 124.386955ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:08:59.439373Z","caller":"traceutil/trace.go:171","msg":"trace[1221868137] transaction","detail":"{read_only:false; response_revision:75; number_of_response:1; }","duration":"183.565022ms","start":"2024-09-16T11:08:59.255790Z","end":"2024-09-16T11:08:59.439355Z","steps":["trace[1221868137] 'process raft request'  (duration: 120.890221ms)","trace[1221868137] 'compare'  (duration: 62.56898ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:09:37 up 52 min,  0 users,  load average: 3.85, 3.56, 2.17
	Linux no-preload-349453 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a] <==
	I0916 11:09:10.022282       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:09:10.022538       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0916 11:09:10.022724       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:09:10.022743       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:09:10.022773       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:09:10.420723       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:09:10.421181       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:09:10.421189       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:09:10.721709       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:09:10.721737       1 metrics.go:61] Registering metrics
	I0916 11:09:10.721785       1 controller.go:374] Syncing nftables rules
	I0916 11:09:20.425801       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:09:20.425835       1 main.go:299] handling current node
	I0916 11:09:30.427819       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:09:30.427851       1 main.go:299] handling current node
	
	
	==> kube-apiserver [5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817] <==
	I0916 11:08:58.121126       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 11:08:58.121202       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:08:58.121292       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:08:58.121348       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:08:58.121378       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:08:58.125871       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 11:08:58.125896       1 policy_source.go:224] refreshing policies
	E0916 11:08:58.127837       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0916 11:08:58.128408       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 11:08:58.330384       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:08:59.052089       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:08:59.116570       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:08:59.116590       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:08:59.898521       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:08:59.934152       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:09:00.034219       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:09:00.046397       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0916 11:09:00.047830       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:09:00.052313       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:09:00.132789       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:09:00.923235       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:09:00.932632       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:09:00.942148       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:09:05.485210       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 11:09:05.785724       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969] <==
	I0916 11:09:04.958862       1 shared_informer.go:320] Caches are synced for disruption
	I0916 11:09:05.033681       1 shared_informer.go:320] Caches are synced for service account
	I0916 11:09:05.036943       1 shared_informer.go:320] Caches are synced for namespace
	I0916 11:09:05.044777       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:09:05.088060       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:09:05.501553       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:09:05.582619       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:09:05.582651       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:09:05.590465       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-349453"
	I0916 11:09:06.045501       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="255.463843ms"
	I0916 11:09:06.052468       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.901285ms"
	I0916 11:09:06.052558       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="51.667µs"
	I0916 11:09:06.053697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.481µs"
	I0916 11:09:06.131407       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="100.516µs"
	I0916 11:09:06.647300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="17.526542ms"
	I0916 11:09:06.654990       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.635755ms"
	I0916 11:09:06.655120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="81.851µs"
	I0916 11:09:07.881805       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="70.434µs"
	I0916 11:09:07.887535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.293µs"
	I0916 11:09:07.891032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.532µs"
	I0916 11:09:11.264980       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-349453"
	I0916 11:09:31.598112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-349453"
	I0916 11:09:34.905630       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="79.673µs"
	I0916 11:09:34.923877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.97092ms"
	I0916 11:09:34.923984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.271µs"
	
	
	==> kube-proxy [49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04] <==
	I0916 11:09:06.867943       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:09:06.995156       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.94.2"]
	E0916 11:09:06.995228       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:09:07.016693       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:09:07.016755       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:09:07.018577       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:09:07.018989       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:09:07.019027       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:09:07.020423       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:09:07.020505       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:09:07.020533       1 config.go:328] "Starting node config controller"
	I0916 11:09:07.020679       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:09:07.020603       1 config.go:199] "Starting service config controller"
	I0916 11:09:07.020757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:09:07.121453       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:09:07.121498       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:09:07.121503       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69] <==
	W0916 11:08:59.221741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:08:59.221790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.261959       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:59.262001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.265606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:08:59.265658       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.490611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:08:59.490652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.579438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:08:59.579489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.585912       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:08:59.585982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.629574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:59.629617       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.663059       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:08:59.663100       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 11:08:59.685631       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:08:59.685685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.695015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:59.695064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.697126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:08:59.697157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.699134       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:08:59.699171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 11:09:02.728201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: E0916 11:09:06.435017    2271 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"190e287f0460cfd292e37ea473da8054fb99e136fc9d7a7ad33a51e55404f6e1\": failed to find network info for sandbox \"190e287f0460cfd292e37ea473da8054fb99e136fc9d7a7ad33a51e55404f6e1\"" pod="kube-system/coredns-7c65d6cfc9-mvlrh"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: E0916 11:09:06.435076    2271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mvlrh_kube-system(42523754-f961-412c-9c6a-2ad437fadc08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mvlrh_kube-system(42523754-f961-412c-9c6a-2ad437fadc08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"190e287f0460cfd292e37ea473da8054fb99e136fc9d7a7ad33a51e55404f6e1\\\": failed to find network info for sandbox \\\"190e287f0460cfd292e37ea473da8054fb99e136fc9d7a7ad33a51e55404f6e1\\\"\"" pod="kube-system/coredns-7c65d6cfc9-mvlrh" podUID="42523754-f961-412c-9c6a-2ad437fadc08"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: E0916 11:09:06.443968    2271 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\": failed to find network info for sandbox \"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\""
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: E0916 11:09:06.444042    2271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\": failed to find network info for sandbox \"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\"" pod="kube-system/coredns-7c65d6cfc9-9zbwk"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: E0916 11:09:06.444070    2271 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\": failed to find network info for sandbox \"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\"" pod="kube-system/coredns-7c65d6cfc9-9zbwk"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: E0916 11:09:06.444119    2271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9zbwk_kube-system(427a37dd-9a56-455f-bd9e-3ee604164481)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9zbwk_kube-system(427a37dd-9a56-455f-bd9e-3ee604164481)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\\\": failed to find network info for sandbox \\\"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\\\"\"" pod="kube-system/coredns-7c65d6cfc9-9zbwk" podUID="427a37dd-9a56-455f-bd9e-3ee604164481"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.858135    2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n7m28" podStartSLOduration=1.858118054 podStartE2EDuration="1.858118054s" podCreationTimestamp="2024-09-16 11:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:09:06.857384714 +0000 UTC m=+6.194635549" watchObservedRunningTime="2024-09-16 11:09:06.858118054 +0000 UTC m=+6.195368888"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.925656    2271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42523754-f961-412c-9c6a-2ad437fadc08-config-volume\") pod \"42523754-f961-412c-9c6a-2ad437fadc08\" (UID: \"42523754-f961-412c-9c6a-2ad437fadc08\") "
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.925713    2271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggz6t\" (UniqueName: \"kubernetes.io/projected/42523754-f961-412c-9c6a-2ad437fadc08-kube-api-access-ggz6t\") pod \"42523754-f961-412c-9c6a-2ad437fadc08\" (UID: \"42523754-f961-412c-9c6a-2ad437fadc08\") "
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.925791    2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96zdr\" (UniqueName: \"kubernetes.io/projected/2f218f7f-9232-4d85-bd8d-6cdc6516c83f-kube-api-access-96zdr\") pod \"storage-provisioner\" (UID: \"2f218f7f-9232-4d85-bd8d-6cdc6516c83f\") " pod="kube-system/storage-provisioner"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.925872    2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2f218f7f-9232-4d85-bd8d-6cdc6516c83f-tmp\") pod \"storage-provisioner\" (UID: \"2f218f7f-9232-4d85-bd8d-6cdc6516c83f\") " pod="kube-system/storage-provisioner"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.926063    2271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42523754-f961-412c-9c6a-2ad437fadc08-config-volume" (OuterVolumeSpecName: "config-volume") pod "42523754-f961-412c-9c6a-2ad437fadc08" (UID: "42523754-f961-412c-9c6a-2ad437fadc08"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.928599    2271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42523754-f961-412c-9c6a-2ad437fadc08-kube-api-access-ggz6t" (OuterVolumeSpecName: "kube-api-access-ggz6t") pod "42523754-f961-412c-9c6a-2ad437fadc08" (UID: "42523754-f961-412c-9c6a-2ad437fadc08"). InnerVolumeSpecName "kube-api-access-ggz6t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 11:09:07 no-preload-349453 kubelet[2271]: I0916 11:09:07.026104    2271 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42523754-f961-412c-9c6a-2ad437fadc08-config-volume\") on node \"no-preload-349453\" DevicePath \"\""
	Sep 16 11:09:07 no-preload-349453 kubelet[2271]: I0916 11:09:07.026141    2271 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ggz6t\" (UniqueName: \"kubernetes.io/projected/42523754-f961-412c-9c6a-2ad437fadc08-kube-api-access-ggz6t\") on node \"no-preload-349453\" DevicePath \"\""
	Sep 16 11:09:07 no-preload-349453 kubelet[2271]: I0916 11:09:07.848853    2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.848828737 podStartE2EDuration="1.848828737s" podCreationTimestamp="2024-09-16 11:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:09:07.848577382 +0000 UTC m=+7.185828216" watchObservedRunningTime="2024-09-16 11:09:07.848828737 +0000 UTC m=+7.186079571"
	Sep 16 11:09:08 no-preload-349453 kubelet[2271]: I0916 11:09:08.753518    2271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42523754-f961-412c-9c6a-2ad437fadc08" path="/var/lib/kubelet/pods/42523754-f961-412c-9c6a-2ad437fadc08/volumes"
	Sep 16 11:09:09 no-preload-349453 kubelet[2271]: I0916 11:09:09.856622    2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qbh58" podStartSLOduration=1.952501992 podStartE2EDuration="4.856602316s" podCreationTimestamp="2024-09-16 11:09:05 +0000 UTC" firstStartedPulling="2024-09-16 11:09:06.857718399 +0000 UTC m=+6.194969216" lastFinishedPulling="2024-09-16 11:09:09.761818723 +0000 UTC m=+9.099069540" observedRunningTime="2024-09-16 11:09:09.856516169 +0000 UTC m=+9.193767018" watchObservedRunningTime="2024-09-16 11:09:09.856602316 +0000 UTC m=+9.193853150"
	Sep 16 11:09:11 no-preload-349453 kubelet[2271]: I0916 11:09:11.254039    2271 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 11:09:11 no-preload-349453 kubelet[2271]: I0916 11:09:11.255017    2271 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 11:09:19 no-preload-349453 kubelet[2271]: E0916 11:09:19.776626    2271 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\": failed to find network info for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\""
	Sep 16 11:09:19 no-preload-349453 kubelet[2271]: E0916 11:09:19.776742    2271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\": failed to find network info for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\"" pod="kube-system/coredns-7c65d6cfc9-9zbwk"
	Sep 16 11:09:19 no-preload-349453 kubelet[2271]: E0916 11:09:19.776774    2271 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\": failed to find network info for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\"" pod="kube-system/coredns-7c65d6cfc9-9zbwk"
	Sep 16 11:09:19 no-preload-349453 kubelet[2271]: E0916 11:09:19.776838    2271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9zbwk_kube-system(427a37dd-9a56-455f-bd9e-3ee604164481)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9zbwk_kube-system(427a37dd-9a56-455f-bd9e-3ee604164481)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\\\": failed to find network info for sandbox \\\"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\\\"\"" pod="kube-system/coredns-7c65d6cfc9-9zbwk" podUID="427a37dd-9a56-455f-bd9e-3ee604164481"
	Sep 16 11:09:34 no-preload-349453 kubelet[2271]: I0916 11:09:34.905623    2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9zbwk" podStartSLOduration=28.905602056 podStartE2EDuration="28.905602056s" podCreationTimestamp="2024-09-16 11:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:09:34.905581503 +0000 UTC m=+34.242832342" watchObservedRunningTime="2024-09-16 11:09:34.905602056 +0000 UTC m=+34.242852892"
	
	
	==> storage-provisioner [6fe6dedc217401e73f5795b6cd5cfdd5d65a4314df29b9b5be8775dc661cbffa] <==
	I0916 11:09:07.370432       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:09:07.378006       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:09:07.378048       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:09:07.384602       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:09:07.384718       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6ddd7c41-8f63-47a8-9650-2ec5bbdf92e6", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-349453_a47d0e4b-78c2-4200-80ff-c828e8be01d0 became leader
	I0916 11:09:07.384766       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-349453_a47d0e4b-78c2-4200-80ff-c828e8be01d0!
	I0916 11:09:07.485942       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-349453_a47d0e4b-78c2-4200-80ff-c828e8be01d0!
	

-- /stdout --
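The repeated "failed to find network info for sandbox" errors in the kubelet log above are the signature of pod sandbox creation being attempted before any CNI configuration exists on the node; the same log shows the pod CIDR only being applied at 11:09:11 and coredns finally starting at 11:09:34, so these errors were transient in this run. As a rough illustration only (not part of this run), a stdlib-only Go sketch of the node-side check, assuming containerd's conventional CNI config directory /etc/cni/net.d:

// Sketch under assumptions stated above: list the CNI configs the runtime
// would see; an empty directory reproduces the sandbox errors in the log.
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// containerd's default CNI configuration directory (assumed, not taken
	// from this report).
	confs, _ := filepath.Glob("/etc/cni/net.d/*")
	if len(confs) == 0 {
		fmt.Println("no CNI config present; pod sandbox creation will keep failing")
		return
	}
	for _, c := range confs {
		fmt.Println("found CNI config:", c)
	}
}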
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-349453 -n no-preload-349453
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-349453 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context no-preload-349453 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (469.936µs)
helpers_test.go:263: kubectl --context no-preload-349453 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
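The non-zero exit above is unrelated to the cluster: "fork/exec ...: exec format error" means the kernel refused to execute /usr/local/bin/kubectl, which almost always indicates a binary built for a different architecture than the host. A hedged, stdlib-only Go sketch of that diagnosis (the path comes from the failing command above; everything else is illustrative):

// Sketch only: compare the ELF machine type of the binary against the host
// architecture; a mismatch reproduces the exec format error seen above.
package main

import (
	"debug/elf"
	"fmt"
	"runtime"
)

func main() {
	f, err := elf.Open("/usr/local/bin/kubectl") // path from the failing command
	if err != nil {
		fmt.Println("not a readable ELF binary:", err)
		return
	}
	defer f.Close()
	// e.g. EM_X86_64 is expected on an amd64 host; anything else explains the error.
	fmt.Printf("binary machine: %v, host: %s/%s\n", f.Machine, runtime.GOOS, runtime.GOARCH)
}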
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-349453
helpers_test.go:235: (dbg) docker inspect no-preload-349453:

-- stdout --
	[
	    {
	        "Id": "d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3",
	        "Created": "2024-09-16T11:08:35.617729941Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265359,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:08:35.76202248Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/hostname",
	        "HostsPath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/hosts",
	        "LogPath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3-json.log",
	        "Name": "/no-preload-349453",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-349453:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-349453",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-349453",
	                "Source": "/var/lib/docker/volumes/no-preload-349453/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-349453",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-349453",
	                "name.minikube.sigs.k8s.io": "no-preload-349453",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "de544e1372d8cb8fd0e1807ad2b8bb665590a19816c7b2adbc56336e3321ad31",
	            "SandboxKey": "/var/run/docker/netns/de544e1372d8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-349453": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2cc59d4eff808c995119ae607628ad9854df9618b8c5cd5213cb8d98e98ab4f4",
	                    "EndpointID": "afac10d13376be205fe178b7e126e3c65a6479a99b3db779bc1b7fa1828380a8",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-349453",
	                        "d44e8cc5581d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
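Reading the inspect output above: the container itself is healthy ("Status": "running", RestartCount 0), and each exposed guest port is published on a loopback host port, e.g. the API server's 8443/tcp at 127.0.0.1:33066. A small illustrative Go sketch (assumptions: docker CLI on PATH, container name taken from this report) of how such a mapping is typically extracted:

// Sketch only: pull the host port bound to 8443/tcp out of `docker inspect`
// JSON like the block above, using only the standard library.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "inspect", "no-preload-349453").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var containers []struct {
		NetworkSettings struct {
			Ports map[string][]struct{ HostIp, HostPort string }
		}
	}
	if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
		fmt.Println("unexpected inspect output:", err)
		return
	}
	for _, b := range containers[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort) // e.g. 127.0.0.1:33066
	}
}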
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-349453 -n no-preload-349453
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-349453 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-349453 logs -n 25: (1.146486482s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-771611 sudo                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo cat              | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo cat              | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo find             | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo crio             | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-771611                       | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| delete  | -p missing-upgrade-327796              | missing-upgrade-327796    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| start   | -p cert-expiration-021107              | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=docker                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd         |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-917705           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-846070               | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-846070            | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| stop    | -p kubernetes-upgrade-311911           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p cert-options-840054                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=docker                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-311911           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1           |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-917705              | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-917705           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p old-k8s-version-371039              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=docker                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	| ssh     | cert-options-840054 ssh                | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-840054 -- sudo         | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-840054                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p no-preload-349453                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:09 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false            |                           |         |         |                     |                     |
	|         | --driver=docker                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1           |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:08:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:08:30.290580  264436 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:08:30.290727  264436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:08:30.290740  264436 out.go:358] Setting ErrFile to fd 2...
	I0916 11:08:30.290747  264436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:08:30.291070  264436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:08:30.291765  264436 out.go:352] Setting JSON to false
	I0916 11:08:30.293115  264436 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3054,"bootTime":1726481856,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:08:30.293251  264436 start.go:139] virtualization: kvm guest
	I0916 11:08:30.295658  264436 out.go:177] * [no-preload-349453] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:08:30.297158  264436 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:08:30.297181  264436 notify.go:220] Checking for updates...
	I0916 11:08:30.299671  264436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:08:30.301189  264436 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:08:30.302491  264436 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:08:30.303773  264436 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:08:30.305030  264436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:08:30.306912  264436 config.go:182] Loaded profile config "cert-expiration-021107": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:08:30.307059  264436 config.go:182] Loaded profile config "kubernetes-upgrade-311911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:08:30.307222  264436 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:08:30.307352  264436 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:08:30.342404  264436 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:08:30.342617  264436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:08:30.412580  264436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-16 11:08:30.399549033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:08:30.412784  264436 docker.go:318] overlay module found
	I0916 11:08:30.414974  264436 out.go:177] * Using the docker driver based on user configuration
	I0916 11:08:30.416257  264436 start.go:297] selected driver: docker
	I0916 11:08:30.416276  264436 start.go:901] validating driver "docker" against <nil>
	I0916 11:08:30.416296  264436 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:08:30.417426  264436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:08:30.481659  264436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-16 11:08:30.467819434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:08:30.481930  264436 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:08:30.482367  264436 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:08:30.484332  264436 out.go:177] * Using Docker driver with root privileges
	I0916 11:08:30.485686  264436 cni.go:84] Creating CNI manager for ""
	I0916 11:08:30.485767  264436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:08:30.485786  264436 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:08:30.485897  264436 start.go:340] cluster config:
	{Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:08:30.487638  264436 out.go:177] * Starting "no-preload-349453" primary control-plane node in "no-preload-349453" cluster
	I0916 11:08:30.489182  264436 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:08:30.490994  264436 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:08:30.492484  264436 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:08:30.492588  264436 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:08:30.492646  264436 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/config.json ...
	I0916 11:08:30.492678  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/config.json: {Name:mk7f1330c6b2d92e29945227c336833ff6ffb7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:30.492798  264436 cache.go:107] acquiring lock: {Name:mk505f3dd823c459cfb83f2d2a39affe63c4c40e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.492789  264436 cache.go:107] acquiring lock: {Name:mk0f2d9e0670c46fe9eb165a8119acf30531a2e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.492888  264436 cache.go:107] acquiring lock: {Name:mk0b25b3ebef8c92ed85c693112bf4f2b400d9b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.492912  264436 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 11:08:30.492874  264436 cache.go:107] acquiring lock: {Name:mkd9c658f7569779b8a27d53e97cc0f70f55a845 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.492875  264436 cache.go:107] acquiring lock: {Name:mkb7cb231873e7918d3e306be4ec4f6091d91485 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.492929  264436 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 136.837µs
	I0916 11:08:30.492947  264436 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:30.492963  264436 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0916 11:08:30.492986  264436 cache.go:107] acquiring lock: {Name:mk8275b1fd51b04034df297d05c3d74274567a6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.493018  264436 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:30.493066  264436 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:30.493091  264436 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:30.493102  264436 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:30.493234  264436 cache.go:107] acquiring lock: {Name:mkd90d764df5e26e345f1c24540d37a0e89a5b18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.493259  264436 cache.go:107] acquiring lock: {Name:mk612053845ede903900e7b583df14a07089be08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.493328  264436 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:30.493343  264436 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0916 11:08:30.494117  264436 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:30.494618  264436 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:30.494682  264436 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:30.494622  264436 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:30.494909  264436 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:30.494695  264436 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0916 11:08:30.496479  264436 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	W0916 11:08:30.521360  264436 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:08:30.521384  264436 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:08:30.521484  264436 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:08:30.521512  264436 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:08:30.521521  264436 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:08:30.521530  264436 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:08:30.521538  264436 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:08:30.581569  264436 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:08:30.581616  264436 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:08:30.581661  264436 start.go:360] acquireMachinesLock for no-preload-349453: {Name:mk8558ad422c1a28af392329b5800e6b7ec410a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.581784  264436 start.go:364] duration metric: took 104.124µs to acquireMachinesLock for "no-preload-349453"
	I0916 11:08:30.581916  264436 start.go:93] Provisioning new machine with config: &{Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:08:30.582030  264436 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:08:32.243803  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:32.243852  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:31.292696  260870 containerd.go:563] duration metric: took 1.167769285s to copy over tarball
	I0916 11:08:31.292764  260870 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 11:08:33.986408  260870 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.693618841s)
	I0916 11:08:33.986435  260870 containerd.go:570] duration metric: took 2.693711801s to extract the tarball
	I0916 11:08:33.986442  260870 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 11:08:34.058024  260870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:08:34.129814  260870 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 11:08:34.239782  260870 ssh_runner.go:195] Run: sudo crictl images --output json
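
The sequence above is the standard containerd preload path: copy the lz4 tarball into the node, unpack it under /var, bounce containerd, then ask crictl what the runtime now sees. Run by hand inside the node, the same steps look like this (commands and paths exactly as in the log):

    # unpack the preloaded image store, preserving capability xattrs
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    # restart containerd so it picks up the extracted content store
    sudo systemctl daemon-reload
    sudo systemctl restart containerd
    # list what the runtime now reports
    sudo crictl images --output json

When the expected v1.20.0 tags do not show up in that listing, as happens on the next line, the code falls back to loading each image from the host-side cache.
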
	I0916 11:08:34.273790  260870 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:08:34.273814  260870 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:08:34.273863  260870 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:34.273888  260870 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.273911  260870 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.273925  260870 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.273939  260870 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.273984  260870 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.273983  260870 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0916 11:08:34.273894  260870 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.275457  260870 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.275470  260870 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.275487  260870 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0916 11:08:34.275487  260870 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.275498  260870 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.275465  260870 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:34.275780  260870 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.275781  260870 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.466060  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.4.13-0" and sha "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934"
	I0916 11:08:34.466124  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.488460  260870 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0916 11:08:34.488504  260870 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.488539  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.492122  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.498533  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.20.0" and sha "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080"
	I0916 11:08:34.498612  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.502891  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	I0916 11:08:34.502966  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.2
	I0916 11:08:34.507568  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.20.0" and sha "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	I0916 11:08:34.507620  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.528734  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.20.0" and sha "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899"
	I0916 11:08:34.528802  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.532124  260870 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0916 11:08:34.532165  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.532165  260870 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.532250  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.533288  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns:1.7.0" and sha "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"
	I0916 11:08:34.533345  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.533812  260870 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0916 11:08:34.533878  260870 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0916 11:08:34.533919  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.537025  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.20.0" and sha "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
	I0916 11:08:34.537100  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.557448  260870 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0916 11:08:34.557464  260870 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0916 11:08:34.557501  260870 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.557501  260870 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.557547  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.557547  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.568864  260870 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0916 11:08:34.568898  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.568915  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:08:34.568916  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.568924  260870 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.568944  260870 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0916 11:08:34.568958  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.568969  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.568978  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.568978  260870 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.569018  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.729417  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.729479  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0916 11:08:34.729539  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.729542  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:08:34.729639  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.729679  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.729692  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.846706  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.849695  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.849746  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:08:34.849751  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.849830  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.849855  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:35.032207  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:35.032853  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0916 11:08:35.037891  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0916 11:08:35.037932  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0916 11:08:35.038023  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0916 11:08:35.038051  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:08:35.068211  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0916 11:08:35.124935  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
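
Every "needs transfer" verdict in the block above comes from the same three-step check, repeated per image: list the tag in containerd's k8s.io namespace, compare the stored digest against the expected one, and remove the stale tag so the cached tarball can replace it. For a single image the manual equivalent is (commands and digest copied from the log):

    # is the tag present, and with which digest?
    sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.4.13-0
    # missing or mismatched -> delete the tag through the CRI tool
    sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
    # the loader then falls back to the host cache file:
    #   /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0

The same rmi shows up several times per image above; the call is idempotent, so the repeats are harmless.
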
	I0916 11:08:30.584062  264436 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:08:30.584349  264436 start.go:159] libmachine.API.Create for "no-preload-349453" (driver="docker")
	I0916 11:08:30.584376  264436 client.go:168] LocalClient.Create starting
	I0916 11:08:30.584454  264436 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 11:08:30.584501  264436 main.go:141] libmachine: Decoding PEM data...
	I0916 11:08:30.584522  264436 main.go:141] libmachine: Parsing certificate...
	I0916 11:08:30.584586  264436 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 11:08:30.584611  264436 main.go:141] libmachine: Decoding PEM data...
	I0916 11:08:30.584626  264436 main.go:141] libmachine: Parsing certificate...
	I0916 11:08:30.585045  264436 cli_runner.go:164] Run: docker network inspect no-preload-349453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:08:30.610640  264436 cli_runner.go:211] docker network inspect no-preload-349453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:08:30.610749  264436 network_create.go:284] running [docker network inspect no-preload-349453] to gather additional debugging logs...
	I0916 11:08:30.610897  264436 cli_runner.go:164] Run: docker network inspect no-preload-349453
	W0916 11:08:30.633247  264436 cli_runner.go:211] docker network inspect no-preload-349453 returned with exit code 1
	I0916 11:08:30.633283  264436 network_create.go:287] error running [docker network inspect no-preload-349453]: docker network inspect no-preload-349453: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-349453 not found
	I0916 11:08:30.633310  264436 network_create.go:289] output of [docker network inspect no-preload-349453]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-349453 not found
	
	** /stderr **
	I0916 11:08:30.633427  264436 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:08:30.661732  264436 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c95c64bb41bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bd:76:ab:c4} reservation:<nil>}
	I0916 11:08:30.663027  264436 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fad43aa9929b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:3f:61:fd} reservation:<nil>}
	I0916 11:08:30.664348  264436 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-49585fce923a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ad:45:94:54} reservation:<nil>}
	I0916 11:08:30.665251  264436 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-45dc384def28 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:95:3e:48:c3} reservation:<nil>}
	I0916 11:08:30.666118  264436 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b7c76f2e9a1f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:4a:59:5d:75} reservation:<nil>}
	I0916 11:08:30.667352  264436 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014118f0}
	I0916 11:08:30.667386  264436 network_create.go:124] attempt to create docker network no-preload-349453 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0916 11:08:30.667448  264436 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-349453 no-preload-349453
	I0916 11:08:30.736241  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0916 11:08:30.758180  264436 network_create.go:108] docker network no-preload-349453 192.168.94.0/24 created
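
The subnet scan above walks the private 192.168.x.0/24 ladder until it finds a block no existing bridge claims, then creates the profile network exactly as run at 11:08:30.667448; the equivalent manual invocation is:

    docker network create --driver=bridge \
      --subnet=192.168.94.0/24 --gateway=192.168.94.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=no-preload-349453 \
      no-preload-349453
    docker network inspect no-preload-349453   # now succeeds

The node is then pinned to the first client address in that block, 192.168.94.2.
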
	I0916 11:08:30.758216  264436 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-349453" container
	I0916 11:08:30.758297  264436 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:08:30.767506  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0916 11:08:30.770224  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0916 11:08:30.784652  264436 cli_runner.go:164] Run: docker volume create no-preload-349453 --label name.minikube.sigs.k8s.io=no-preload-349453 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:08:30.787645  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0916 11:08:30.789687  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0916 11:08:30.791298  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0916 11:08:30.809926  264436 oci.go:103] Successfully created a docker volume no-preload-349453
	I0916 11:08:30.810088  264436 cli_runner.go:164] Run: docker run --rm --name no-preload-349453-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-349453 --entrypoint /usr/bin/test -v no-preload-349453:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:08:30.986670  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0916 11:08:30.986704  264436 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 493.451965ms
	I0916 11:08:30.986721  264436 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0916 11:08:30.992662  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0916 11:08:31.459004  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0916 11:08:31.459044  264436 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 966.158295ms
	I0916 11:08:31.459071  264436 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0916 11:08:32.902149  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0916 11:08:32.902263  264436 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 2.409439664s
	I0916 11:08:32.902288  264436 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0916 11:08:32.954934  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0916 11:08:32.955019  264436 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 2.462197691s
	I0916 11:08:32.955043  264436 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0916 11:08:32.982491  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0916 11:08:32.982539  264436 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 2.489760683s
	I0916 11:08:32.982557  264436 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0916 11:08:33.008590  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0916 11:08:33.008619  264436 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 2.515390278s
	I0916 11:08:33.008636  264436 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0916 11:08:33.364029  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0916 11:08:33.364061  264436 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 2.871077786s
	I0916 11:08:33.364074  264436 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0916 11:08:33.364098  264436 cache.go:87] Successfully saved all images to host disk.
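
All seven tarballs now exist on the host; the layout those exists-checks verify is one file per image, tag rewritten with underscores, under the architecture-specific cache directory. A listing would show roughly (approximate; coredns sits one directory deeper):

    ls /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/
    # coredns/  etcd_3.5.15-0  kube-apiserver_v1.31.1  kube-controller-manager_v1.31.1
    # kube-proxy_v1.31.1  kube-scheduler_v1.31.1  pause_3.10
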
	I0916 11:08:35.392285  260870 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0916 11:08:35.392370  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:35.438527  260870 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0916 11:08:35.438576  260870 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:35.438615  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:35.442067  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:35.527055  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 11:08:35.527210  260870 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:35.531022  260870 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0916 11:08:35.531056  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0916 11:08:35.609317  260870 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:35.609393  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:36.042074  260870 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
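
Loading from that cache is a stat-scp-import sequence, visible above for storage-provisioner: check whether the tarball is already staged on the node, copy it across if not (the ssh_runner.go:362 scp line), then hand it to containerd's importer. On the node:

    # already staged? (exits 1 on first start, exactly as logged)
    stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
    # after the cache file has been scp'd to that path, import it:
    sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5

etcd_3.4.13-0 never reaches this step: its cache file is missing on the host, which is the "Unable to load cached images" warning printed just below.
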
	I0916 11:08:36.042130  260870 cache_images.go:92] duration metric: took 1.768300894s to LoadCachedImages
	W0916 11:08:36.042205  260870 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I0916 11:08:36.042220  260870 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.20.0 containerd true true} ...
	I0916 11:08:36.042328  260870 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-371039 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-371039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:08:36.042388  260870 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:08:36.087682  260870 cni.go:84] Creating CNI manager for ""
	I0916 11:08:36.087706  260870 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:08:36.087715  260870 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:08:36.087732  260870 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-371039 NodeName:old-k8s-version-371039 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 11:08:36.087889  260870 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-371039"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:08:36.087956  260870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 11:08:36.096824  260870 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:08:36.096888  260870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:08:36.105501  260870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (443 bytes)
	I0916 11:08:36.123886  260870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:08:36.142412  260870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2128 bytes)
	I0916 11:08:36.160845  260870 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:08:36.164496  260870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
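
The hosts pin at 11:08:36.164496 is idempotent: the grep on the line before it checks for an existing entry, and the rewrite strips any stale control-plane.minikube.internal line before appending the current IP, staged through a temp file so /etc/hosts is swapped in with a single sudo cp. Unpacked from the one-liner:

    # drop any old pin, append the fresh one, then swap the file in
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo "192.168.103.2	control-plane.minikube.internal"
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
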
	I0916 11:08:36.175171  260870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:08:36.270265  260870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:08:36.288432  260870 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039 for IP: 192.168.103.2
	I0916 11:08:36.288456  260870 certs.go:194] generating shared ca certs ...
	I0916 11:08:36.288476  260870 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.288648  260870 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:08:36.288704  260870 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:08:36.288714  260870 certs.go:256] generating profile certs ...
	I0916 11:08:36.288781  260870 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.key
	I0916 11:08:36.288802  260870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt with IP's: []
	I0916 11:08:36.405455  260870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt ...
	I0916 11:08:36.405492  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: {Name:mk82ea8fcc0c34a14f2e7e173fd4907cf9b8e3c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.405667  260870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.key ...
	I0916 11:08:36.405681  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.key: {Name:mkae0b2fcb25419f4a74135b55a637382d7b9ec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.405759  260870 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key.2be0dd44
	I0916 11:08:36.405776  260870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt.2be0dd44 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 11:08:36.459262  260870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt.2be0dd44 ...
	I0916 11:08:36.459292  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt.2be0dd44: {Name:mk62a33feea446132b32229b845b6bb967faebe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.459439  260870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key.2be0dd44 ...
	I0916 11:08:36.459453  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key.2be0dd44: {Name:mka88753a9e7441e98fdbaa3acff880db3ae57f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.459521  260870 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt.2be0dd44 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt
	I0916 11:08:36.459592  260870 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key.2be0dd44 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key
	I0916 11:08:36.459649  260870 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.key
	I0916 11:08:36.459664  260870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.crt with IP's: []
	I0916 11:08:36.713401  260870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.crt ...
	I0916 11:08:36.713429  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.crt: {Name:mk0c69e2fe4df3505f52bc05b74e3cc3c5f14ccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.713612  260870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.key ...
	I0916 11:08:36.713633  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.key: {Name:mk505306792a7323c50fbaa6bfa6d39fd8ceb8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
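
The client, apiserver, and aggregator profile certs above are generated in Go (crypto.go), each signed by the shared minikubeCA; the apiserver cert carries the four SANs listed at 11:08:36.405776. A rough openssl equivalent of that one cert, for illustration only (minikube does not shell out to openssl, and the subject line here is a placeholder):

    openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key -out apiserver.csr -subj "/CN=minikube"
    openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -out apiserver.crt -days 1095 \
      -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.103.2")

The 1095-day lifetime matches the CertExpiration:26280h0m0s in the cluster config.
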
	I0916 11:08:36.713831  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:08:36.713869  260870 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:08:36.713876  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:08:36.713896  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:08:36.713920  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:08:36.713946  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:08:36.713982  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:08:36.714511  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:08:36.739372  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:08:36.765128  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:08:36.793852  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:08:36.818818  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 11:08:36.842012  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:08:36.865358  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:08:36.889258  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:08:36.913024  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:08:36.939986  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:08:36.963336  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:08:36.986859  260870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:08:37.003708  260870 ssh_runner.go:195] Run: openssl version
	I0916 11:08:37.009148  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:08:37.018295  260870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:08:37.021964  260870 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:08:37.022022  260870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:08:37.029281  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 11:08:37.038624  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:08:37.048291  260870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:08:37.052395  260870 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:08:37.052464  260870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:08:37.060420  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:08:37.071458  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:08:37.082693  260870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:37.086499  260870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:37.086575  260870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:37.093458  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
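
The 51391683.0, 3ec20f2e.0, and b5213941.0 links created above follow OpenSSL's subject-hash naming: the system trust store looks CAs up by the hash of their subject, so each installed PEM needs a <hash>.0 symlink beside it. The pattern, shown for minikubeCA:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
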
	I0916 11:08:37.103273  260870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:08:37.106445  260870 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:08:37.106492  260870 kubeadm.go:392] StartCluster: {Name:old-k8s-version-371039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-371039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:08:37.106586  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:08:37.106636  260870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:08:37.155847  260870 cri.go:89] found id: ""
	I0916 11:08:37.155918  260870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:08:37.164683  260870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:08:37.173264  260870 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:08:37.173334  260870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:08:37.181678  260870 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:08:37.181704  260870 kubeadm.go:157] found existing configuration files:
	
	I0916 11:08:37.181753  260870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:08:37.190209  260870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:08:37.190268  260870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:08:37.198604  260870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:08:37.207009  260870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:08:37.207069  260870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:08:37.215349  260870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:08:37.224252  260870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:08:37.224316  260870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:08:37.233091  260870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:08:37.241423  260870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:08:37.241484  260870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
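
The four grep-then-rm exchanges above are one stale-config sweep: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed so kubeadm init can regenerate it. Collapsed into a loop:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
        || sudo rm -f /etc/kubernetes/$f.conf
    done

On this first start all four greps exit with status 2 because the files do not exist yet, so the rm -f calls have nothing to delete.
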
	I0916 11:08:37.249898  260870 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:08:37.306344  260870 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0916 11:08:37.306396  260870 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:08:37.343524  260870 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:08:37.343631  260870 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:08:37.343685  260870 kubeadm.go:310] OS: Linux
	I0916 11:08:37.343789  260870 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:08:37.343874  260870 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:08:37.343965  260870 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:08:37.344046  260870 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:08:37.344122  260870 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:08:37.344202  260870 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:08:37.344274  260870 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:08:37.344353  260870 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:08:37.433846  260870 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:08:37.434024  260870 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:08:37.434226  260870 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 11:08:37.627977  260870 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:08:37.244785  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:37.244822  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:37.548910  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:53692->192.168.76.2:8443: read: connection reset by peer
	I0916 11:08:35.539780  264436 cli_runner.go:217] Completed: docker run --rm --name no-preload-349453-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-349453 --entrypoint /usr/bin/test -v no-preload-349453:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (4.729567672s)
	I0916 11:08:35.539815  264436 oci.go:107] Successfully prepared a docker volume no-preload-349453
	I0916 11:08:35.539835  264436 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	W0916 11:08:35.539966  264436 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:08:35.540080  264436 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:08:35.601426  264436 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-349453 --name no-preload-349453 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-349453 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-349453 --network no-preload-349453 --ip 192.168.94.2 --volume no-preload-349453:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:08:35.950506  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Running}}
	I0916 11:08:35.975787  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:08:35.997694  264436 cli_runner.go:164] Run: docker exec no-preload-349453 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:08:36.047229  264436 oci.go:144] the created container "no-preload-349453" has a running status.
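
The container itself comes from the long docker run at 11:08:35.601426; the load-bearing flags are the static IP on the per-profile network, the /var volume the preload sidecar prepared, the resource caps from the machine config, and the loopback port publishes (22 for SSH and 8443 for the API server, among others). Trimmed to those essentials:

    docker run -d -t --privileged --security-opt seccomp=unconfined \
      --network no-preload-349453 --ip 192.168.94.2 \
      --volume no-preload-349453:/var \
      --memory=2200mb --cpus=2 \
      --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
      --hostname no-preload-349453 --name no-preload-349453 \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0

The docker exec stat on /var/lib/dpkg/alternatives/iptables just above serves as a quick sanity check on the rootfs before the container is declared running.
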
	I0916 11:08:36.047269  264436 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa...
	I0916 11:08:36.201725  264436 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:08:36.232588  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:08:36.251268  264436 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:08:36.251296  264436 kic_runner.go:114] Args: [docker exec --privileged no-preload-349453 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:08:36.308796  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
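
SSH access into that container is bootstrapped in the three kic steps above: generate a host-side keypair, write the public half into /home/docker/.ssh/authorized_keys inside the container, and fix its ownership. Roughly equivalent by hand (minikube streams the file over docker exec rather than using docker cp):

    ssh-keygen -t rsa -N "" -f /home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa
    docker cp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa.pub \
      no-preload-349453:/home/docker/.ssh/authorized_keys
    docker exec --privileged no-preload-349453 chown docker:docker /home/docker/.ssh/authorized_keys

The native SSH client then dials 127.0.0.1:33063, the host port Docker mapped to the container's 22/tcp.
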
	I0916 11:08:36.359437  264436 machine.go:93] provisionDockerMachine start ...
	I0916 11:08:36.359543  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:36.385658  264436 main.go:141] libmachine: Using SSH client type: native
	I0916 11:08:36.385896  264436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0916 11:08:36.385910  264436 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:08:36.568192  264436 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-349453
	
	I0916 11:08:36.568220  264436 ubuntu.go:169] provisioning hostname "no-preload-349453"
	I0916 11:08:36.568291  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:36.590804  264436 main.go:141] libmachine: Using SSH client type: native
	I0916 11:08:36.591032  264436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0916 11:08:36.591049  264436 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-349453 && echo "no-preload-349453" | sudo tee /etc/hostname
	I0916 11:08:36.756044  264436 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-349453
	
	I0916 11:08:36.756141  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:36.777822  264436 main.go:141] libmachine: Using SSH client type: native
	I0916 11:08:36.778002  264436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0916 11:08:36.778020  264436 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-349453' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-349453/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-349453' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:08:36.911965  264436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:08:36.911996  264436 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:08:36.912019  264436 ubuntu.go:177] setting up certificates
	I0916 11:08:36.912033  264436 provision.go:84] configureAuth start
	I0916 11:08:36.912089  264436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:08:36.932315  264436 provision.go:143] copyHostCerts
	I0916 11:08:36.932386  264436 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:08:36.932399  264436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:08:36.932471  264436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:08:36.932569  264436 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:08:36.932580  264436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:08:36.932621  264436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:08:36.932706  264436 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:08:36.932717  264436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:08:36.932753  264436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:08:36.932828  264436 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.no-preload-349453 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-349453]
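
configureAuth signs a server certificate against the local minikube CA with the org and SANs listed in the line above. An approximate openssl equivalent (minikube does this in Go; these flags are illustrative only):

    # hypothetical openssl rendering of the server cert generated above
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
        -subj "/O=jenkins.no-preload-349453"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -out server.pem -days 365 \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.94.2,DNS:localhost,DNS:minikube,DNS:no-preload-349453')
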
	I0916 11:08:37.209883  264436 provision.go:177] copyRemoteCerts
	I0916 11:08:37.209938  264436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:08:37.209969  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:37.228662  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:08:37.329001  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:08:37.353063  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 11:08:37.377321  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 11:08:37.402804  264436 provision.go:87] duration metric: took 490.759265ms to configureAuth
	I0916 11:08:37.402834  264436 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:08:37.403023  264436 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:08:37.403037  264436 machine.go:96] duration metric: took 1.043574485s to provisionDockerMachine
	I0916 11:08:37.403043  264436 client.go:171] duration metric: took 6.81866199s to LocalClient.Create
	I0916 11:08:37.403064  264436 start.go:167] duration metric: took 6.818716316s to libmachine.API.Create "no-preload-349453"
	I0916 11:08:37.403076  264436 start.go:293] postStartSetup for "no-preload-349453" (driver="docker")
	I0916 11:08:37.403088  264436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:08:37.403140  264436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:08:37.403174  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:37.422611  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:08:37.517150  264436 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:08:37.520935  264436 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:08:37.520967  264436 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:08:37.520979  264436 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:08:37.520988  264436 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:08:37.520999  264436 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:08:37.521061  264436 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:08:37.521153  264436 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:08:37.521276  264436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:08:37.530028  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:08:37.556224  264436 start.go:296] duration metric: took 153.132782ms for postStartSetup
	I0916 11:08:37.556638  264436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:08:37.580790  264436 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/config.json ...
	I0916 11:08:37.581157  264436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:08:37.581227  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:37.603557  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:08:37.696690  264436 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:08:37.700950  264436 start.go:128] duration metric: took 7.118902099s to createHost
	I0916 11:08:37.700981  264436 start.go:83] releasing machines lock for "no-preload-349453", held for 7.119184519s
	I0916 11:08:37.701048  264436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:08:37.719562  264436 ssh_runner.go:195] Run: cat /version.json
	I0916 11:08:37.719628  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:37.719633  264436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:08:37.719749  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:37.738079  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:08:37.739424  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:08:37.834189  264436 ssh_runner.go:195] Run: systemctl --version
	I0916 11:08:37.922817  264436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:08:37.927917  264436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:08:37.952584  264436 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:08:37.952658  264436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:08:37.983959  264436 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
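
The two find invocations above first patch any loopback CNI config in place (injecting a "name" field and pinning cniVersion to 1.0.0, which newer containerd expects), then rename bridge and podman configs to *.mk_disabled so they cannot conflict with the CNI minikube installs later. Approximately, for a loopback file whose name is assumed here:

    # before:  { "cniVersion": "0.3.1", "type": "loopback" }
    # after:   { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
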
	I0916 11:08:37.983991  264436 start.go:495] detecting cgroup driver to use...
	I0916 11:08:37.984035  264436 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:08:37.984084  264436 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:08:37.996632  264436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:08:38.008687  264436 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:08:38.008749  264436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:08:38.022160  264436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:08:38.035383  264436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:08:38.121722  264436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:08:38.206523  264436 docker.go:233] disabling docker service ...
	I0916 11:08:38.206610  264436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:08:38.227941  264436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:08:38.240500  264436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:08:38.314496  264436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:08:38.393479  264436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:08:38.405005  264436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:08:38.420776  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:08:38.431358  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:08:38.441360  264436 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:08:38.441418  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:08:38.451477  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:08:38.461117  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:08:38.470893  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:08:38.481242  264436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:08:38.490694  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:08:38.500709  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:08:38.510200  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 11:08:38.519856  264436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:08:38.530496  264436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:08:38.539419  264436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:08:38.617864  264436 ssh_runner.go:195] Run: sudo systemctl restart containerd
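
The run of sed edits above rewrites /etc/containerd/config.toml so containerd agrees with the kubelet on the cgroupfs driver, the pause image, and the CNI conf directory, then restarts the daemon. One quick way to confirm the result on the node (a sketch; expected values inferred from the sed commands above):

    sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    # expected, approximately:
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   enable_unprivileged_ports = true
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
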
	I0916 11:08:38.714406  264436 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:08:38.714480  264436 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:08:38.718630  264436 start.go:563] Will wait 60s for crictl version
	I0916 11:08:38.718678  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:38.722108  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:08:38.756823  264436 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:08:38.756917  264436 ssh_runner.go:195] Run: containerd --version
	I0916 11:08:38.780335  264436 ssh_runner.go:195] Run: containerd --version
	I0916 11:08:38.807827  264436 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:08:37.630791  260870 out.go:235]   - Generating certificates and keys ...
	I0916 11:08:37.630901  260870 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:08:37.630988  260870 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:08:37.916130  260870 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:08:38.019360  260870 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:08:38.158112  260870 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:08:38.636583  260870 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:08:39.235249  260870 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:08:39.235559  260870 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-371039] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:08:39.445341  260870 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:08:39.445561  260870 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-371039] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:08:39.651806  260870 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:08:39.784722  260870 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:08:39.962483  260870 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:08:39.962681  260870 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:08:38.809241  264436 cli_runner.go:164] Run: docker network inspect no-preload-349453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:08:38.826659  264436 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0916 11:08:38.830468  264436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:08:38.840961  264436 kubeadm.go:883] updating cluster {Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:08:38.841074  264436 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:08:38.841123  264436 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:08:38.880915  264436 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 11:08:38.880944  264436 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:08:38.881004  264436 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:38.881044  264436 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:38.881075  264436 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:38.881092  264436 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0916 11:08:38.881101  264436 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:38.881114  264436 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:38.881057  264436 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:38.881079  264436 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:38.882295  264436 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:38.882294  264436 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:38.882392  264436 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0916 11:08:38.882555  264436 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:38.882579  264436 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:38.882584  264436 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:38.882604  264436 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:38.882640  264436 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.057574  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.11.3" and sha "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6"
	I0916 11:08:39.057644  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:39.079273  264436 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0916 11:08:39.079331  264436 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:39.079378  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.082866  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:39.087405  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.31.1" and sha "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561"
	I0916 11:08:39.087451  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10" and sha "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"
	I0916 11:08:39.087473  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.087504  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10
	I0916 11:08:39.098221  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.31.1" and sha "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee"
	I0916 11:08:39.098303  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:39.099842  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.31.1" and sha "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b"
	I0916 11:08:39.099923  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:39.104576  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.31.1" and sha "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1"
	I0916 11:08:39.104653  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:39.112051  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.5.15-0" and sha "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4"
	I0916 11:08:39.112113  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:39.134734  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:39.134733  264436 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I0916 11:08:39.134813  264436 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0916 11:08:39.134858  264436 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0916 11:08:39.134908  264436 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0916 11:08:39.134931  264436 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:39.134948  264436 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0916 11:08:39.134970  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.134979  264436 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:39.134864  264436 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.135036  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.135077  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.134913  264436 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:39.135127  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.134827  264436 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I0916 11:08:39.135203  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.143907  264436 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0916 11:08:39.143963  264436 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:39.144023  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.169982  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:39.170019  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:39.170040  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:39.170093  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:08:39.170098  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.170142  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:39.170202  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:39.354583  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0916 11:08:39.354683  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:08:39.354784  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:08:39.354865  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:39.354955  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:39.355274  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:39.355389  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.355478  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:39.541651  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.3': No such file or directory
	I0916 11:08:39.541683  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:39.541688  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 --> /var/lib/minikube/images/coredns_v1.11.3 (18571264 bytes)
	I0916 11:08:39.541724  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:08:39.541800  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:39.541868  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:39.541804  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:39.541947  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.775749  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0916 11:08:39.775784  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0916 11:08:39.775871  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:08:39.775871  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I0916 11:08:39.775955  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0916 11:08:39.775968  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0916 11:08:39.775918  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0916 11:08:39.776028  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0916 11:08:39.776041  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0916 11:08:39.776053  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:08:39.776071  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:08:39.776108  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:08:39.802405  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.31.1': No such file or directory
	I0916 11:08:39.802441  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 --> /var/lib/minikube/images/kube-apiserver_v1.31.1 (28057088 bytes)
	I0916 11:08:39.802507  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.31.1': No such file or directory
	I0916 11:08:39.802523  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 --> /var/lib/minikube/images/kube-scheduler_v1.31.1 (20187136 bytes)
	I0916 11:08:39.803116  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.31.1': No such file or directory
	I0916 11:08:39.803143  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 --> /var/lib/minikube/images/kube-proxy_v1.31.1 (30214144 bytes)
	I0916 11:08:39.824892  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I0916 11:08:39.824933  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I0916 11:08:39.825041  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.31.1': No such file or directory
	I0916 11:08:39.825061  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 --> /var/lib/minikube/images/kube-controller-manager_v1.31.1 (26231808 bytes)
	I0916 11:08:39.825117  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.15-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.15-0': No such file or directory
	I0916 11:08:39.825133  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 --> /var/lib/minikube/images/etcd_3.5.15-0 (56918528 bytes)
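
Every "existence check ... Process exited with status 1" above is the expected path, not a failure: the cache loader stats each image tarball on the node first and only copies it over when the stat fails. The pattern, as a shell sketch (the "node:" host alias is hypothetical):

    # stat-then-copy pattern used for each cached image above
    img=/var/lib/minikube/images/etcd_3.5.15-0
    if ! stat -c "%s %y" "$img" >/dev/null 2>&1; then
        scp "$HOME/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" "node:$img"
    fi
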
	I0916 11:08:39.959272  264436 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10
	I0916 11:08:39.959408  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10
	I0916 11:08:40.023367  264436 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0916 11:08:40.023457  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:40.164705  264436 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0916 11:08:40.164748  264436 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:40.164791  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:40.164996  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I0916 11:08:40.165039  264436 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:08:40.165080  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:08:40.197926  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:40.241204  260870 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:08:40.317576  260870 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:08:40.426492  260870 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:08:40.596293  260870 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:08:40.608073  260870 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:08:40.609253  260870 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:08:40.609315  260870 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:08:40.694187  260870 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:08:37.733427  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:37.733912  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:08:38.232918  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:40.696082  260870 out.go:235]   - Booting up control plane ...
	I0916 11:08:40.696191  260870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:08:40.702656  260870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:08:40.704099  260870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:08:40.705275  260870 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:08:40.708468  260870 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 11:08:41.423354  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.11.3: (1.25824846s)
	I0916 11:08:41.423382  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0916 11:08:41.423399  264436 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.225442266s)
	I0916 11:08:41.423474  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:41.423406  264436 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:08:41.423554  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:08:41.458101  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:42.482721  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.059134257s)
	I0916 11:08:42.482753  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0916 11:08:42.482774  264436 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0916 11:08:42.482776  264436 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.024643374s)
	I0916 11:08:42.482817  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 11:08:42.482820  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0916 11:08:42.482894  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:43.495795  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.012950946s)
	I0916 11:08:43.495827  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0916 11:08:43.495859  264436 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:08:43.495876  264436 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.01296017s)
	I0916 11:08:43.495905  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0916 11:08:43.495919  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:08:43.495923  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0916 11:08:44.472580  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0916 11:08:44.472626  264436 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:08:44.472679  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:08:43.233973  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:43.234020  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:45.540795  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.31.1: (1.068091792s)
	I0916 11:08:45.540818  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0916 11:08:45.540840  264436 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:08:45.540887  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:08:47.901181  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.15-0: (2.360264084s)
	I0916 11:08:47.901218  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0916 11:08:47.901243  264436 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:47.901300  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:48.984630  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.083298899s)
	I0916 11:08:48.984663  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0916 11:08:48.984689  264436 cache_images.go:123] Successfully loaded all cached images
	I0916 11:08:48.984695  264436 cache_images.go:92] duration metric: took 10.103732508s to LoadCachedImages
	I0916 11:08:48.984709  264436 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.31.1 containerd true true} ...
	I0916 11:08:48.984835  264436 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-349453 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:08:48.984901  264436 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:08:49.032116  264436 cni.go:84] Creating CNI manager for ""
	I0916 11:08:49.032193  264436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:08:49.032211  264436 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:08:49.032240  264436 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-349453 NodeName:no-preload-349453 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:08:49.032400  264436 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-349453"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
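This rendered config is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below and is ultimately what kubeadm consumes on the node; in its simplest form the invocation is something like the following (flags simplified; the exact command varies by minikube version):

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml
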
	I0916 11:08:49.032472  264436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:08:49.044890  264436 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 11:08:49.045024  264436 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 11:08:49.056347  264436 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 11:08:49.056466  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 11:08:49.056673  264436 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0916 11:08:49.057166  264436 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0916 11:08:49.066816  264436 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 11:08:49.066853  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 11:08:49.943393  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 11:08:49.947835  264436 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 11:08:49.947869  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 11:08:50.181687  264436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:08:50.194184  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 11:08:50.197931  264436 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 11:08:50.197959  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
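
Since a no-preload profile has no preloaded image tarball, the kubectl/kubeadm/kubelet binaries are fetched from dl.k8s.io with a published checksum (the checksum=file:...sha256 query in the download.go lines above) and then scp'd into /var/lib/minikube/binaries. A curl/sha256sum equivalent of that verified download, as a sketch:

    # hedged equivalent of the checksum-verified binary download above
    v=v1.31.1; b=kubelet
    curl -fsSLo "$b" "https://dl.k8s.io/release/$v/bin/linux/amd64/$b"
    echo "$(curl -fsSL "https://dl.k8s.io/release/$v/bin/linux/amd64/$b.sha256")  $b" | sha256sum -c -
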
	I0916 11:08:50.395973  264436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:08:50.404517  264436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0916 11:08:50.422561  264436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:08:50.445036  264436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0916 11:08:50.465483  264436 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:08:50.470084  264436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:08:50.482485  264436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:08:50.547958  264436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:08:50.563251  264436 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453 for IP: 192.168.94.2
	I0916 11:08:50.563273  264436 certs.go:194] generating shared ca certs ...
	I0916 11:08:50.563298  264436 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:50.563456  264436 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:08:50.563505  264436 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:08:50.563517  264436 certs.go:256] generating profile certs ...
	I0916 11:08:50.563627  264436 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.key
	I0916 11:08:50.563648  264436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt with IP's: []
	I0916 11:08:50.618540  264436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt ...
	I0916 11:08:50.618569  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: {Name:mk337746002b2836356861444fb583afa57b1d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:50.618748  264436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.key ...
	I0916 11:08:50.618771  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.key: {Name:mk9c5aa9e774198cfcb02ec0058188ab8edfaed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:50.618845  264436 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key.85f7849d
	I0916 11:08:50.618860  264436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt.85f7849d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0916 11:08:50.875559  264436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt.85f7849d ...
	I0916 11:08:50.875598  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt.85f7849d: {Name:mk481f9ec5bc5101be906a4ddce3a071783b2c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:50.875829  264436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key.85f7849d ...
	I0916 11:08:50.875849  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key.85f7849d: {Name:mk4e723f8d9625ad4b4558240421f0210105e957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:50.875954  264436 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt.85f7849d -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt
	I0916 11:08:50.876051  264436 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key.85f7849d -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key
	I0916 11:08:50.876127  264436 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key
	I0916 11:08:50.876147  264436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.crt with IP's: []
	I0916 11:08:51.303691  264436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.crt ...
	I0916 11:08:51.303759  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.crt: {Name:mk2a9791d1a10304f96ba7678b9c3811d30b3fb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:51.303945  264436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key ...
	I0916 11:08:51.303961  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key: {Name:mka24f10f8b232c8b84bdf799b45958f97693ca9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
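
The "generating signed profile cert" steps above boil down to: build an x509 template with the right IPs and usages, sign it with the shared minikube CA, and write the PEM pair under a file lock. A compressed sketch of the signing step with Go's crypto/x509 follows; key sizes, serials, lifetimes, and the self-generated throwaway CA are illustrative, not minikube's actual values, and error returns are deliberately ignored to keep the sketch short.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Stand-in for loading ca.crt/ca.key from the .minikube directory.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf cert signed by the CA, carrying the service and node IPs from the log.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }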
	I0916 11:08:51.304131  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:08:51.304175  264436 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:08:51.304185  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:08:51.304215  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:08:51.304238  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:08:51.304268  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:08:51.304303  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:08:51.304858  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:08:51.329508  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:08:51.353418  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:08:51.377163  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:08:51.401708  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:08:51.428477  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:08:51.452154  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:08:51.475382  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:08:51.498240  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:08:51.521123  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:08:51.543771  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:08:51.574546  264436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:08:51.591543  264436 ssh_runner.go:195] Run: openssl version
	I0916 11:08:51.597060  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:08:51.606641  264436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:51.610471  264436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:51.610524  264436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:51.617457  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:08:51.626770  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:08:51.636218  264436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:08:51.640059  264436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:08:51.640119  264436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:08:51.646939  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 11:08:51.657727  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:08:51.667722  264436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:08:51.671519  264436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:08:51.671587  264436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:08:51.678428  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
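
The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look up CAs in /etc/ssl/certs (hence symlink names like b5213941.0 and 3ec20f2e.0). A sketch of that shell-out-and-symlink step in Go, with illustrative paths; it shells out to openssl rather than reimplementing the subject-hash algorithm:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	// Recreate the hash symlink so OpenSSL's lookup-by-subject-hash finds the CA.
    	_ = os.Remove(link)
    	if err := os.Symlink(pemPath, link); err != nil {
    		panic(err)
    	}
    	fmt.Println("linked", link, "->", pemPath)
    }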
	I0916 11:08:51.687852  264436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:08:51.691310  264436 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:08:51.691367  264436 kubeadm.go:392] StartCluster: {Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:08:51.691439  264436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:08:51.691486  264436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:08:51.724621  264436 cri.go:89] found id: ""
	I0916 11:08:51.724695  264436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:08:51.734987  264436 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:08:51.744004  264436 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:08:51.744075  264436 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:08:51.755258  264436 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:08:51.755283  264436 kubeadm.go:157] found existing configuration files:
	
	I0916 11:08:51.755333  264436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:08:51.768412  264436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:08:51.768474  264436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:08:51.777349  264436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:08:51.785929  264436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:08:51.786003  264436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:08:51.794532  264436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:08:51.803220  264436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:08:51.803342  264436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:08:51.812093  264436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:08:51.820809  264436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:08:51.820873  264436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:08:51.829429  264436 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:08:51.865931  264436 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:08:51.865989  264436 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:08:51.885115  264436 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:08:51.885236  264436 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:08:51.885298  264436 kubeadm.go:310] OS: Linux
	I0916 11:08:51.885387  264436 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:08:51.885459  264436 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:08:51.885534  264436 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:08:51.885607  264436 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:08:51.885679  264436 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:08:51.885763  264436 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:08:51.885838  264436 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:08:51.885903  264436 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:08:51.885972  264436 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:08:51.941753  264436 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:08:51.941901  264436 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:08:51.942020  264436 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:08:51.947090  264436 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:08:48.234717  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:48.234765  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:51.949776  264436 out.go:235]   - Generating certificates and keys ...
	I0916 11:08:51.949877  264436 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:08:51.949940  264436 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:08:52.122699  264436 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:08:52.249550  264436 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:08:52.352028  264436 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:08:52.445139  264436 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:08:52.652691  264436 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:08:52.652923  264436 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-349453] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0916 11:08:52.751947  264436 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:08:52.752095  264436 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-349453] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0916 11:08:52.932640  264436 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:08:53.294351  264436 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:08:53.505338  264436 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:08:53.505405  264436 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:08:53.576935  264436 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:08:53.665445  264436 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:08:53.781881  264436 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:08:54.142742  264436 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:08:54.452184  264436 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:08:54.452959  264436 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:08:54.456552  264436 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:08:55.210981  260870 kubeadm.go:310] [apiclient] All control plane components are healthy after 14.502545 seconds
	I0916 11:08:55.211125  260870 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:08:54.460019  264436 out.go:235]   - Booting up control plane ...
	I0916 11:08:54.460188  264436 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:08:54.460277  264436 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:08:54.460605  264436 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:08:54.473017  264436 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:08:54.480142  264436 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:08:54.480269  264436 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:08:54.584649  264436 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:08:54.584816  264436 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:08:55.085943  264436 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.441739ms
	I0916 11:08:55.086058  264436 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:08:55.222604  260870 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:08:55.747349  260870 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:08:55.747575  260870 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-371039 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0916 11:08:56.255515  260870 kubeadm.go:310] [bootstrap-token] Using token: 7575lv.7anw6bs48k43jhje
	I0916 11:08:56.257005  260870 out.go:235]   - Configuring RBAC rules ...
	I0916 11:08:56.257190  260870 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:08:56.261944  260870 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:08:56.268917  260870 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:08:56.271036  260870 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:08:56.273203  260870 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:08:56.275371  260870 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:08:56.282938  260870 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:08:56.505496  260870 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:08:56.674523  260870 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:08:56.675435  260870 kubeadm.go:310] 
	I0916 11:08:56.675511  260870 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:08:56.675550  260870 kubeadm.go:310] 
	I0916 11:08:56.675666  260870 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:08:56.675679  260870 kubeadm.go:310] 
	I0916 11:08:56.675769  260870 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:08:56.675860  260870 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:08:56.675953  260870 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:08:56.675963  260870 kubeadm.go:310] 
	I0916 11:08:56.676057  260870 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:08:56.676076  260870 kubeadm.go:310] 
	I0916 11:08:56.676146  260870 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:08:56.676157  260870 kubeadm.go:310] 
	I0916 11:08:56.676232  260870 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:08:56.676346  260870 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:08:56.676449  260870 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:08:56.676459  260870 kubeadm.go:310] 
	I0916 11:08:56.676577  260870 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:08:56.676690  260870 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:08:56.676702  260870 kubeadm.go:310] 
	I0916 11:08:56.676805  260870 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7575lv.7anw6bs48k43jhje \
	I0916 11:08:56.676964  260870 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:08:56.676999  260870 kubeadm.go:310]     --control-plane 
	I0916 11:08:56.677008  260870 kubeadm.go:310] 
	I0916 11:08:56.677141  260870 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:08:56.677154  260870 kubeadm.go:310] 
	I0916 11:08:56.677267  260870 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7575lv.7anw6bs48k43jhje \
	I0916 11:08:56.677407  260870 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 11:08:56.679220  260870 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:08:56.679366  260870 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
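
The --discovery-token-ca-cert-hash printed in the join commands above is sha256 over the DER-encoded Subject Public Key Info of the cluster CA, which lets joining nodes pin the CA without a prior trust channel. Recomputing it from ca.crt, sketched in Go (the file path is illustrative):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm pins sha256 over the CA's Subject Public Key Info.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }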
	I0916 11:08:56.679403  260870 cni.go:84] Creating CNI manager for ""
	I0916 11:08:56.679418  260870 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:08:56.681153  260870 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:08:53.235786  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:53.235837  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:00.087505  264436 kubeadm.go:310] [api-check] The API server is healthy after 5.001488031s
	I0916 11:09:00.098392  264436 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:09:00.110362  264436 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:09:00.128932  264436 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:09:00.129187  264436 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-349453 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:09:00.137036  264436 kubeadm.go:310] [bootstrap-token] Using token: 7hha87.1fmccqtk5mel1d08
	I0916 11:08:56.682324  260870 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:08:56.686207  260870 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.20.0/kubectl ...
	I0916 11:08:56.686225  260870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:08:56.703974  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:08:57.087171  260870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:08:57.087286  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:57.087327  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-371039 minikube.k8s.io/updated_at=2024_09_16T11_08_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=old-k8s-version-371039 minikube.k8s.io/primary=true
	I0916 11:08:57.094951  260870 ops.go:34] apiserver oom_adj: -16
	I0916 11:08:57.203677  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:57.703899  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:58.204371  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:58.703936  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:59.203918  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:59.704356  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:00.204155  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:00.138673  264436 out.go:235]   - Configuring RBAC rules ...
	I0916 11:09:00.138843  264436 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:09:00.143189  264436 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:09:00.149188  264436 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:09:00.151958  264436 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:09:00.154792  264436 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:09:00.158528  264436 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:09:00.493607  264436 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:09:00.933899  264436 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:09:01.494256  264436 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:09:01.495468  264436 kubeadm.go:310] 
	I0916 11:09:01.495563  264436 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:09:01.495578  264436 kubeadm.go:310] 
	I0916 11:09:01.495691  264436 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:09:01.495707  264436 kubeadm.go:310] 
	I0916 11:09:01.495784  264436 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:09:01.495872  264436 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:09:01.495955  264436 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:09:01.495973  264436 kubeadm.go:310] 
	I0916 11:09:01.496023  264436 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:09:01.496031  264436 kubeadm.go:310] 
	I0916 11:09:01.496072  264436 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:09:01.496104  264436 kubeadm.go:310] 
	I0916 11:09:01.496187  264436 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:09:01.496302  264436 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:09:01.496394  264436 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:09:01.496403  264436 kubeadm.go:310] 
	I0916 11:09:01.496503  264436 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:09:01.496612  264436 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:09:01.496625  264436 kubeadm.go:310] 
	I0916 11:09:01.496698  264436 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7hha87.1fmccqtk5mel1d08 \
	I0916 11:09:01.496843  264436 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:09:01.496894  264436 kubeadm.go:310] 	--control-plane 
	I0916 11:09:01.496904  264436 kubeadm.go:310] 
	I0916 11:09:01.497000  264436 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:09:01.497009  264436 kubeadm.go:310] 
	I0916 11:09:01.497108  264436 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7hha87.1fmccqtk5mel1d08 \
	I0916 11:09:01.497239  264436 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 11:09:01.499128  264436 kubeadm.go:310] W0916 11:08:51.862879    1806 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:09:01.499457  264436 kubeadm.go:310] W0916 11:08:51.863553    1806 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:09:01.499768  264436 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:09:01.499953  264436 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:09:01.499988  264436 cni.go:84] Creating CNI manager for ""
	I0916 11:09:01.500000  264436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:09:01.501798  264436 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:08:58.236473  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:58.236522  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:58.646312  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:53610->192.168.76.2:8443: read: connection reset by peer
	I0916 11:08:58.733440  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:58.733905  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:08:59.233571  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:59.234025  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:08:59.733738  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:59.734160  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:00.232786  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:00.233148  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:00.732769  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:00.733156  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:01.232818  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:01.233245  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:01.732791  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:01.733233  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:02.232778  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:02.233205  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:00.704323  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:01.204668  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:01.703878  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:02.204580  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:02.704540  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:03.203853  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:03.703804  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:04.204076  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:04.703894  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:05.204018  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:01.503075  264436 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:09:01.507256  264436 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:09:01.507277  264436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:09:01.524545  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:09:01.727673  264436 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:09:01.727825  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-349453 minikube.k8s.io/updated_at=2024_09_16T11_09_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=no-preload-349453 minikube.k8s.io/primary=true
	I0916 11:09:01.728021  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:01.738577  264436 ops.go:34] apiserver oom_adj: -16
	I0916 11:09:01.822449  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:02.323484  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:02.822776  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:03.322974  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:03.823263  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:04.323195  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:04.822824  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:05.323453  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:05.822962  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:05.891978  264436 kubeadm.go:1113] duration metric: took 4.164004406s to wait for elevateKubeSystemPrivileges
	I0916 11:09:05.892013  264436 kubeadm.go:394] duration metric: took 14.200646498s to StartCluster
	I0916 11:09:05.892048  264436 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:05.892129  264436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:09:05.895884  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:05.896177  264436 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:09:05.896353  264436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:09:05.896448  264436 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:09:05.896535  264436 addons.go:69] Setting storage-provisioner=true in profile "no-preload-349453"
	I0916 11:09:05.896553  264436 addons.go:69] Setting default-storageclass=true in profile "no-preload-349453"
	I0916 11:09:05.896597  264436 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:09:05.896617  264436 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-349453"
	I0916 11:09:05.896562  264436 addons.go:234] Setting addon storage-provisioner=true in "no-preload-349453"
	I0916 11:09:05.896721  264436 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:05.896991  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:05.897173  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:05.899195  264436 out.go:177] * Verifying Kubernetes components...
	I0916 11:09:05.900632  264436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:05.920822  264436 addons.go:234] Setting addon default-storageclass=true in "no-preload-349453"
	I0916 11:09:05.920872  264436 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:05.921227  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:05.922853  264436 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:09:05.924578  264436 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:05.924598  264436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:09:05.924661  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:05.953061  264436 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:05.953083  264436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:09:05.953143  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:05.957772  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:05.975394  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:06.034923  264436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
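
The sed pipeline above injects a "hosts" stanza into CoreDNS's Corefile so host.minikube.internal resolves to the host gateway, placing it before the forward plugin so it wins, with fallthrough for everything else. The equivalent string edit, sketched in Go against an assumed default Corefile (the Corefile text here is illustrative, not the exact ConfigMap contents):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	corefile := `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa
        forward . /etc/resolv.conf
        cache 30
    }`
    	hosts := `    hosts {
           192.168.94.1 host.minikube.internal
           fallthrough
        }
    `
    	var b strings.Builder
    	for _, line := range strings.Split(corefile, "\n") {
    		// Insert the hosts block just before the forward plugin, mirroring the sed above.
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			b.WriteString(hosts)
    		}
    		b.WriteString(line + "\n")
    	}
    	fmt.Print(b.String())
    }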
	I0916 11:09:06.040755  264436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:09:06.143479  264436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:06.240584  264436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:06.536048  264436 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0916 11:09:06.539111  264436 node_ready.go:35] waiting up to 6m0s for node "no-preload-349453" to be "Ready" ...
	I0916 11:09:06.547015  264436 node_ready.go:49] node "no-preload-349453" has status "Ready":"True"
	I0916 11:09:06.547042  264436 node_ready.go:38] duration metric: took 7.901547ms for node "no-preload-349453" to be "Ready" ...
	I0916 11:09:06.547095  264436 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:09:06.555838  264436 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:06.932212  264436 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:09:02.733678  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:02.734077  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:03.233262  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:03.233718  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:03.733114  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:03.733576  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:04.233410  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:04.233881  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:04.733574  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:04.733949  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:05.233532  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:05.233933  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:05.733512  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:05.733953  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:06.233584  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:06.234044  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:06.733637  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:06.734106  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:07.233844  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:07.234332  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
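
The long run of "Checking apiserver healthz ... connection refused" lines above is a bounded retry loop against /healthz while that cluster's control plane restarts; the checker keeps polling until it sees a 200 or the outer timeout fires. A minimal version of that loop in Go (URL, interval, and deadline are placeholders; TLS verification is skipped only because the sketch carries no CA pool, whereas the real check can trust the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// Sketch only: a real check would verify against the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.76.2:8443/healthz")
    		if err == nil && resp.StatusCode == http.StatusOK {
    			resp.Body.Close()
    			fmt.Println("apiserver healthy")
    			return
    		}
    		if err != nil {
    			fmt.Println("stopped:", err) // e.g. connection refused while the apiserver restarts
    		} else {
    			resp.Body.Close()
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for healthz")
    }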
	I0916 11:09:05.703927  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:06.204762  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:06.704365  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:07.204577  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:07.704243  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:08.204636  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:08.703906  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:09.204497  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:09.704711  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:10.204447  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:06.933504  264436 addons.go:510] duration metric: took 1.037058154s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:09:07.040392  264436 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-349453" context rescaled to 1 replicas
	I0916 11:09:08.563999  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:10.703866  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:11.204455  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:11.703883  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:12.204359  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:12.332820  260870 kubeadm.go:1113] duration metric: took 15.245596472s to wait for elevateKubeSystemPrivileges
	I0916 11:09:12.332850  260870 kubeadm.go:394] duration metric: took 35.226361301s to StartCluster
	I0916 11:09:12.332867  260870 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:12.332941  260870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:09:12.334200  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:12.334409  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:09:12.334422  260870 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:09:12.334489  260870 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:09:12.334595  260870 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-371039"
	I0916 11:09:12.334614  260870 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-371039"
	I0916 11:09:12.334633  260870 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:09:12.334646  260870 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-371039"
	I0916 11:09:12.334621  260870 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-371039"
	I0916 11:09:12.334766  260870 host.go:66] Checking if "old-k8s-version-371039" exists ...
	I0916 11:09:12.335022  260870 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:09:12.335157  260870 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:09:12.336297  260870 out.go:177] * Verifying Kubernetes components...
	I0916 11:09:12.337718  260870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:12.357086  260870 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:09:07.733738  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:07.734147  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:08.233742  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:08.234148  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:08.733771  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:08.734260  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:09.233808  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:12.358665  260870 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:12.358689  260870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:09:12.358754  260870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:09:12.359729  260870 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-371039"
	I0916 11:09:12.359827  260870 host.go:66] Checking if "old-k8s-version-371039" exists ...
	I0916 11:09:12.360343  260870 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:09:12.383998  260870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/old-k8s-version-371039/id_rsa Username:docker}
	I0916 11:09:12.389783  260870 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:12.389805  260870 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:09:12.389868  260870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:09:12.408070  260870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/old-k8s-version-371039/id_rsa Username:docker}
	I0916 11:09:12.546475  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:09:12.553288  260870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:09:12.647594  260870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:12.648622  260870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:13.259944  260870 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0916 11:09:13.261675  260870 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-371039" to be "Ready" ...
	I0916 11:09:13.325576  260870 node_ready.go:49] node "old-k8s-version-371039" has status "Ready":"True"
	I0916 11:09:13.325600  260870 node_ready.go:38] duration metric: took 63.887515ms for node "old-k8s-version-371039" to be "Ready" ...
	I0916 11:09:13.325612  260870 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:09:13.335290  260870 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-78djj" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:13.528295  260870 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:09:13.530325  260870 addons.go:510] duration metric: took 1.195834763s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:09:13.764048  260870 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-371039" context rescaled to 1 replicas
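
The "rescaled to 1 replicas" lines above correspond to shrinking the CoreDNS deployment through its Scale subresource. A hedged client-go sketch of that step, with a placeholder kubeconfig path and minimal error handling (illustrative only, not minikube's kapi.go):

// rescale CoreDNS to a single replica via the Deployment Scale subresource.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; the harness points at the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	deployments := client.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns rescaled to 1 replica")
}

That rescale is why one of the two default CoreDNS pods disappears; the later "pods ... not found" entries for the second coredns pod are the expected aftermath.
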
	I0916 11:09:11.062167  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:13.062678  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:15.063223  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:14.234494  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:09:14.234540  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:15.342274  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:17.841129  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:17.560912  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:19.562845  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:19.235598  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:09:19.235680  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:09:19.235754  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:09:19.269692  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:19.269715  254463 cri.go:89] found id: "78f1386658cccbd9f65d9408afecb31c6db52bb84d72237168313ed1e03541f7"
	I0916 11:09:19.269720  254463 cri.go:89] found id: ""
	I0916 11:09:19.269729  254463 logs.go:276] 2 containers: [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd 78f1386658cccbd9f65d9408afecb31c6db52bb84d72237168313ed1e03541f7]
	I0916 11:09:19.269789  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:19.273402  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:19.276885  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:09:19.276963  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:09:19.308719  254463 cri.go:89] found id: ""
	I0916 11:09:19.308746  254463 logs.go:276] 0 containers: []
	W0916 11:09:19.308755  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:09:19.308771  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:09:19.308830  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:09:19.342334  254463 cri.go:89] found id: ""
	I0916 11:09:19.342361  254463 logs.go:276] 0 containers: []
	W0916 11:09:19.342372  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:09:19.342379  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:09:19.342437  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:09:19.375316  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:19.375337  254463 cri.go:89] found id: ""
	I0916 11:09:19.375343  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:09:19.375391  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:19.378835  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:09:19.378904  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:09:19.411345  254463 cri.go:89] found id: ""
	I0916 11:09:19.411370  254463 logs.go:276] 0 containers: []
	W0916 11:09:19.411378  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:09:19.411384  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:09:19.411441  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:09:19.445048  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:19.445068  254463 cri.go:89] found id: "d134862346ef7847a5bec871679eb22f4238875f0baf507bd5b7da3db1391339"
	I0916 11:09:19.445072  254463 cri.go:89] found id: ""
	I0916 11:09:19.445079  254463 logs.go:276] 2 containers: [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9 d134862346ef7847a5bec871679eb22f4238875f0baf507bd5b7da3db1391339]
	I0916 11:09:19.445131  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:19.448637  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:19.451955  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:09:19.452028  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:09:19.485223  254463 cri.go:89] found id: ""
	I0916 11:09:19.485248  254463 logs.go:276] 0 containers: []
	W0916 11:09:19.485257  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:09:19.485263  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:09:19.485337  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:09:19.517574  254463 cri.go:89] found id: ""
	I0916 11:09:19.517608  254463 logs.go:276] 0 containers: []
	W0916 11:09:19.517618  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:09:19.517650  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:09:19.517669  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:09:19.557222  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:09:19.557264  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:09:19.594969  254463 logs.go:123] Gathering logs for kube-apiserver [78f1386658cccbd9f65d9408afecb31c6db52bb84d72237168313ed1e03541f7] ...
	I0916 11:09:19.595000  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78f1386658cccbd9f65d9408afecb31c6db52bb84d72237168313ed1e03541f7"
	I0916 11:09:19.630078  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:09:19.630121  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:19.681369  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:09:19.681400  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
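
The "Gathering logs" steps above discover containers by running crictl inside the node and reading one container ID per line from its --quiet output. A small sketch of that discovery step (the real harness executes the same command over SSH via ssh_runner.go; this version runs locally and assumes sudo and crictl are on PATH):

// listContainers runs `crictl ps -a --quiet --name=<name>` and returns the
// container IDs it prints, one per line.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line) // each non-empty line is a container ID
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	fmt.Println(ids, err)
}

An empty ID list produces the "No container was found matching ..." warnings above, and each found ID is then fed to `crictl logs --tail 400 <id>`.
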
	I0916 11:09:20.341144  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:22.840781  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:24.840812  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:22.060907  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:24.062298  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:26.841099  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:29.341391  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:29.841004  260870 pod_ready.go:93] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:29.841028  260870 pod_ready.go:82] duration metric: took 16.505708515s for pod "coredns-74ff55c5b-78djj" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:29.841039  260870 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-lgf42" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:29.842812  260870 pod_ready.go:98] error getting pod "coredns-74ff55c5b-lgf42" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-lgf42" not found
	I0916 11:09:29.842836  260870 pod_ready.go:82] duration metric: took 1.790096ms for pod "coredns-74ff55c5b-lgf42" in "kube-system" namespace to be "Ready" ...
	E0916 11:09:29.842848  260870 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-lgf42" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-lgf42" not found
	I0916 11:09:29.842857  260870 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:26.562286  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:29.061948  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:30.186175  254463 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.504756872s)
	W0916 11:09:30.186209  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:33322->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:33322->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I0916 11:09:30.186217  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:09:30.186233  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:30.223830  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:09:30.223863  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:30.256977  254463 logs.go:123] Gathering logs for kube-controller-manager [d134862346ef7847a5bec871679eb22f4238875f0baf507bd5b7da3db1391339] ...
	I0916 11:09:30.257004  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d134862346ef7847a5bec871679eb22f4238875f0baf507bd5b7da3db1391339"
	I0916 11:09:30.292614  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:09:30.292649  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:09:30.353308  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:09:30.353345  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:09:31.848871  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:33.849476  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:31.561693  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:34.061654  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:35.061879  264436 pod_ready.go:93] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.061902  264436 pod_ready.go:82] duration metric: took 28.506020354s for pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.061911  264436 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mvlrh" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.063656  264436 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-mvlrh" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mvlrh" not found
	I0916 11:09:35.063679  264436 pod_ready.go:82] duration metric: took 1.762521ms for pod "coredns-7c65d6cfc9-mvlrh" in "kube-system" namespace to be "Ready" ...
	E0916 11:09:35.063692  264436 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-mvlrh" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mvlrh" not found
	I0916 11:09:35.063701  264436 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.068205  264436 pod_ready.go:93] pod "etcd-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.068227  264436 pod_ready.go:82] duration metric: took 4.517527ms for pod "etcd-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.068239  264436 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.072552  264436 pod_ready.go:93] pod "kube-apiserver-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.072576  264436 pod_ready.go:82] duration metric: took 4.327352ms for pod "kube-apiserver-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.072586  264436 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.076783  264436 pod_ready.go:93] pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.076810  264436 pod_ready.go:82] duration metric: took 4.217917ms for pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.076820  264436 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n7m28" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.260357  264436 pod_ready.go:93] pod "kube-proxy-n7m28" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.260383  264436 pod_ready.go:82] duration metric: took 183.557365ms for pod "kube-proxy-n7m28" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.260393  264436 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.660221  264436 pod_ready.go:93] pod "kube-scheduler-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.660246  264436 pod_ready.go:82] duration metric: took 399.846457ms for pod "kube-scheduler-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.660257  264436 pod_ready.go:39] duration metric: took 29.113141917s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:09:35.660274  264436 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:09:35.660348  264436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:09:35.673043  264436 api_server.go:72] duration metric: took 29.776823258s to wait for apiserver process to appear ...
	I0916 11:09:35.673068  264436 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:09:35.673092  264436 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0916 11:09:35.676860  264436 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0916 11:09:35.677763  264436 api_server.go:141] control plane version: v1.31.1
	I0916 11:09:35.677787  264436 api_server.go:131] duration metric: took 4.712796ms to wait for apiserver health ...
	I0916 11:09:35.677800  264436 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:09:35.862606  264436 system_pods.go:59] 8 kube-system pods found
	I0916 11:09:35.862640  264436 system_pods.go:61] "coredns-7c65d6cfc9-9zbwk" [427a37dd-9a56-455f-bd9e-3ee604164481] Running
	I0916 11:09:35.862646  264436 system_pods.go:61] "etcd-no-preload-349453" [11377b15-3f0d-463f-8b6f-8eae19108040] Running
	I0916 11:09:35.862651  264436 system_pods.go:61] "kindnet-qbh58" [b43e889b-3179-48c6-b6c4-e6c42131c50c] Running
	I0916 11:09:35.862655  264436 system_pods.go:61] "kube-apiserver-no-preload-349453" [06a64195-a484-412f-930b-310199ce4d80] Running
	I0916 11:09:35.862660  264436 system_pods.go:61] "kube-controller-manager-no-preload-349453" [f31864d9-a8cc-4d04-8f94-497ae381d9ca] Running
	I0916 11:09:35.862664  264436 system_pods.go:61] "kube-proxy-n7m28" [a0580caf-fdc3-483b-a5b9-f29db25b8ef6] Running
	I0916 11:09:35.862667  264436 system_pods.go:61] "kube-scheduler-no-preload-349453" [94505c46-f9fd-4bc6-8287-27c30e4696f0] Running
	I0916 11:09:35.862672  264436 system_pods.go:61] "storage-provisioner" [2f218f7f-9232-4d85-bd8d-6cdc6516c83f] Running
	I0916 11:09:35.862678  264436 system_pods.go:74] duration metric: took 184.872639ms to wait for pod list to return data ...
	I0916 11:09:35.862685  264436 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:09:36.061081  264436 default_sa.go:45] found service account: "default"
	I0916 11:09:36.061114  264436 default_sa.go:55] duration metric: took 198.421124ms for default service account to be created ...
	I0916 11:09:36.061127  264436 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:09:36.262420  264436 system_pods.go:86] 8 kube-system pods found
	I0916 11:09:36.262457  264436 system_pods.go:89] "coredns-7c65d6cfc9-9zbwk" [427a37dd-9a56-455f-bd9e-3ee604164481] Running
	I0916 11:09:36.262466  264436 system_pods.go:89] "etcd-no-preload-349453" [11377b15-3f0d-463f-8b6f-8eae19108040] Running
	I0916 11:09:36.262471  264436 system_pods.go:89] "kindnet-qbh58" [b43e889b-3179-48c6-b6c4-e6c42131c50c] Running
	I0916 11:09:36.262477  264436 system_pods.go:89] "kube-apiserver-no-preload-349453" [06a64195-a484-412f-930b-310199ce4d80] Running
	I0916 11:09:36.262483  264436 system_pods.go:89] "kube-controller-manager-no-preload-349453" [f31864d9-a8cc-4d04-8f94-497ae381d9ca] Running
	I0916 11:09:36.262489  264436 system_pods.go:89] "kube-proxy-n7m28" [a0580caf-fdc3-483b-a5b9-f29db25b8ef6] Running
	I0916 11:09:36.262494  264436 system_pods.go:89] "kube-scheduler-no-preload-349453" [94505c46-f9fd-4bc6-8287-27c30e4696f0] Running
	I0916 11:09:36.262500  264436 system_pods.go:89] "storage-provisioner" [2f218f7f-9232-4d85-bd8d-6cdc6516c83f] Running
	I0916 11:09:36.262508  264436 system_pods.go:126] duration metric: took 201.374457ms to wait for k8s-apps to be running ...
	I0916 11:09:36.262526  264436 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:09:36.262581  264436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:09:36.276938  264436 system_svc.go:56] duration metric: took 14.399242ms WaitForService to wait for kubelet
	I0916 11:09:36.276973  264436 kubeadm.go:582] duration metric: took 30.380758589s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:09:36.277002  264436 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:09:36.460520  264436 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:09:36.460557  264436 node_conditions.go:123] node cpu capacity is 8
	I0916 11:09:36.460575  264436 node_conditions.go:105] duration metric: took 183.566872ms to run NodePressure ...
	I0916 11:09:36.460589  264436 start.go:241] waiting for startup goroutines ...
	I0916 11:09:36.460599  264436 start.go:246] waiting for cluster config update ...
	I0916 11:09:36.460617  264436 start.go:255] writing updated cluster config ...
	I0916 11:09:36.460929  264436 ssh_runner.go:195] Run: rm -f paused
	I0916 11:09:36.468132  264436 out.go:177] * Done! kubectl is now configured to use "no-preload-349453" cluster and "default" namespace by default
	E0916 11:09:36.469497  264436 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
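
Most of the 260870/264436 lines above are pod_ready waits: repeatedly fetch a pod and test its Ready condition until it flips to True or the 6m budget runs out. A minimal client-go sketch of that wait, with a placeholder kubeconfig path and the pod name taken from the log (an illustration of the pattern, not minikube's pod_ready.go):

// Wait until a kube-system pod reports the Ready condition as True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder path; the harness uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s, give up after 6m (the budget the log lines mention).
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-9zbwk", metav1.GetOptions{})
			if err != nil {
				return false, nil // not there yet: keep polling
			}
			return isPodReady(pod), nil
		})
	fmt.Println("wait result:", err)
}

While the condition stays false the harness emits the recurring `"Ready":"False"` entries; the final `exec format error` above is unrelated to the wait and just means the host's kubectl binary could not be executed when printing the closing hint.
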
	I0916 11:09:32.874622  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:32.875104  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:32.875158  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:09:32.875202  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:09:32.908596  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:32.908623  254463 cri.go:89] found id: ""
	I0916 11:09:32.908633  254463 logs.go:276] 1 containers: [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd]
	I0916 11:09:32.908690  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:32.912284  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:09:32.912375  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:09:32.945054  254463 cri.go:89] found id: ""
	I0916 11:09:32.945081  254463 logs.go:276] 0 containers: []
	W0916 11:09:32.945092  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:09:32.945099  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:09:32.945158  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:09:32.979410  254463 cri.go:89] found id: ""
	I0916 11:09:32.979440  254463 logs.go:276] 0 containers: []
	W0916 11:09:32.979451  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:09:32.979458  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:09:32.979527  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:09:33.012749  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:33.012772  254463 cri.go:89] found id: ""
	I0916 11:09:33.012782  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:09:33.012842  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:33.016202  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:09:33.016267  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:09:33.047808  254463 cri.go:89] found id: ""
	I0916 11:09:33.047836  254463 logs.go:276] 0 containers: []
	W0916 11:09:33.047847  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:09:33.047855  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:09:33.047904  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:09:33.082293  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:33.082318  254463 cri.go:89] found id: ""
	I0916 11:09:33.082328  254463 logs.go:276] 1 containers: [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9]
	I0916 11:09:33.082384  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:33.085741  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:09:33.085804  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:09:33.117890  254463 cri.go:89] found id: ""
	I0916 11:09:33.117912  254463 logs.go:276] 0 containers: []
	W0916 11:09:33.117920  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:09:33.117926  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:09:33.117973  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:09:33.151245  254463 cri.go:89] found id: ""
	I0916 11:09:33.151278  254463 logs.go:276] 0 containers: []
	W0916 11:09:33.151291  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:09:33.151303  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:09:33.151315  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:09:33.211351  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:09:33.211375  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:09:33.211390  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:33.246165  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:09:33.246193  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:33.300562  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:09:33.300595  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:33.333028  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:09:33.333056  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:09:33.377881  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:09:33.377926  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:09:33.414270  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:09:33.414300  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:09:33.473625  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:09:33.473667  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:09:35.994362  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:35.994766  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:35.994820  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:09:35.994868  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:09:36.027811  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:36.027833  254463 cri.go:89] found id: ""
	I0916 11:09:36.027843  254463 logs.go:276] 1 containers: [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd]
	I0916 11:09:36.027898  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:36.031236  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:09:36.031315  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:09:36.064048  254463 cri.go:89] found id: ""
	I0916 11:09:36.064089  254463 logs.go:276] 0 containers: []
	W0916 11:09:36.064099  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:09:36.064105  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:09:36.064161  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:09:36.097716  254463 cri.go:89] found id: ""
	I0916 11:09:36.097740  254463 logs.go:276] 0 containers: []
	W0916 11:09:36.097749  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:09:36.097755  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:09:36.097802  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:09:36.128830  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:36.128853  254463 cri.go:89] found id: ""
	I0916 11:09:36.128862  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:09:36.128917  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:36.132337  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:09:36.132402  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:09:36.164034  254463 cri.go:89] found id: ""
	I0916 11:09:36.164056  254463 logs.go:276] 0 containers: []
	W0916 11:09:36.164067  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:09:36.164075  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:09:36.164136  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:09:36.196454  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:36.196479  254463 cri.go:89] found id: ""
	I0916 11:09:36.196486  254463 logs.go:276] 1 containers: [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9]
	I0916 11:09:36.196540  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:36.199847  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:09:36.199897  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:09:36.234554  254463 cri.go:89] found id: ""
	I0916 11:09:36.234577  254463 logs.go:276] 0 containers: []
	W0916 11:09:36.234585  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:09:36.234591  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:09:36.234634  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:09:36.269383  254463 cri.go:89] found id: ""
	I0916 11:09:36.269403  254463 logs.go:276] 0 containers: []
	W0916 11:09:36.269411  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:09:36.269418  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:09:36.269430  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:09:36.328618  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:09:36.328641  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:09:36.328656  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:36.365262  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:09:36.365299  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:36.419000  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:09:36.419036  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:36.452272  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:09:36.452299  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:09:36.503844  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:09:36.503880  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:09:36.549774  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:09:36.549800  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:09:36.621917  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:09:36.621947  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	30acbc7b45e29       c69fa2e9cbf5f       5 seconds ago       Running             coredns                   0                   290db8b125607       coredns-7c65d6cfc9-9zbwk
	b30641ccb64e3       12968670680f4       29 seconds ago      Running             kindnet-cni               0                   06502caa119d4       kindnet-qbh58
	6fe6dedc21740       6e38f40d628db       31 seconds ago      Running             storage-provisioner       0                   0e0c238d616bc       storage-provisioner
	49542fa155836       60c005f310ff3       32 seconds ago      Running             kube-proxy                0                   0072787e29726       kube-proxy-n7m28
	a4b95a39232c2       175ffd71cce3d       43 seconds ago      Running             kube-controller-manager   0                   8aeec0e766fdb       kube-controller-manager-no-preload-349453
	5c82d38a57c77       9aa1fad941575       43 seconds ago      Running             kube-scheduler            0                   8200d83c8723c       kube-scheduler-no-preload-349453
	0b8b34459e371       2e96e5913fc06       43 seconds ago      Running             etcd                      0                   151cda393a927       etcd-no-preload-349453
	5d35346ecb3ed       6bab7719df100       43 seconds ago      Running             kube-apiserver            0                   4db1422602ab8       kube-apiserver-no-preload-349453
	
	
	==> containerd <==
	Sep 16 11:09:07 no-preload-349453 containerd[860]: time="2024-09-16T11:09:07.362357609Z" level=info msg="StartContainer for \"6fe6dedc217401e73f5795b6cd5cfdd5d65a4314df29b9b5be8775dc661cbffa\" returns successfully"
	Sep 16 11:09:07 no-preload-349453 containerd[860]: time="2024-09-16T11:09:07.545781963Z" level=error msg="failed to decode hosts.toml" error="invalid `host` tree"
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.755702767Z" level=info msg="ImageCreate event name:\"docker.io/kindest/kindnetd:v20240813-c6f155d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.756499468Z" level=info msg="stop pulling image docker.io/kindest/kindnetd:v20240813-c6f155d6: active requests=0, bytes read=36804223"
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.757807119Z" level=info msg="ImageCreate event name:\"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.760234610Z" level=info msg="ImageCreate event name:\"docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.760712943Z" level=info msg="Pulled image \"docker.io/kindest/kindnetd:v20240813-c6f155d6\" with image id \"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f\", repo tag \"docker.io/kindest/kindnetd:v20240813-c6f155d6\", repo digest \"docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166\", size \"36793393\" in 2.901543409s"
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.760776822Z" level=info msg="PullImage \"docker.io/kindest/kindnetd:v20240813-c6f155d6\" returns image reference \"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f\""
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.764672858Z" level=info msg="CreateContainer within sandbox \"06502caa119d42a5346554004e633bc20fb46b393d2a00987f03e1f4604bb0cb\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.778359852Z" level=info msg="CreateContainer within sandbox \"06502caa119d42a5346554004e633bc20fb46b393d2a00987f03e1f4604bb0cb\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a\""
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.779047237Z" level=info msg="StartContainer for \"b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a\""
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.837438306Z" level=info msg="StartContainer for \"b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a\" returns successfully"
	Sep 16 11:09:11 no-preload-349453 containerd[860]: time="2024-09-16T11:09:11.254668504Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Sep 16 11:09:19 no-preload-349453 containerd[860]: time="2024-09-16T11:09:19.751075294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9zbwk,Uid:427a37dd-9a56-455f-bd9e-3ee604164481,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:09:19 no-preload-349453 containerd[860]: time="2024-09-16T11:09:19.776142115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9zbwk,Uid:427a37dd-9a56-455f-bd9e-3ee604164481,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\": failed to find network info for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\""
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.751187315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9zbwk,Uid:427a37dd-9a56-455f-bd9e-3ee604164481,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.785477296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.785552025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.785563820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.785650565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.833330346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9zbwk,Uid:427a37dd-9a56-455f-bd9e-3ee604164481,Namespace:kube-system,Attempt:0,} returns sandbox id \"290db8b125607d520b2543935c4df4e547514fdab3734aeba6c5c7a161992240\""
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.836077045Z" level=info msg="CreateContainer within sandbox \"290db8b125607d520b2543935c4df4e547514fdab3734aeba6c5c7a161992240\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.851638309Z" level=info msg="CreateContainer within sandbox \"290db8b125607d520b2543935c4df4e547514fdab3734aeba6c5c7a161992240\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d\""
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.852272763Z" level=info msg="StartContainer for \"30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d\""
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.896573615Z" level=info msg="StartContainer for \"30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d\" returns successfully"
	
	
	==> coredns [30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57592 - 13339 "HINFO IN 8962497822399797364.2477591037072266195. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011748401s
	
	
	==> describe nodes <==
	Name:               no-preload-349453
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-349453
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=no-preload-349453
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_09_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:08:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-349453
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:09:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-349453
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ac769ff9aa04aaf92b2dd2bf68f2f82
	  System UUID:                28dd4bdd-2700-4b67-8389-386a38b68a64
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9zbwk                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     33s
	  kube-system                 etcd-no-preload-349453                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-qbh58                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      34s
	  kube-system                 kube-apiserver-no-preload-349453             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-no-preload-349453    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-n7m28                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-scheduler-no-preload-349453             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 32s   kube-proxy       
	  Normal   Starting                 39s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 39s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  39s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  39s   kubelet          Node no-preload-349453 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    39s   kubelet          Node no-preload-349453 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     39s   kubelet          Node no-preload-349453 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           35s   node-controller  Node no-preload-349453 event: Registered Node no-preload-349453 in Controller
	
	
	==> dmesg <==
	[Sep16 11:00] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000002] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000040] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000004] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +1.028430] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.004229] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000004] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +2.011572] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000009] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +4.031652] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000018] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +8.195254] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000007] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[Sep16 11:03] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-26a107bdc9bf
	[  +0.000006] ll header: 00000000: 02 42 7f 04 0c 59 02 42 c0 a8 4c 02 08 00
	[  +1.005595] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-26a107bdc9bf
	[  +0.000005] ll header: 00000000: 02 42 7f 04 0c 59 02 42 c0 a8 4c 02 08 00
	[Sep16 11:07] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d] <==
	{"level":"info","ts":"2024-09-16T11:08:56.341999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:08:56.342053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:08:56.342083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2024-09-16T11:08:56.342100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.342109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.342122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.342153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.343021Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.343585Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:08:56.343582Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-349453 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:08:56.343760Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:08:56.343891Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:08:56.343954Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:08:56.344739Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.344836Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.344861Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.344977Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:08:56.345568Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:08:56.346072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:08:56.346688Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2024-09-16T11:08:59.251819Z","caller":"traceutil/trace.go:171","msg":"trace[909223504] linearizableReadLoop","detail":"{readStateIndex:78; appliedIndex:77; }","duration":"124.299534ms","start":"2024-09-16T11:08:59.127499Z","end":"2024-09-16T11:08:59.251798Z","steps":["trace[909223504] 'read index received'  (duration: 61.163504ms)","trace[909223504] 'applied index is now lower than readState.Index'  (duration: 63.13541ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:08:59.251872Z","caller":"traceutil/trace.go:171","msg":"trace[1280881910] transaction","detail":"{read_only:false; response_revision:74; number_of_response:1; }","duration":"128.600617ms","start":"2024-09-16T11:08:59.123247Z","end":"2024-09-16T11:08:59.251847Z","steps":["trace[1280881910] 'process raft request'  (duration: 65.397729ms)","trace[1280881910] 'compare'  (duration: 63.021346ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T11:08:59.251948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.433124ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-09-16T11:08:59.252009Z","caller":"traceutil/trace.go:171","msg":"trace[1202054448] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:74; }","duration":"124.508287ms","start":"2024-09-16T11:08:59.127491Z","end":"2024-09-16T11:08:59.251999Z","steps":["trace[1202054448] 'agreement among raft nodes before linearized reading'  (duration: 124.386955ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:08:59.439373Z","caller":"traceutil/trace.go:171","msg":"trace[1221868137] transaction","detail":"{read_only:false; response_revision:75; number_of_response:1; }","duration":"183.565022ms","start":"2024-09-16T11:08:59.255790Z","end":"2024-09-16T11:08:59.439355Z","steps":["trace[1221868137] 'process raft request'  (duration: 120.890221ms)","trace[1221868137] 'compare'  (duration: 62.56898ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:09:39 up 52 min,  0 users,  load average: 3.87, 3.57, 2.18
	Linux no-preload-349453 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a] <==
	I0916 11:09:10.022282       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:09:10.022538       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0916 11:09:10.022724       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:09:10.022743       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:09:10.022773       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:09:10.420723       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:09:10.421181       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:09:10.421189       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:09:10.721709       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:09:10.721737       1 metrics.go:61] Registering metrics
	I0916 11:09:10.721785       1 controller.go:374] Syncing nftables rules
	I0916 11:09:20.425801       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:09:20.425835       1 main.go:299] handling current node
	I0916 11:09:30.427819       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:09:30.427851       1 main.go:299] handling current node
	
	
	==> kube-apiserver [5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817] <==
	I0916 11:08:58.121126       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0916 11:08:58.121202       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:08:58.121292       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:08:58.121348       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:08:58.121378       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:08:58.125871       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 11:08:58.125896       1 policy_source.go:224] refreshing policies
	E0916 11:08:58.127837       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0916 11:08:58.128408       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 11:08:58.330384       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:08:59.052089       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:08:59.116570       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:08:59.116590       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:08:59.898521       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:08:59.934152       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:09:00.034219       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:09:00.046397       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0916 11:09:00.047830       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:09:00.052313       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:09:00.132789       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:09:00.923235       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:09:00.932632       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:09:00.942148       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:09:05.485210       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 11:09:05.785724       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969] <==
	I0916 11:09:04.958862       1 shared_informer.go:320] Caches are synced for disruption
	I0916 11:09:05.033681       1 shared_informer.go:320] Caches are synced for service account
	I0916 11:09:05.036943       1 shared_informer.go:320] Caches are synced for namespace
	I0916 11:09:05.044777       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:09:05.088060       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:09:05.501553       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:09:05.582619       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:09:05.582651       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:09:05.590465       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-349453"
	I0916 11:09:06.045501       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="255.463843ms"
	I0916 11:09:06.052468       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.901285ms"
	I0916 11:09:06.052558       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="51.667µs"
	I0916 11:09:06.053697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.481µs"
	I0916 11:09:06.131407       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="100.516µs"
	I0916 11:09:06.647300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="17.526542ms"
	I0916 11:09:06.654990       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.635755ms"
	I0916 11:09:06.655120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="81.851µs"
	I0916 11:09:07.881805       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="70.434µs"
	I0916 11:09:07.887535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.293µs"
	I0916 11:09:07.891032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.532µs"
	I0916 11:09:11.264980       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-349453"
	I0916 11:09:31.598112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-349453"
	I0916 11:09:34.905630       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="79.673µs"
	I0916 11:09:34.923877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.97092ms"
	I0916 11:09:34.923984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.271µs"
	
	
	==> kube-proxy [49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04] <==
	I0916 11:09:06.867943       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:09:06.995156       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.94.2"]
	E0916 11:09:06.995228       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:09:07.016693       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:09:07.016755       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:09:07.018577       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:09:07.018989       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:09:07.019027       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:09:07.020423       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:09:07.020505       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:09:07.020533       1 config.go:328] "Starting node config controller"
	I0916 11:09:07.020679       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:09:07.020603       1 config.go:199] "Starting service config controller"
	I0916 11:09:07.020757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:09:07.121453       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:09:07.121498       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:09:07.121503       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69] <==
	W0916 11:08:59.221741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:08:59.221790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.261959       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:59.262001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.265606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:08:59.265658       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.490611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:08:59.490652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.579438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:08:59.579489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.585912       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:08:59.585982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.629574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:59.629617       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.663059       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:08:59.663100       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 11:08:59.685631       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:08:59.685685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.695015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:59.695064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.697126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:08:59.697157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.699134       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:08:59.699171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 11:09:02.728201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: E0916 11:09:06.435017    2271 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"190e287f0460cfd292e37ea473da8054fb99e136fc9d7a7ad33a51e55404f6e1\": failed to find network info for sandbox \"190e287f0460cfd292e37ea473da8054fb99e136fc9d7a7ad33a51e55404f6e1\"" pod="kube-system/coredns-7c65d6cfc9-mvlrh"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: E0916 11:09:06.435076    2271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mvlrh_kube-system(42523754-f961-412c-9c6a-2ad437fadc08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mvlrh_kube-system(42523754-f961-412c-9c6a-2ad437fadc08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"190e287f0460cfd292e37ea473da8054fb99e136fc9d7a7ad33a51e55404f6e1\\\": failed to find network info for sandbox \\\"190e287f0460cfd292e37ea473da8054fb99e136fc9d7a7ad33a51e55404f6e1\\\"\"" pod="kube-system/coredns-7c65d6cfc9-mvlrh" podUID="42523754-f961-412c-9c6a-2ad437fadc08"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: E0916 11:09:06.443968    2271 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\": failed to find network info for sandbox \"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\""
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: E0916 11:09:06.444042    2271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\": failed to find network info for sandbox \"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\"" pod="kube-system/coredns-7c65d6cfc9-9zbwk"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: E0916 11:09:06.444070    2271 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\": failed to find network info for sandbox \"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\"" pod="kube-system/coredns-7c65d6cfc9-9zbwk"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: E0916 11:09:06.444119    2271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9zbwk_kube-system(427a37dd-9a56-455f-bd9e-3ee604164481)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9zbwk_kube-system(427a37dd-9a56-455f-bd9e-3ee604164481)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\\\": failed to find network info for sandbox \\\"fdef68bab8cc4b9ea1fa5e97dd3485840d4687181c308884f056b386fcf8e330\\\"\"" pod="kube-system/coredns-7c65d6cfc9-9zbwk" podUID="427a37dd-9a56-455f-bd9e-3ee604164481"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.858135    2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n7m28" podStartSLOduration=1.858118054 podStartE2EDuration="1.858118054s" podCreationTimestamp="2024-09-16 11:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:09:06.857384714 +0000 UTC m=+6.194635549" watchObservedRunningTime="2024-09-16 11:09:06.858118054 +0000 UTC m=+6.195368888"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.925656    2271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42523754-f961-412c-9c6a-2ad437fadc08-config-volume\") pod \"42523754-f961-412c-9c6a-2ad437fadc08\" (UID: \"42523754-f961-412c-9c6a-2ad437fadc08\") "
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.925713    2271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggz6t\" (UniqueName: \"kubernetes.io/projected/42523754-f961-412c-9c6a-2ad437fadc08-kube-api-access-ggz6t\") pod \"42523754-f961-412c-9c6a-2ad437fadc08\" (UID: \"42523754-f961-412c-9c6a-2ad437fadc08\") "
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.925791    2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96zdr\" (UniqueName: \"kubernetes.io/projected/2f218f7f-9232-4d85-bd8d-6cdc6516c83f-kube-api-access-96zdr\") pod \"storage-provisioner\" (UID: \"2f218f7f-9232-4d85-bd8d-6cdc6516c83f\") " pod="kube-system/storage-provisioner"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.925872    2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2f218f7f-9232-4d85-bd8d-6cdc6516c83f-tmp\") pod \"storage-provisioner\" (UID: \"2f218f7f-9232-4d85-bd8d-6cdc6516c83f\") " pod="kube-system/storage-provisioner"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.926063    2271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42523754-f961-412c-9c6a-2ad437fadc08-config-volume" (OuterVolumeSpecName: "config-volume") pod "42523754-f961-412c-9c6a-2ad437fadc08" (UID: "42523754-f961-412c-9c6a-2ad437fadc08"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.928599    2271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42523754-f961-412c-9c6a-2ad437fadc08-kube-api-access-ggz6t" (OuterVolumeSpecName: "kube-api-access-ggz6t") pod "42523754-f961-412c-9c6a-2ad437fadc08" (UID: "42523754-f961-412c-9c6a-2ad437fadc08"). InnerVolumeSpecName "kube-api-access-ggz6t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 11:09:07 no-preload-349453 kubelet[2271]: I0916 11:09:07.026104    2271 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42523754-f961-412c-9c6a-2ad437fadc08-config-volume\") on node \"no-preload-349453\" DevicePath \"\""
	Sep 16 11:09:07 no-preload-349453 kubelet[2271]: I0916 11:09:07.026141    2271 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ggz6t\" (UniqueName: \"kubernetes.io/projected/42523754-f961-412c-9c6a-2ad437fadc08-kube-api-access-ggz6t\") on node \"no-preload-349453\" DevicePath \"\""
	Sep 16 11:09:07 no-preload-349453 kubelet[2271]: I0916 11:09:07.848853    2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.848828737 podStartE2EDuration="1.848828737s" podCreationTimestamp="2024-09-16 11:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:09:07.848577382 +0000 UTC m=+7.185828216" watchObservedRunningTime="2024-09-16 11:09:07.848828737 +0000 UTC m=+7.186079571"
	Sep 16 11:09:08 no-preload-349453 kubelet[2271]: I0916 11:09:08.753518    2271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42523754-f961-412c-9c6a-2ad437fadc08" path="/var/lib/kubelet/pods/42523754-f961-412c-9c6a-2ad437fadc08/volumes"
	Sep 16 11:09:09 no-preload-349453 kubelet[2271]: I0916 11:09:09.856622    2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qbh58" podStartSLOduration=1.952501992 podStartE2EDuration="4.856602316s" podCreationTimestamp="2024-09-16 11:09:05 +0000 UTC" firstStartedPulling="2024-09-16 11:09:06.857718399 +0000 UTC m=+6.194969216" lastFinishedPulling="2024-09-16 11:09:09.761818723 +0000 UTC m=+9.099069540" observedRunningTime="2024-09-16 11:09:09.856516169 +0000 UTC m=+9.193767018" watchObservedRunningTime="2024-09-16 11:09:09.856602316 +0000 UTC m=+9.193853150"
	Sep 16 11:09:11 no-preload-349453 kubelet[2271]: I0916 11:09:11.254039    2271 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 11:09:11 no-preload-349453 kubelet[2271]: I0916 11:09:11.255017    2271 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 11:09:19 no-preload-349453 kubelet[2271]: E0916 11:09:19.776626    2271 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\": failed to find network info for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\""
	Sep 16 11:09:19 no-preload-349453 kubelet[2271]: E0916 11:09:19.776742    2271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\": failed to find network info for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\"" pod="kube-system/coredns-7c65d6cfc9-9zbwk"
	Sep 16 11:09:19 no-preload-349453 kubelet[2271]: E0916 11:09:19.776774    2271 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\": failed to find network info for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\"" pod="kube-system/coredns-7c65d6cfc9-9zbwk"
	Sep 16 11:09:19 no-preload-349453 kubelet[2271]: E0916 11:09:19.776838    2271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9zbwk_kube-system(427a37dd-9a56-455f-bd9e-3ee604164481)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9zbwk_kube-system(427a37dd-9a56-455f-bd9e-3ee604164481)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\\\": failed to find network info for sandbox \\\"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\\\"\"" pod="kube-system/coredns-7c65d6cfc9-9zbwk" podUID="427a37dd-9a56-455f-bd9e-3ee604164481"
	Sep 16 11:09:34 no-preload-349453 kubelet[2271]: I0916 11:09:34.905623    2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9zbwk" podStartSLOduration=28.905602056 podStartE2EDuration="28.905602056s" podCreationTimestamp="2024-09-16 11:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:09:34.905581503 +0000 UTC m=+34.242832342" watchObservedRunningTime="2024-09-16 11:09:34.905602056 +0000 UTC m=+34.242852892"
	
	
	==> storage-provisioner [6fe6dedc217401e73f5795b6cd5cfdd5d65a4314df29b9b5be8775dc661cbffa] <==
	I0916 11:09:07.370432       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:09:07.378006       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:09:07.378048       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:09:07.384602       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:09:07.384718       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6ddd7c41-8f63-47a8-9650-2ec5bbdf92e6", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-349453_a47d0e4b-78c2-4200-80ff-c828e8be01d0 became leader
	I0916 11:09:07.384766       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-349453_a47d0e4b-78c2-4200-80ff-c828e8be01d0!
	I0916 11:09:07.485942       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-349453_a47d0e4b-78c2-4200-80ff-c828e8be01d0!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-349453 -n no-preload-349453
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-349453 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context no-preload-349453 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (501.245µs)
helpers_test.go:263: kubectl --context no-preload-349453 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (3.67s)
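The "fork/exec /usr/local/bin/kubectl: exec format error" above means the kernel refused to execute the kubectl binary itself: it was built for a different architecture than the test host, or the file is empty or truncated. The cluster never saw the request. A minimal sketch for confirming this on the host, assuming a Linux shell with the standard file(1) utility (these commands are illustrative, not part of the test run):

    # Compare the binary's target architecture with the host's
    file /usr/local/bin/kubectl   # expect "ELF 64-bit ... x86-64" on this amd64 host
    uname -m                      # host architecture (x86_64 here, per the node's System Info)
    # a zero-byte or truncated download yields the same "exec format error"
    ls -l /usr/local/bin/kubectl

The later kubectl invocations in this report fail with the identical error, so those downstream assertion failures appear to stem from the bad binary rather than from the cluster.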

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.68s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-349453 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-349453 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-349453 describe deploy/metrics-server -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (585.406µs)
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-349453 describe deploy/metrics-server -n kube-system": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
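The assertion at start_stop_delete_test.go:221 compares the describe output against the expected image string; here it received empty deployment info because kubectl itself failed, not because the addon loaded the wrong image. With a kubectl binary matching the host architecture, the image rewrite applied by the addon could be checked directly against the deployment spec; a minimal sketch (the jsonpath expression is illustrative, not from the test):

    kubectl --context no-preload-349453 -n kube-system \
      get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected to contain: fake.domain/registry.k8s.io/echoserver:1.4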
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-349453
helpers_test.go:235: (dbg) docker inspect no-preload-349453:

-- stdout --
	[
	    {
	        "Id": "d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3",
	        "Created": "2024-09-16T11:08:35.617729941Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265359,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:08:35.76202248Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/hostname",
	        "HostsPath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/hosts",
	        "LogPath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3-json.log",
	        "Name": "/no-preload-349453",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-349453:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-349453",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-349453",
	                "Source": "/var/lib/docker/volumes/no-preload-349453/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-349453",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-349453",
	                "name.minikube.sigs.k8s.io": "no-preload-349453",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "de544e1372d8cb8fd0e1807ad2b8bb665590a19816c7b2adbc56336e3321ad31",
	            "SandboxKey": "/var/run/docker/netns/de544e1372d8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-349453": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2cc59d4eff808c995119ae607628ad9854df9618b8c5cd5213cb8d98e98ab4f4",
	                    "EndpointID": "afac10d13376be205fe178b7e126e3c65a6479a99b3db779bc1b7fa1828380a8",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-349453",
	                        "d44e8cc5581d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
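The Ports map in the inspect output shows each container port published on a random localhost port, e.g. 8443/tcp (the cluster's API server port) on 127.0.0.1:33066. A single mapping can be pulled out without parsing the full JSON; a minimal sketch using docker inspect's Go-template --format flag (illustrative, not part of the test run):

    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' no-preload-349453
    # -> 33066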
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-349453 -n no-preload-349453
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-349453 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-349453 logs -n 25: (1.140432219s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-771611 sudo                                 | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl cat containerd                              |                           |         |         |                     |                     |
	|         | --no-pager                                            |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo cat                             | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /lib/systemd/system/containerd.service                |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo cat                             | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /etc/containerd/config.toml                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                                 | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | containerd config dump                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                                 | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl status crio --all                           |                           |         |         |                     |                     |
	|         | --full --no-pager                                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                                 | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl cat crio --no-pager                         |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo find                            | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo crio                            | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | config                                                |                           |         |         |                     |                     |
	| delete  | -p cilium-771611                                      | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| delete  | -p missing-upgrade-327796                             | missing-upgrade-327796    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| start   | -p cert-expiration-021107                             | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                       |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                        |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-917705                          | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048 --force-systemd                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                  |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-846070                              | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                               |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-846070                           | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| stop    | -p kubernetes-upgrade-311911                          | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p cert-options-840054                                | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                 |                           |         |         |                     |                     |
	|         | --driver=docker                                       |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                        |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-311911                          | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                  |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-917705                             | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                               |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                           |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-917705                          | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p old-k8s-version-371039                             | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                         |                           |         |         |                     |                     |
	|         | --kvm-network=default                                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                               |                           |         |         |                     |                     |
	|         | --keep-context=false                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                       |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                          |                           |         |         |                     |                     |
	| ssh     | cert-options-840054 ssh                               | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | openssl x509 -text -noout -in                         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                 |                           |         |         |                     |                     |
	| ssh     | -p cert-options-840054 -- sudo                        | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                        |                           |         |         |                     |                     |
	| delete  | -p cert-options-840054                                | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p no-preload-349453                                  | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:09 UTC |
	|         | --memory=2200                                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                     |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                           |                           |         |         |                     |                     |
	|         | --driver=docker                                       |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                          |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-349453            | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                |                           |         |         |                     |                     |
	|---------|-------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:08:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:08:30.290580  264436 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:08:30.290727  264436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:08:30.290740  264436 out.go:358] Setting ErrFile to fd 2...
	I0916 11:08:30.290747  264436 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:08:30.291070  264436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:08:30.291765  264436 out.go:352] Setting JSON to false
	I0916 11:08:30.293115  264436 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3054,"bootTime":1726481856,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:08:30.293251  264436 start.go:139] virtualization: kvm guest
	I0916 11:08:30.295658  264436 out.go:177] * [no-preload-349453] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:08:30.297158  264436 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:08:30.297181  264436 notify.go:220] Checking for updates...
	I0916 11:08:30.299671  264436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:08:30.301189  264436 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:08:30.302491  264436 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:08:30.303773  264436 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:08:30.305030  264436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:08:30.306912  264436 config.go:182] Loaded profile config "cert-expiration-021107": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:08:30.307059  264436 config.go:182] Loaded profile config "kubernetes-upgrade-311911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:08:30.307222  264436 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:08:30.307352  264436 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:08:30.342404  264436 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:08:30.342617  264436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:08:30.412580  264436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-16 11:08:30.399549033 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:08:30.412784  264436 docker.go:318] overlay module found
	I0916 11:08:30.414974  264436 out.go:177] * Using the docker driver based on user configuration
	I0916 11:08:30.416257  264436 start.go:297] selected driver: docker
	I0916 11:08:30.416276  264436 start.go:901] validating driver "docker" against <nil>
	I0916 11:08:30.416296  264436 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:08:30.417426  264436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:08:30.481659  264436 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-16 11:08:30.467819434 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:08:30.481930  264436 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:08:30.482367  264436 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:08:30.484332  264436 out.go:177] * Using Docker driver with root privileges
	I0916 11:08:30.485686  264436 cni.go:84] Creating CNI manager for ""
	I0916 11:08:30.485767  264436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:08:30.485786  264436 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:08:30.485897  264436 start.go:340] cluster config:
	{Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:08:30.487638  264436 out.go:177] * Starting "no-preload-349453" primary control-plane node in "no-preload-349453" cluster
	I0916 11:08:30.489182  264436 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:08:30.490994  264436 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:08:30.492484  264436 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:08:30.492588  264436 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:08:30.492646  264436 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/config.json ...
	I0916 11:08:30.492678  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/config.json: {Name:mk7f1330c6b2d92e29945227c336833ff6ffb7fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:30.492798  264436 cache.go:107] acquiring lock: {Name:mk505f3dd823c459cfb83f2d2a39affe63c4c40e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.492789  264436 cache.go:107] acquiring lock: {Name:mk0f2d9e0670c46fe9eb165a8119acf30531a2e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.492888  264436 cache.go:107] acquiring lock: {Name:mk0b25b3ebef8c92ed85c693112bf4f2b400d9b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.492912  264436 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 11:08:30.492874  264436 cache.go:107] acquiring lock: {Name:mkd9c658f7569779b8a27d53e97cc0f70f55a845 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.492875  264436 cache.go:107] acquiring lock: {Name:mkb7cb231873e7918d3e306be4ec4f6091d91485 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.492929  264436 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 136.837µs
	I0916 11:08:30.492947  264436 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:30.492963  264436 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0916 11:08:30.492986  264436 cache.go:107] acquiring lock: {Name:mk8275b1fd51b04034df297d05c3d74274567a6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.493018  264436 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:30.493066  264436 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:30.493091  264436 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:30.493102  264436 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:30.493234  264436 cache.go:107] acquiring lock: {Name:mkd90d764df5e26e345f1c24540d37a0e89a5b18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.493259  264436 cache.go:107] acquiring lock: {Name:mk612053845ede903900e7b583df14a07089be08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.493328  264436 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:30.493343  264436 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0916 11:08:30.494117  264436 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:30.494618  264436 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:30.494682  264436 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:30.494622  264436 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:30.494909  264436 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:30.494695  264436 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0916 11:08:30.496479  264436 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	W0916 11:08:30.521360  264436 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:08:30.521384  264436 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:08:30.521484  264436 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:08:30.521512  264436 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:08:30.521521  264436 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:08:30.521530  264436 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:08:30.521538  264436 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:08:30.581569  264436 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:08:30.581616  264436 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:08:30.581661  264436 start.go:360] acquireMachinesLock for no-preload-349453: {Name:mk8558ad422c1a28af392329b5800e6b7ec410a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:08:30.581784  264436 start.go:364] duration metric: took 104.124µs to acquireMachinesLock for "no-preload-349453"
	I0916 11:08:30.581916  264436 start.go:93] Provisioning new machine with config: &{Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:08:30.582030  264436 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:08:32.243803  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:32.243852  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:31.292696  260870 containerd.go:563] duration metric: took 1.167769285s to copy over tarball
	I0916 11:08:31.292764  260870 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 11:08:33.986408  260870 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.693618841s)
	I0916 11:08:33.986435  260870 containerd.go:570] duration metric: took 2.693711801s to extract the tarball
	I0916 11:08:33.986442  260870 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0916 11:08:34.058024  260870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:08:34.129814  260870 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 11:08:34.239782  260870 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:08:34.273790  260870 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:08:34.273814  260870 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:08:34.273863  260870 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:34.273888  260870 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.273911  260870 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.273925  260870 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.273939  260870 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.273984  260870 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.273983  260870 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0916 11:08:34.273894  260870 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.275457  260870 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.275470  260870 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.275487  260870 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0916 11:08:34.275487  260870 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.275498  260870 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.275465  260870 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:34.275780  260870 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.275781  260870 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.466060  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.4.13-0" and sha "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934"
	I0916 11:08:34.466124  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.488460  260870 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0916 11:08:34.488504  260870 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.488539  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.492122  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.498533  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.20.0" and sha "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080"
	I0916 11:08:34.498612  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.502891  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	I0916 11:08:34.502966  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.2
	I0916 11:08:34.507568  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.20.0" and sha "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	I0916 11:08:34.507620  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.528734  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.20.0" and sha "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899"
	I0916 11:08:34.528802  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.532124  260870 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0916 11:08:34.532165  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.532165  260870 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.532250  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.533288  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns:1.7.0" and sha "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"
	I0916 11:08:34.533345  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.533812  260870 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0916 11:08:34.533878  260870 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0916 11:08:34.533919  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.537025  260870 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.20.0" and sha "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
	I0916 11:08:34.537100  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.557448  260870 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0916 11:08:34.557464  260870 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0916 11:08:34.557501  260870 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.557501  260870 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.557547  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.557547  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.568864  260870 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0916 11:08:34.568898  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:08:34.568915  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:08:34.568916  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.568924  260870 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.568944  260870 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0916 11:08:34.568958  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.568969  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.568978  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.568978  260870 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.569018  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:34.729417  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.729479  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0916 11:08:34.729539  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.729542  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:08:34.729639  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.729679  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:34.729692  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.846706  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:34.849695  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:08:34.849746  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:08:34.849751  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:08:34.849830  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:08:34.849855  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:08:35.032207  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:08:35.032853  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0916 11:08:35.037891  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0916 11:08:35.037932  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0916 11:08:35.038023  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0916 11:08:35.038051  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:08:35.068211  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0916 11:08:35.124935  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0916 11:08:30.584062  264436 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:08:30.584349  264436 start.go:159] libmachine.API.Create for "no-preload-349453" (driver="docker")
	I0916 11:08:30.584376  264436 client.go:168] LocalClient.Create starting
	I0916 11:08:30.584454  264436 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 11:08:30.584501  264436 main.go:141] libmachine: Decoding PEM data...
	I0916 11:08:30.584522  264436 main.go:141] libmachine: Parsing certificate...
	I0916 11:08:30.584586  264436 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 11:08:30.584611  264436 main.go:141] libmachine: Decoding PEM data...
	I0916 11:08:30.584626  264436 main.go:141] libmachine: Parsing certificate...
	I0916 11:08:30.585045  264436 cli_runner.go:164] Run: docker network inspect no-preload-349453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:08:30.610640  264436 cli_runner.go:211] docker network inspect no-preload-349453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:08:30.610749  264436 network_create.go:284] running [docker network inspect no-preload-349453] to gather additional debugging logs...
	I0916 11:08:30.610897  264436 cli_runner.go:164] Run: docker network inspect no-preload-349453
	W0916 11:08:30.633247  264436 cli_runner.go:211] docker network inspect no-preload-349453 returned with exit code 1
	I0916 11:08:30.633283  264436 network_create.go:287] error running [docker network inspect no-preload-349453]: docker network inspect no-preload-349453: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-349453 not found
	I0916 11:08:30.633310  264436 network_create.go:289] output of [docker network inspect no-preload-349453]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-349453 not found
	
	** /stderr **
	I0916 11:08:30.633427  264436 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:08:30.661732  264436 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c95c64bb41bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bd:76:ab:c4} reservation:<nil>}
	I0916 11:08:30.663027  264436 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fad43aa9929b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:3f:61:fd} reservation:<nil>}
	I0916 11:08:30.664348  264436 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-49585fce923a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ad:45:94:54} reservation:<nil>}
	I0916 11:08:30.665251  264436 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-45dc384def28 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:95:3e:48:c3} reservation:<nil>}
	I0916 11:08:30.666118  264436 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-b7c76f2e9a1f IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:4a:59:5d:75} reservation:<nil>}
	I0916 11:08:30.667352  264436 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014118f0}
	I0916 11:08:30.667386  264436 network_create.go:124] attempt to create docker network no-preload-349453 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0916 11:08:30.667448  264436 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-349453 no-preload-349453
	I0916 11:08:30.736241  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0916 11:08:30.758180  264436 network_create.go:108] docker network no-preload-349453 192.168.94.0/24 created
	I0916 11:08:30.758216  264436 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-349453" container
	I0916 11:08:30.758297  264436 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:08:30.767506  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0916 11:08:30.770224  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0916 11:08:30.784652  264436 cli_runner.go:164] Run: docker volume create no-preload-349453 --label name.minikube.sigs.k8s.io=no-preload-349453 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:08:30.787645  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0916 11:08:30.789687  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0916 11:08:30.791298  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0916 11:08:30.809926  264436 oci.go:103] Successfully created a docker volume no-preload-349453
	I0916 11:08:30.810088  264436 cli_runner.go:164] Run: docker run --rm --name no-preload-349453-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-349453 --entrypoint /usr/bin/test -v no-preload-349453:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:08:30.986670  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0916 11:08:30.986704  264436 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 493.451965ms
	I0916 11:08:30.986721  264436 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0916 11:08:30.992662  264436 cache.go:162] opening:  /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0916 11:08:31.459004  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0916 11:08:31.459044  264436 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 966.158295ms
	I0916 11:08:31.459071  264436 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0916 11:08:32.902149  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0916 11:08:32.902263  264436 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 2.409439664s
	I0916 11:08:32.902288  264436 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0916 11:08:32.954934  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0916 11:08:32.955019  264436 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 2.462197691s
	I0916 11:08:32.955043  264436 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0916 11:08:32.982491  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0916 11:08:32.982539  264436 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 2.489760683s
	I0916 11:08:32.982557  264436 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0916 11:08:33.008590  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0916 11:08:33.008619  264436 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 2.515390278s
	I0916 11:08:33.008636  264436 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0916 11:08:33.364029  264436 cache.go:157] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0916 11:08:33.364061  264436 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 2.871077786s
	I0916 11:08:33.364074  264436 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0916 11:08:33.364098  264436 cache.go:87] Successfully saved all images to host disk.
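
Note: the cache paths in the lines above follow a visible convention: the image reference's tag separator ':' becomes '_' under .minikube/cache/images/<arch>/. A minimal sketch of that mapping, assuming the convention is exactly as the log shows (the helper name is hypothetical, not minikube's actual function):

    package main

    import (
        "fmt"
        "path/filepath"
        "strings"
    )

    // cachePath maps an image reference to its on-disk cache location,
    // assuming the convention visible in the log: ':' becomes '_' under
    // <miniHome>/cache/images/<arch>/. (Hypothetical helper.)
    func cachePath(miniHome, arch, ref string) string {
        return filepath.Join(miniHome, "cache", "images", arch,
            strings.ReplaceAll(ref, ":", "_"))
    }

    func main() {
        fmt.Println(cachePath("/home/jenkins/minikube-integration/19651-3687/.minikube",
            "amd64", "registry.k8s.io/kube-proxy:v1.31.1"))
        // .../cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
    }
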
	I0916 11:08:35.392285  260870 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0916 11:08:35.392370  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:35.438527  260870 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0916 11:08:35.438576  260870 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:35.438615  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:08:35.442067  260870 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:35.527055  260870 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 11:08:35.527210  260870 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:35.531022  260870 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0916 11:08:35.531056  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0916 11:08:35.609317  260870 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:35.609393  260870 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:36.042074  260870 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0916 11:08:36.042130  260870 cache_images.go:92] duration metric: took 1.768300894s to LoadCachedImages
	W0916 11:08:36.042205  260870 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
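
Note: each image load above follows the same pattern: stat the tar under /var/lib/minikube/images, scp it over if absent, then import it into containerd's k8s.io namespace. A compressed sketch of the node-side half (local os/exec stands in for minikube's ssh_runner; illustrative only):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // loadImageTar imports an image tarball into containerd's k8s.io
    // namespace, mirroring the "ctr -n=k8s.io images import" call above.
    func loadImageTar(tar string) error {
        if _, err := os.Stat(tar); err != nil {
            return fmt.Errorf("tar not on node yet (would be scp'd first): %w", err)
        }
        out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar).CombinedOutput()
        if err != nil {
            return fmt.Errorf("import failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := loadImageTar("/var/lib/minikube/images/storage-provisioner_v5"); err != nil {
            fmt.Println(err)
        }
    }
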
	I0916 11:08:36.042220  260870 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.20.0 containerd true true} ...
	I0916 11:08:36.042328  260870 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-371039 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-371039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
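
Note: the kubelet unit text above is rendered from the cluster config and scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 443-byte write a few lines down). A minimal text/template sketch of that rendering; the struct and field names are hypothetical stand-ins for the values visible in the log:

    package main

    import (
        "os"
        "text/template"
    )

    // Hypothetical view of the values substituted into the drop-in.
    type kubeletOpts struct {
        Version, Hostname, NodeIP string
    }

    const dropIn = `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        _ = t.Execute(os.Stdout, kubeletOpts{"v1.20.0", "old-k8s-version-371039", "192.168.103.2"})
    }
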
	I0916 11:08:36.042388  260870 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:08:36.087682  260870 cni.go:84] Creating CNI manager for ""
	I0916 11:08:36.087706  260870 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:08:36.087715  260870 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:08:36.087732  260870 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-371039 NodeName:old-k8s-version-371039 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 11:08:36.087889  260870 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-371039"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
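Note: one invariant worth checking in the generated config above is that podSubnet (10.244.0.0/16) and serviceSubnet (10.96.0.0/12) do not overlap; otherwise pod IPs and service VIPs would collide. A quick sanity sketch with net/netip (not part of minikube, just a standalone check):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // overlaps reports whether two CIDR prefixes share any addresses.
    // For valid prefixes, overlap implies one contains the other's base.
    func overlaps(a, b netip.Prefix) bool {
        return a.Contains(b.Addr()) || b.Contains(a.Addr())
    }

    func main() {
        pods := netip.MustParsePrefix("10.244.0.0/16")
        svcs := netip.MustParsePrefix("10.96.0.0/12")
        fmt.Println("overlap:", overlaps(pods, svcs)) // overlap: false
    }
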
	I0916 11:08:36.087956  260870 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 11:08:36.096824  260870 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:08:36.096888  260870 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:08:36.105501  260870 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (443 bytes)
	I0916 11:08:36.123886  260870 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:08:36.142412  260870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2128 bytes)
	I0916 11:08:36.160845  260870 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:08:36.164496  260870 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
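
Note: the one-liner above is an idempotent hosts update: strip any stale control-plane.minikube.internal entry, append the current one, stage the result (the log's /tmp/h.$$), and copy it back. The same logic in Go, writing to a sibling ".new" path rather than /etc/hosts directly (a sketch, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost rewrites a hosts file so exactly one line maps name to ip.
    func upsertHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any existing "<addr>\t<name>" entry so reruns don't duplicate.
            if strings.HasSuffix(line, "\t"+name) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        // Stage to a sibling file; the test then sudo-cp's it into place.
        return os.WriteFile(path+".new", []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := upsertHost("/tmp/hosts", "192.168.103.2", "control-plane.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }
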
	I0916 11:08:36.175171  260870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:08:36.270265  260870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:08:36.288432  260870 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039 for IP: 192.168.103.2
	I0916 11:08:36.288456  260870 certs.go:194] generating shared ca certs ...
	I0916 11:08:36.288476  260870 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.288648  260870 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:08:36.288704  260870 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:08:36.288714  260870 certs.go:256] generating profile certs ...
	I0916 11:08:36.288781  260870 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.key
	I0916 11:08:36.288802  260870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt with IP's: []
	I0916 11:08:36.405455  260870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt ...
	I0916 11:08:36.405492  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: {Name:mk82ea8fcc0c34a14f2e7e173fd4907cf9b8e3c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.405667  260870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.key ...
	I0916 11:08:36.405681  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.key: {Name:mkae0b2fcb25419f4a74135b55a637382d7b9ec5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.405759  260870 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key.2be0dd44
	I0916 11:08:36.405776  260870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt.2be0dd44 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 11:08:36.459262  260870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt.2be0dd44 ...
	I0916 11:08:36.459292  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt.2be0dd44: {Name:mk62a33feea446132b32229b845b6bb967faebe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.459439  260870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key.2be0dd44 ...
	I0916 11:08:36.459453  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key.2be0dd44: {Name:mka88753a9e7441e98fdbaa3acff880db3ae57f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.459521  260870 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt.2be0dd44 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt
	I0916 11:08:36.459592  260870 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key.2be0dd44 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key
	I0916 11:08:36.459649  260870 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.key
	I0916 11:08:36.459664  260870 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.crt with IP's: []
	I0916 11:08:36.713401  260870 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.crt ...
	I0916 11:08:36.713429  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.crt: {Name:mk0c69e2fe4df3505f52bc05b74e3cc3c5f14ccc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:36.713612  260870 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.key ...
	I0916 11:08:36.713633  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.key: {Name:mk505306792a7323c50fbaa6bfa6d39fd8ceb8da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
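
Note: the apiserver certificate generated above carries IP SANs for the service VIP (10.96.0.1), loopback, and the node IP; crypto.go does this with Go's standard x509 machinery. A self-contained sketch of the same idea (self-signed for brevity, where minikube signs with its shared CA; key size and subject are illustrative):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{ // the SANs seen in the log
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
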
	I0916 11:08:36.713831  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:08:36.713869  260870 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:08:36.713876  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:08:36.713896  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:08:36.713920  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:08:36.713946  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:08:36.713982  260870 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:08:36.714511  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:08:36.739372  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:08:36.765128  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:08:36.793852  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:08:36.818818  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 11:08:36.842012  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:08:36.865358  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:08:36.889258  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:08:36.913024  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:08:36.939986  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:08:36.963336  260870 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:08:36.986859  260870 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:08:37.003708  260870 ssh_runner.go:195] Run: openssl version
	I0916 11:08:37.009148  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:08:37.018295  260870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:08:37.021964  260870 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:08:37.022022  260870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:08:37.029281  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 11:08:37.038624  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:08:37.048291  260870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:08:37.052395  260870 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:08:37.052464  260870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:08:37.060420  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:08:37.071458  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:08:37.082693  260870 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:37.086499  260870 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:37.086575  260870 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:37.093458  260870 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
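
Note: the 51391683.0, 3ec20f2e.0, and b5213941.0 names above are OpenSSL's subject-hash convention: `openssl x509 -hash` prints an 8-hex-digit hash of the certificate subject, and TLS libraries look CA certs up as <hash>.<n> in /etc/ssl/certs. A sketch of that linking step, shelling out to openssl as the test does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash symlinks certPath into dir under OpenSSL's
    // <subject-hash>.0 naming, so TLS clients can find the CA.
    func linkBySubjectHash(certPath, dir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(dir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace a stale link, if any
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Println(err)
        }
    }
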
	I0916 11:08:37.103273  260870 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:08:37.106445  260870 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:08:37.106492  260870 kubeadm.go:392] StartCluster: {Name:old-k8s-version-371039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-371039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:08:37.106586  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:08:37.106636  260870 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:08:37.155847  260870 cri.go:89] found id: ""
	I0916 11:08:37.155918  260870 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:08:37.164683  260870 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:08:37.173264  260870 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:08:37.173334  260870 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:08:37.181678  260870 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:08:37.181704  260870 kubeadm.go:157] found existing configuration files:
	
	I0916 11:08:37.181753  260870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:08:37.190209  260870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:08:37.190268  260870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:08:37.198604  260870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:08:37.207009  260870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:08:37.207069  260870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:08:37.215349  260870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:08:37.224252  260870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:08:37.224316  260870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:08:37.233091  260870 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:08:37.241423  260870 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:08:37.241484  260870 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:08:37.249898  260870 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:08:37.306344  260870 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I0916 11:08:37.306396  260870 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:08:37.343524  260870 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:08:37.343631  260870 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:08:37.343685  260870 kubeadm.go:310] OS: Linux
	I0916 11:08:37.343789  260870 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:08:37.343874  260870 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:08:37.343965  260870 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:08:37.344046  260870 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:08:37.344122  260870 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:08:37.344202  260870 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:08:37.344274  260870 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:08:37.344353  260870 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:08:37.433846  260870 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:08:37.434024  260870 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:08:37.434226  260870 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0916 11:08:37.627977  260870 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:08:37.244785  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:37.244822  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:37.548910  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:53692->192.168.76.2:8443: read: connection reset by peer
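
Note: the healthz probes above use a short per-request timeout and treat "connection reset" or a deadline as "apiserver still coming up", retrying until an overall deadline. A stripped-down poller in the same spirit (the InsecureSkipVerify is an assumption of this sketch, since the probe hits the apiserver by raw IP before kubeconfig trust is wired up; this is not minikube's exact client):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it returns 200 or the deadline passes.
    func waitHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond) // resets/timeouts just mean "not up yet"
        }
        return fmt.Errorf("%s not healthy after %v", url, deadline)
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.76.2:8443/healthz", 30*time.Second))
    }
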
	I0916 11:08:35.539780  264436 cli_runner.go:217] Completed: docker run --rm --name no-preload-349453-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-349453 --entrypoint /usr/bin/test -v no-preload-349453:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (4.729567672s)
	I0916 11:08:35.539815  264436 oci.go:107] Successfully prepared a docker volume no-preload-349453
	I0916 11:08:35.539835  264436 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	W0916 11:08:35.539966  264436 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:08:35.540080  264436 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:08:35.601426  264436 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-349453 --name no-preload-349453 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-349453 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-349453 --network no-preload-349453 --ip 192.168.94.2 --volume no-preload-349453:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:08:35.950506  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Running}}
	I0916 11:08:35.975787  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:08:35.997694  264436 cli_runner.go:164] Run: docker exec no-preload-349453 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:08:36.047229  264436 oci.go:144] the created container "no-preload-349453" has a running status.
	I0916 11:08:36.047269  264436 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa...
	I0916 11:08:36.201725  264436 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:08:36.232588  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:08:36.251268  264436 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:08:36.251296  264436 kic_runner.go:114] Args: [docker exec --privileged no-preload-349453 chown docker:docker /home/docker/.ssh/authorized_keys]
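
Note: kic.go:225 above generates a fresh RSA keypair per node and installs the public half as /home/docker/.ssh/authorized_keys inside the container (381 bytes is consistent with a single ssh-rsa line). The key-generation side, sketched with golang.org/x/crypto/ssh; file names and key size are illustrative:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Private half -> id_rsa (PEM, PKCS#1).
        priv := pem.EncodeToMemory(&pem.Block{
            Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", priv, 0600); err != nil {
            panic(err)
        }
        // Public half -> one authorized_keys line.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
            panic(err)
        }
        fmt.Println("wrote id_rsa / id_rsa.pub")
    }
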
	I0916 11:08:36.308796  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:08:36.359437  264436 machine.go:93] provisionDockerMachine start ...
	I0916 11:08:36.359543  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:36.385658  264436 main.go:141] libmachine: Using SSH client type: native
	I0916 11:08:36.385896  264436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0916 11:08:36.385910  264436 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:08:36.568192  264436 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-349453
	
	I0916 11:08:36.568220  264436 ubuntu.go:169] provisioning hostname "no-preload-349453"
	I0916 11:08:36.568291  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:36.590804  264436 main.go:141] libmachine: Using SSH client type: native
	I0916 11:08:36.591032  264436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0916 11:08:36.591049  264436 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-349453 && echo "no-preload-349453" | sudo tee /etc/hostname
	I0916 11:08:36.756044  264436 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-349453
	
	I0916 11:08:36.756141  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:36.777822  264436 main.go:141] libmachine: Using SSH client type: native
	I0916 11:08:36.778002  264436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I0916 11:08:36.778020  264436 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-349453' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-349453/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-349453' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:08:36.911965  264436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
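
Note: each "About to run SSH command" above is a fresh session on the forwarded port 33063 (Docker publishes the container's port 22 to a random localhost port). The client side, reduced to its essentials with golang.org/x/crypto/ssh; host-key checking is skipped here because the endpoint is a local, just-created container, which is an assumption of this sketch:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func run(addr, user, keyPath, cmd string) (string, error) {
        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            return "", err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local kic container only
        })
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := run("127.0.0.1:33063", "docker",
            "/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa",
            "hostname")
        fmt.Println(out, err)
    }
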
	I0916 11:08:36.911996  264436 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:08:36.912019  264436 ubuntu.go:177] setting up certificates
	I0916 11:08:36.912033  264436 provision.go:84] configureAuth start
	I0916 11:08:36.912089  264436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:08:36.932315  264436 provision.go:143] copyHostCerts
	I0916 11:08:36.932386  264436 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:08:36.932399  264436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:08:36.932471  264436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:08:36.932569  264436 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:08:36.932580  264436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:08:36.932621  264436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:08:36.932706  264436 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:08:36.932717  264436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:08:36.932753  264436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:08:36.932828  264436 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.no-preload-349453 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-349453]
	I0916 11:08:37.209883  264436 provision.go:177] copyRemoteCerts
	I0916 11:08:37.209938  264436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:08:37.209969  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:37.228662  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:08:37.329001  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:08:37.353063  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 11:08:37.377321  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 11:08:37.402804  264436 provision.go:87] duration metric: took 490.759265ms to configureAuth
	I0916 11:08:37.402834  264436 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:08:37.403023  264436 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:08:37.403037  264436 machine.go:96] duration metric: took 1.043574485s to provisionDockerMachine
	I0916 11:08:37.403043  264436 client.go:171] duration metric: took 6.81866199s to LocalClient.Create
	I0916 11:08:37.403064  264436 start.go:167] duration metric: took 6.818716316s to libmachine.API.Create "no-preload-349453"
	I0916 11:08:37.403076  264436 start.go:293] postStartSetup for "no-preload-349453" (driver="docker")
	I0916 11:08:37.403088  264436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:08:37.403140  264436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:08:37.403174  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:37.422611  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:08:37.517150  264436 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:08:37.520935  264436 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:08:37.520967  264436 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:08:37.520979  264436 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:08:37.520988  264436 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:08:37.520999  264436 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:08:37.521061  264436 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:08:37.521153  264436 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:08:37.521276  264436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:08:37.530028  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:08:37.556224  264436 start.go:296] duration metric: took 153.132782ms for postStartSetup
	I0916 11:08:37.556638  264436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:08:37.580790  264436 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/config.json ...
	I0916 11:08:37.581157  264436 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:08:37.581227  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:37.603557  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:08:37.696690  264436 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:08:37.700950  264436 start.go:128] duration metric: took 7.118902099s to createHost
	I0916 11:08:37.700981  264436 start.go:83] releasing machines lock for "no-preload-349453", held for 7.119184519s
	I0916 11:08:37.701048  264436 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:08:37.719562  264436 ssh_runner.go:195] Run: cat /version.json
	I0916 11:08:37.719628  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:37.719633  264436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:08:37.719749  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:08:37.738079  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:08:37.739424  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:08:37.834189  264436 ssh_runner.go:195] Run: systemctl --version
	I0916 11:08:37.922817  264436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:08:37.927917  264436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:08:37.952584  264436 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:08:37.952658  264436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:08:37.983959  264436 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 11:08:37.983991  264436 start.go:495] detecting cgroup driver to use...
	I0916 11:08:37.984035  264436 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:08:37.984084  264436 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:08:37.996632  264436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:08:38.008687  264436 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:08:38.008749  264436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:08:38.022160  264436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:08:38.035383  264436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:08:38.121722  264436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:08:38.206523  264436 docker.go:233] disabling docker service ...
	I0916 11:08:38.206610  264436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:08:38.227941  264436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:08:38.240500  264436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:08:38.314496  264436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:08:38.393479  264436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
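
Note: retiring a runtime the cluster must not use takes four steps per unit, exactly as the sequence above shows for cri-docker and then docker: stop the socket, stop the service, disable the socket, mask the service, then verify with is-active. A compact loop over those commands (local exec stands in for the remote runner; failures are tolerated because a unit may simply not exist on the image):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        steps := [][]string{
            {"systemctl", "stop", "-f", "cri-docker.socket"},
            {"systemctl", "stop", "-f", "cri-docker.service"},
            {"systemctl", "disable", "cri-docker.socket"},
            {"systemctl", "mask", "cri-docker.service"},
            {"systemctl", "stop", "-f", "docker.socket"},
            {"systemctl", "stop", "-f", "docker.service"},
            {"systemctl", "disable", "docker.socket"},
            {"systemctl", "mask", "docker.service"},
        }
        for _, s := range steps {
            // Tolerate failures: the unit may not be present on this image.
            if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
                fmt.Printf("%v: %v (%s)\n", s, err, out)
            }
        }
    }
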
	I0916 11:08:38.405005  264436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:08:38.420776  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:08:38.431358  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:08:38.441360  264436 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:08:38.441418  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:08:38.451477  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:08:38.461117  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:08:38.470893  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:08:38.481242  264436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:08:38.490694  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:08:38.500709  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:08:38.510200  264436 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 11:08:38.519856  264436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:08:38.530496  264436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:08:38.539419  264436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:08:38.617864  264436 ssh_runner.go:195] Run: sudo systemctl restart containerd
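
Note: the sed runs above rewrite /etc/containerd/config.toml in place: pin sandbox_image to registry.k8s.io/pause:3.10, force SystemdCgroup = false to match the detected cgroupfs driver, and migrate v1 runtime names to io.containerd.runc.v2, before daemon-reload and a containerd restart. The two key rewrites expressed as Go regexps over the same file (a sketch of the sed logic, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/containerd/config.toml"
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println(err)
            return
        }
        // Mirror the sed expressions from the log, preserving indentation.
        data = regexp.MustCompile(`(?m)^([ \t]*)sandbox_image = .*$`).
            ReplaceAll(data, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10"`))
        data = regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`).
            ReplaceAll(data, []byte(`${1}SystemdCgroup = false`))
        if err := os.WriteFile(path, data, 0644); err != nil {
            fmt.Println(err)
        }
    }
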
	I0916 11:08:38.714406  264436 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:08:38.714480  264436 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:08:38.718630  264436 start.go:563] Will wait 60s for crictl version
	I0916 11:08:38.718678  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:38.722108  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:08:38.756823  264436 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:08:38.756917  264436 ssh_runner.go:195] Run: containerd --version
	I0916 11:08:38.780335  264436 ssh_runner.go:195] Run: containerd --version
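
Note: after the restart, start.go waits up to 60s for /run/containerd/containerd.sock to appear before probing crictl and containerd versions. The wait is just stat-with-deadline; a minimal equivalent:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for path until it exists or timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(200 * time.Millisecond)
        }
        return fmt.Errorf("%s did not appear within %v", path, timeout)
    }

    func main() {
        fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
    }
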
	I0916 11:08:38.807827  264436 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:08:37.630791  260870 out.go:235]   - Generating certificates and keys ...
	I0916 11:08:37.630901  260870 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:08:37.630988  260870 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:08:37.916130  260870 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:08:38.019360  260870 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:08:38.158112  260870 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:08:38.636583  260870 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:08:39.235249  260870 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:08:39.235559  260870 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-371039] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:08:39.445341  260870 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:08:39.445561  260870 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-371039] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:08:39.651806  260870 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:08:39.784722  260870 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:08:39.962483  260870 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:08:39.962681  260870 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:08:38.809241  264436 cli_runner.go:164] Run: docker network inspect no-preload-349453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:08:38.826659  264436 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0916 11:08:38.830468  264436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:08:38.840961  264436 kubeadm.go:883] updating cluster {Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:08:38.841074  264436 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:08:38.841123  264436 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:08:38.880915  264436 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I0916 11:08:38.880944  264436 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:08:38.881004  264436 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:38.881044  264436 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:38.881075  264436 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:38.881092  264436 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0916 11:08:38.881101  264436 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:38.881114  264436 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:38.881057  264436 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:38.881079  264436 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:38.882295  264436 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:38.882294  264436 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:38.882392  264436 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0916 11:08:38.882555  264436 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:38.882579  264436 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:38.882584  264436 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:38.882604  264436 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:38.882640  264436 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.057574  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns/coredns:v1.11.3" and sha "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6"
	I0916 11:08:39.057644  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:39.079273  264436 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I0916 11:08:39.079331  264436 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:39.079378  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.082866  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:39.087405  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.31.1" and sha "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561"
	I0916 11:08:39.087451  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.10" and sha "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"
	I0916 11:08:39.087473  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.087504  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.10
	I0916 11:08:39.098221  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.31.1" and sha "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee"
	I0916 11:08:39.098303  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:39.099842  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.31.1" and sha "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b"
	I0916 11:08:39.099923  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:39.104576  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.31.1" and sha "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1"
	I0916 11:08:39.104653  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:39.112051  264436 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.5.15-0" and sha "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4"
	I0916 11:08:39.112113  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:39.134734  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:39.134733  264436 cache_images.go:116] "registry.k8s.io/pause:3.10" needs transfer: "registry.k8s.io/pause:3.10" does not exist at hash "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136" in container runtime
	I0916 11:08:39.134813  264436 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I0916 11:08:39.134858  264436 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I0916 11:08:39.134908  264436 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I0916 11:08:39.134931  264436 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:39.134948  264436 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I0916 11:08:39.134970  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.134979  264436 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:39.134864  264436 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.135036  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.135077  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.134913  264436 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:39.135127  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.134827  264436 cri.go:218] Removing image: registry.k8s.io/pause:3.10
	I0916 11:08:39.135203  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.143907  264436 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I0916 11:08:39.143963  264436 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:39.144023  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:39.169982  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I0916 11:08:39.170019  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:39.170040  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:39.170093  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:08:39.170098  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.170142  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:39.170202  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:39.354583  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I0916 11:08:39.354683  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:08:39.354784  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:08:39.354865  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:39.354955  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:39.355274  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:39.355389  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.355478  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:39.541651  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.11.3: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.11.3': No such file or directory
	I0916 11:08:39.541683  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I0916 11:08:39.541688  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 --> /var/lib/minikube/images/coredns_v1.11.3 (18571264 bytes)
	I0916 11:08:39.541724  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.10
	I0916 11:08:39.541800  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I0916 11:08:39.541868  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I0916 11:08:39.541804  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I0916 11:08:39.541947  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I0916 11:08:39.775749  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I0916 11:08:39.775784  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10
	I0916 11:08:39.775871  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:08:39.775871  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10
	I0916 11:08:39.775955  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I0916 11:08:39.775968  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I0916 11:08:39.775918  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I0916 11:08:39.776028  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0916 11:08:39.776041  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I0916 11:08:39.776053  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:08:39.776071  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:08:39.776108  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:08:39.802405  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.31.1': No such file or directory
	I0916 11:08:39.802441  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 --> /var/lib/minikube/images/kube-apiserver_v1.31.1 (28057088 bytes)
	I0916 11:08:39.802507  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.31.1': No such file or directory
	I0916 11:08:39.802523  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 --> /var/lib/minikube/images/kube-scheduler_v1.31.1 (20187136 bytes)
	I0916 11:08:39.803116  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.31.1': No such file or directory
	I0916 11:08:39.803143  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 --> /var/lib/minikube/images/kube-proxy_v1.31.1 (30214144 bytes)
	I0916 11:08:39.824892  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10: stat -c "%s %y" /var/lib/minikube/images/pause_3.10: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10': No such file or directory
	I0916 11:08:39.824933  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 --> /var/lib/minikube/images/pause_3.10 (321024 bytes)
	I0916 11:08:39.825041  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.31.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.31.1': No such file or directory
	I0916 11:08:39.825061  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 --> /var/lib/minikube/images/kube-controller-manager_v1.31.1 (26231808 bytes)
	I0916 11:08:39.825117  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.15-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.15-0': No such file or directory
	I0916 11:08:39.825133  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 --> /var/lib/minikube/images/etcd_3.5.15-0 (56918528 bytes)
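The repeated stat/scp pairs above are minikube's cache-transfer check: stat the remote tarball by size and mtime (`stat -c "%s %y"`), and copy the cached image over only when the stat fails. A minimal Go sketch of that decision, run locally for illustration (the path and the 321024-byte size are taken from the pause_3.10 lines in the log; the helper name is made up, and the real check also compares timestamps):

// needsTransfer reports whether a cached image tarball must be copied to the
// node. Any stat failure (the log's "No such file or directory" / "Process
// exited with status 1" pattern) or a size mismatch forces the scp.
package main

import (
	"fmt"
	"os"
)

func needsTransfer(remotePath string, wantSize int64) bool {
	fi, err := os.Stat(remotePath)
	if err != nil {
		return true // missing or unreadable: transfer
	}
	return fi.Size() != wantSize // size mismatch: transfer
}

func main() {
	fmt.Println(needsTransfer("/var/lib/minikube/images/pause_3.10", 321024))
}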
	I0916 11:08:39.959272  264436 containerd.go:285] Loading image: /var/lib/minikube/images/pause_3.10
	I0916 11:08:39.959408  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.10
	I0916 11:08:40.023367  264436 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0916 11:08:40.023457  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:40.164705  264436 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0916 11:08:40.164748  264436 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:40.164791  264436 ssh_runner.go:195] Run: which crictl
	I0916 11:08:40.164996  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 from cache
	I0916 11:08:40.165039  264436 containerd.go:285] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:08:40.165080  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.11.3
	I0916 11:08:40.197926  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:40.241204  260870 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:08:40.317576  260870 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:08:40.426492  260870 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:08:40.596293  260870 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:08:40.608073  260870 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:08:40.609253  260870 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:08:40.609315  260870 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:08:40.694187  260870 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:08:37.733427  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:37.733912  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:08:38.232918  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:40.696082  260870 out.go:235]   - Booting up control plane ...
	I0916 11:08:40.696191  260870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:08:40.702656  260870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:08:40.704099  260870 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:08:40.705275  260870 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:08:40.708468  260870 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0916 11:08:41.423354  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_v1.11.3: (1.25824846s)
	I0916 11:08:41.423382  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I0916 11:08:41.423399  264436 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.225442266s)
	I0916 11:08:41.423474  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:41.423406  264436 containerd.go:285] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:08:41.423554  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.31.1
	I0916 11:08:41.458101  264436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:08:42.482721  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.31.1: (1.059134257s)
	I0916 11:08:42.482753  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I0916 11:08:42.482774  264436 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0916 11:08:42.482776  264436 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.024643374s)
	I0916 11:08:42.482817  264436 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 11:08:42.482820  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.31.1
	I0916 11:08:42.482894  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:43.495795  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.012950946s)
	I0916 11:08:43.495827  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I0916 11:08:43.495859  264436 containerd.go:285] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:08:43.495876  264436 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.01296017s)
	I0916 11:08:43.495905  264436 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0916 11:08:43.495919  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I0916 11:08:43.495923  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I0916 11:08:44.472580  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I0916 11:08:44.472626  264436 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:08:44.472679  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.31.1
	I0916 11:08:43.233973  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:43.234020  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:45.540795  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.31.1: (1.068091792s)
	I0916 11:08:45.540818  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I0916 11:08:45.540840  264436 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:08:45.540887  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.15-0
	I0916 11:08:47.901181  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.5.15-0: (2.360264084s)
	I0916 11:08:47.901218  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I0916 11:08:47.901243  264436 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:47.901300  264436 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:08:48.984630  264436 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (1.083298899s)
	I0916 11:08:48.984663  264436 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0916 11:08:48.984689  264436 cache_images.go:123] Successfully loaded all cached images
	I0916 11:08:48.984695  264436 cache_images.go:92] duration metric: took 10.103732508s to LoadCachedImages
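Once a tarball is on the node, it is imported into containerd's k8s.io namespace, the namespace the kubelet's CRI connection reads images from, which is why every ctr command in the log passes -n=k8s.io. A minimal sketch of that import step as a Go wrapper around the same command (command and path are from the log; the wrapper itself is illustrative):

// loadCachedImage imports one tarball into containerd's k8s.io namespace,
// mirroring the log's `sudo ctr -n=k8s.io images import <tar>` per image.
package main

import (
	"fmt"
	"os/exec"
)

func loadCachedImage(tar string) error {
	out, err := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", tar).CombinedOutput()
	if err != nil {
		return fmt.Errorf("import %s: %v: %s", tar, err, out)
	}
	return nil
}

func main() {
	if err := loadCachedImage("/var/lib/minikube/images/pause_3.10"); err != nil {
		fmt.Println(err)
	}
}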
	I0916 11:08:48.984709  264436 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.31.1 containerd true true} ...
	I0916 11:08:48.984835  264436 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-349453 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:08:48.984901  264436 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:08:49.032116  264436 cni.go:84] Creating CNI manager for ""
	I0916 11:08:49.032193  264436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:08:49.032211  264436 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:08:49.032240  264436 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-349453 NodeName:no-preload-349453 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:08:49.032400  264436 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-349453"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
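The kubeadm config above is rendered from the options struct logged just before it and is written to /var/tmp/minikube/kubeadm.yaml.new further down, then fed to kubeadm init. A toy sketch of that render step using text/template (the template body and field names here are assumptions for illustration, not minikube's real template):

// render a fragment of a kubeadm ClusterConfiguration from a value map,
// the general shape of how the "kubeadm config:" block above is produced.
package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: {{.Endpoint}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	err := t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.1",
		"Endpoint":          "control-plane.minikube.internal:8443",
		"PodSubnet":         "10.244.0.0/16",
		"ServiceSubnet":     "10.96.0.0/12",
	})
	if err != nil {
		panic(err)
	}
}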
	
	I0916 11:08:49.032472  264436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:08:49.044890  264436 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I0916 11:08:49.045024  264436 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I0916 11:08:49.056347  264436 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I0916 11:08:49.056466  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I0916 11:08:49.056673  264436 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I0916 11:08:49.057166  264436 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I0916 11:08:49.066816  264436 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I0916 11:08:49.066853  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I0916 11:08:49.943393  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I0916 11:08:49.947835  264436 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I0916 11:08:49.947869  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I0916 11:08:50.181687  264436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:08:50.194184  264436 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I0916 11:08:50.197931  264436 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I0916 11:08:50.197959  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I0916 11:08:50.395973  264436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:08:50.404517  264436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0916 11:08:50.422561  264436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:08:50.445036  264436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0916 11:08:50.465483  264436 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:08:50.470084  264436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:08:50.482485  264436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:08:50.547958  264436 ssh_runner.go:195] Run: sudo systemctl start kubelet
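The kubectl/kubeadm/kubelet downloads above pin integrity with a `?checksum=file:<url>.sha256` query string, which is hashicorp/go-getter's convention: the client fetches the .sha256 file and verifies the downloaded binary against it before writing. A sketch of the same call, assuming go-getter v1 (the destination path is arbitrary):

// fetch kubectl with a checksum pinned to the published .sha256 file;
// GetFile fails if the hash does not match.
package main

import (
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	src := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl" +
		"?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256"
	if err := getter.GetFile("/tmp/kubectl", src); err != nil {
		log.Fatal(err)
	}
}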
	I0916 11:08:50.563251  264436 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453 for IP: 192.168.94.2
	I0916 11:08:50.563273  264436 certs.go:194] generating shared ca certs ...
	I0916 11:08:50.563298  264436 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:50.563456  264436 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:08:50.563505  264436 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:08:50.563517  264436 certs.go:256] generating profile certs ...
	I0916 11:08:50.563627  264436 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.key
	I0916 11:08:50.563648  264436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt with IP's: []
	I0916 11:08:50.618540  264436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt ...
	I0916 11:08:50.618569  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: {Name:mk337746002b2836356861444fb583afa57b1d7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:50.618748  264436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.key ...
	I0916 11:08:50.618771  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.key: {Name:mk9c5aa9e774198cfcb02ec0058188ab8edfaed1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:50.618845  264436 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key.85f7849d
	I0916 11:08:50.618860  264436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt.85f7849d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0916 11:08:50.875559  264436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt.85f7849d ...
	I0916 11:08:50.875598  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt.85f7849d: {Name:mk481f9ec5bc5101be906a4ddce3a071783b2c96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:50.875829  264436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key.85f7849d ...
	I0916 11:08:50.875849  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key.85f7849d: {Name:mk4e723f8d9625ad4b4558240421f0210105e957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:50.875954  264436 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt.85f7849d -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt
	I0916 11:08:50.876051  264436 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key.85f7849d -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key
	I0916 11:08:50.876127  264436 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key
	I0916 11:08:50.876147  264436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.crt with IP's: []
	I0916 11:08:51.303691  264436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.crt ...
	I0916 11:08:51.303759  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.crt: {Name:mk2a9791d1a10304f96ba7678b9c3811d30b3fb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:08:51.303945  264436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key ...
	I0916 11:08:51.303961  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key: {Name:mka24f10f8b232c8b84bdf799b45958f97693ca9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
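The certs.go/crypto.go lines above mint the three profile certificates: a "minikube-user" client cert, an apiserver serving cert signed for the SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2] (10.96.0.1 being the in-cluster kubernetes Service IP, 192.168.94.2 the node IP), and an aggregator proxy-client cert. A self-contained Go sketch of the serving-cert step with those IP SANs; it creates a throwaway CA rather than reading minikube's ca.key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// throwaway CA standing in for minikubeCA
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}

	// apiserver serving cert with the IP SANs from the log
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // ~26280h, per the config dump
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}

	der, err := x509.CreateCertificate(rand.Reader, leaf, ca, &leafKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}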
	I0916 11:08:51.304131  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:08:51.304175  264436 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:08:51.304185  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:08:51.304215  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:08:51.304238  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:08:51.304268  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:08:51.304303  264436 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:08:51.304858  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:08:51.329508  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:08:51.353418  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:08:51.377163  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:08:51.401708  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:08:51.428477  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:08:51.452154  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:08:51.475382  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:08:51.498240  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:08:51.521123  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:08:51.543771  264436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:08:51.574546  264436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:08:51.591543  264436 ssh_runner.go:195] Run: openssl version
	I0916 11:08:51.597060  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:08:51.606641  264436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:51.610471  264436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:51.610524  264436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:08:51.617457  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:08:51.626770  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:08:51.636218  264436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:08:51.640059  264436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:08:51.640119  264436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:08:51.646939  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 11:08:51.657727  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:08:51.667722  264436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:08:51.671519  264436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:08:51.671587  264436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:08:51.678428  264436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
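Each CA the node should trust is copied into /usr/share/ca-certificates and symlinked as <subject-hash>.0 under /etc/ssl/certs, OpenSSL's hashed-directory lookup scheme; that is what the openssl x509 -hash / ln -fs pairs above implement (b5213941.0 is minikubeCA.pem's subject hash). A Go sketch of one rehash step, shelling out to openssl exactly as the log does:

// rehash computes a certificate's OpenSSL subject hash and links <hash>.0
// into the certs directory so hashed-directory lookups can find the CA.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func rehash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := fmt.Sprintf("%s/%s.0", certsDir, strings.TrimSpace(string(out)))
	os.Remove(link) // mimic ln -fs: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := rehash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}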
	I0916 11:08:51.687852  264436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:08:51.691310  264436 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:08:51.691367  264436 kubeadm.go:392] StartCluster: {Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:08:51.691439  264436 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:08:51.691486  264436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:08:51.724621  264436 cri.go:89] found id: ""
	I0916 11:08:51.724695  264436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:08:51.734987  264436 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:08:51.744004  264436 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:08:51.744075  264436 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:08:51.755258  264436 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:08:51.755283  264436 kubeadm.go:157] found existing configuration files:
	
	I0916 11:08:51.755333  264436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:08:51.768412  264436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:08:51.768474  264436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:08:51.777349  264436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:08:51.785929  264436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:08:51.786003  264436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:08:51.794532  264436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:08:51.803220  264436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:08:51.803342  264436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:08:51.812093  264436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:08:51.820809  264436 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:08:51.820873  264436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:08:51.829429  264436 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:08:51.865931  264436 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:08:51.865989  264436 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:08:51.885115  264436 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:08:51.885236  264436 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:08:51.885298  264436 kubeadm.go:310] OS: Linux
	I0916 11:08:51.885387  264436 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:08:51.885459  264436 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:08:51.885534  264436 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:08:51.885607  264436 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:08:51.885679  264436 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:08:51.885763  264436 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:08:51.885838  264436 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:08:51.885903  264436 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:08:51.885972  264436 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:08:51.941753  264436 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:08:51.941901  264436 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:08:51.942020  264436 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:08:51.947090  264436 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:08:48.234717  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:48.234765  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
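The interleaved 254463 lines are a different profile polling its apiserver: GET https://192.168.76.2:8443/healthz on a short timeout, logging "stopped:" on connection refused or client timeout, and retrying until the endpoint answers. A rough Go sketch of that loop (InsecureSkipVerify stands in for the real CA handling, and the retry interval is a guess):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				fmt.Println("apiserver healthy")
				return
			}
		} else {
			fmt.Println("stopped:", err) // matches the log's "stopped:" lines
		}
		time.Sleep(500 * time.Millisecond)
	}
}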
	I0916 11:08:51.949776  264436 out.go:235]   - Generating certificates and keys ...
	I0916 11:08:51.949877  264436 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:08:51.949940  264436 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:08:52.122699  264436 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:08:52.249550  264436 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:08:52.352028  264436 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:08:52.445139  264436 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:08:52.652691  264436 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:08:52.652923  264436 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-349453] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0916 11:08:52.751947  264436 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:08:52.752095  264436 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-349453] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0916 11:08:52.932640  264436 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:08:53.294351  264436 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:08:53.505338  264436 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:08:53.505405  264436 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:08:53.576935  264436 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:08:53.665445  264436 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:08:53.781881  264436 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:08:54.142742  264436 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:08:54.452184  264436 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:08:54.452959  264436 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:08:54.456552  264436 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:08:55.210981  260870 kubeadm.go:310] [apiclient] All control plane components are healthy after 14.502545 seconds
	I0916 11:08:55.211125  260870 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:08:54.460019  264436 out.go:235]   - Booting up control plane ...
	I0916 11:08:54.460188  264436 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:08:54.460277  264436 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:08:54.460605  264436 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:08:54.473017  264436 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:08:54.480142  264436 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:08:54.480269  264436 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:08:54.584649  264436 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:08:54.584816  264436 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:08:55.085943  264436 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.441739ms
	I0916 11:08:55.086058  264436 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:08:55.222604  260870 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:08:55.747349  260870 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:08:55.747575  260870 kubeadm.go:310] [mark-control-plane] Marking the node old-k8s-version-371039 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
	I0916 11:08:56.255515  260870 kubeadm.go:310] [bootstrap-token] Using token: 7575lv.7anw6bs48k43jhje
	I0916 11:08:56.257005  260870 out.go:235]   - Configuring RBAC rules ...
	I0916 11:08:56.257190  260870 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:08:56.261944  260870 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:08:56.268917  260870 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:08:56.271036  260870 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:08:56.273203  260870 kubeadm.go:310] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:08:56.275371  260870 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:08:56.282938  260870 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:08:56.505496  260870 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:08:56.674523  260870 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:08:56.675435  260870 kubeadm.go:310] 
	I0916 11:08:56.675511  260870 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:08:56.675550  260870 kubeadm.go:310] 
	I0916 11:08:56.675666  260870 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:08:56.675679  260870 kubeadm.go:310] 
	I0916 11:08:56.675769  260870 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:08:56.675860  260870 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:08:56.675953  260870 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:08:56.675963  260870 kubeadm.go:310] 
	I0916 11:08:56.676057  260870 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:08:56.676076  260870 kubeadm.go:310] 
	I0916 11:08:56.676146  260870 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:08:56.676157  260870 kubeadm.go:310] 
	I0916 11:08:56.676232  260870 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:08:56.676346  260870 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:08:56.676449  260870 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:08:56.676459  260870 kubeadm.go:310] 
	I0916 11:08:56.676577  260870 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:08:56.676690  260870 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:08:56.676702  260870 kubeadm.go:310] 
	I0916 11:08:56.676805  260870 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7575lv.7anw6bs48k43jhje \
	I0916 11:08:56.676964  260870 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:08:56.676999  260870 kubeadm.go:310]     --control-plane 
	I0916 11:08:56.677008  260870 kubeadm.go:310] 
	I0916 11:08:56.677141  260870 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:08:56.677154  260870 kubeadm.go:310] 
	I0916 11:08:56.677267  260870 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7575lv.7anw6bs48k43jhje \
	I0916 11:08:56.677407  260870 kubeadm.go:310]     --discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 11:08:56.679220  260870 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:08:56.679366  260870 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:08:56.679403  260870 cni.go:84] Creating CNI manager for ""
	I0916 11:08:56.679418  260870 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:08:56.681153  260870 out.go:177] * Configuring CNI (Container Networking Interface) ...
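The two join commands printed above pin the cluster CA with a --discovery-token-ca-cert-hash. For reference, that value is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info; a minimal Go sketch that reproduces it (assuming the CA sits at the conventional /etc/kubernetes/pki/ca.crt; this is an illustration, not minikube or kubeadm source):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm pins the SHA-256 of the DER-encoded Subject Public Key Info.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }

A joining node can recompute this out of band and compare it against the hash embedded in the join command before trusting the API server it is bootstrapping from.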
	I0916 11:08:53.235786  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:53.235837  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:00.087505  264436 kubeadm.go:310] [api-check] The API server is healthy after 5.001488031s
	I0916 11:09:00.098392  264436 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:09:00.110362  264436 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:09:00.128932  264436 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:09:00.129187  264436 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-349453 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:09:00.137036  264436 kubeadm.go:310] [bootstrap-token] Using token: 7hha87.1fmccqtk5mel1d08
	I0916 11:08:56.682324  260870 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:08:56.686207  260870 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.20.0/kubectl ...
	I0916 11:08:56.686225  260870 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:08:56.703974  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:08:57.087171  260870 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:08:57.087286  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:57.087327  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-371039 minikube.k8s.io/updated_at=2024_09_16T11_08_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=old-k8s-version-371039 minikube.k8s.io/primary=true
	I0916 11:08:57.094951  260870 ops.go:34] apiserver oom_adj: -16
	I0916 11:08:57.203677  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:57.703899  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:58.204371  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:58.703936  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:59.203918  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:08:59.704356  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:00.204155  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:00.138673  264436 out.go:235]   - Configuring RBAC rules ...
	I0916 11:09:00.138843  264436 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:09:00.143189  264436 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:09:00.149188  264436 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:09:00.151958  264436 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:09:00.154792  264436 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:09:00.158528  264436 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:09:00.493607  264436 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:09:00.933899  264436 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:09:01.494256  264436 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:09:01.495468  264436 kubeadm.go:310] 
	I0916 11:09:01.495563  264436 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:09:01.495578  264436 kubeadm.go:310] 
	I0916 11:09:01.495691  264436 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:09:01.495707  264436 kubeadm.go:310] 
	I0916 11:09:01.495784  264436 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:09:01.495872  264436 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:09:01.495955  264436 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:09:01.495973  264436 kubeadm.go:310] 
	I0916 11:09:01.496023  264436 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:09:01.496031  264436 kubeadm.go:310] 
	I0916 11:09:01.496072  264436 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:09:01.496104  264436 kubeadm.go:310] 
	I0916 11:09:01.496187  264436 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:09:01.496302  264436 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:09:01.496394  264436 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:09:01.496403  264436 kubeadm.go:310] 
	I0916 11:09:01.496503  264436 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:09:01.496612  264436 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:09:01.496625  264436 kubeadm.go:310] 
	I0916 11:09:01.496698  264436 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7hha87.1fmccqtk5mel1d08 \
	I0916 11:09:01.496843  264436 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:09:01.496894  264436 kubeadm.go:310] 	--control-plane 
	I0916 11:09:01.496904  264436 kubeadm.go:310] 
	I0916 11:09:01.497000  264436 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:09:01.497009  264436 kubeadm.go:310] 
	I0916 11:09:01.497108  264436 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7hha87.1fmccqtk5mel1d08 \
	I0916 11:09:01.497239  264436 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 11:09:01.499128  264436 kubeadm.go:310] W0916 11:08:51.862879    1806 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:09:01.499457  264436 kubeadm.go:310] W0916 11:08:51.863553    1806 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:09:01.499768  264436 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:09:01.499953  264436 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:09:01.499988  264436 cni.go:84] Creating CNI manager for ""
	I0916 11:09:01.500000  264436 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:09:01.501798  264436 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:08:58.236473  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:08:58.236522  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:58.646312  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:53610->192.168.76.2:8443: read: connection reset by peer
	I0916 11:08:58.733440  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:58.733905  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:08:59.233571  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:59.234025  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:08:59.733738  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:08:59.734160  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:00.232786  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:00.233148  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:00.732769  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:00.733156  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:01.232818  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:01.233245  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:01.732791  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:01.733233  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:02.232778  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:02.233205  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
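The 254463 process above is retrying the apiserver healthz endpoint at roughly 500ms intervals while that cluster's control plane restarts; every probe fails with "connection refused" until kube-apiserver binds the port again. A minimal sketch of such a poll loop (a hypothetical helper, not minikube's actual api_server.go; it assumes the host does not trust the apiserver's serving certificate, as during bootstrap):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second, // each probe fails fast, like the Client.Timeout errors in the log
    		Transport: &http.Transport{
    			// assumption: skip verification of the apiserver's self-managed cert
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between attempts
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }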
	I0916 11:09:00.704323  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:01.204668  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:01.703878  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:02.204580  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:02.704540  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:03.203853  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:03.703804  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:04.204076  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:04.703894  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:05.204018  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:01.503075  264436 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:09:01.507256  264436 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:09:01.507277  264436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:09:01.524545  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:09:01.727673  264436 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:09:01.727825  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-349453 minikube.k8s.io/updated_at=2024_09_16T11_09_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=no-preload-349453 minikube.k8s.io/primary=true
	I0916 11:09:01.728021  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:01.738577  264436 ops.go:34] apiserver oom_adj: -16
	I0916 11:09:01.822449  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:02.323484  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:02.822776  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:03.322974  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:03.823263  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:04.323195  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:04.822824  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:05.323453  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:05.822962  264436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:05.891978  264436 kubeadm.go:1113] duration metric: took 4.164004406s to wait for elevateKubeSystemPrivileges
	I0916 11:09:05.892013  264436 kubeadm.go:394] duration metric: took 14.200646498s to StartCluster
	I0916 11:09:05.892048  264436 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:05.892129  264436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:09:05.895884  264436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:05.896177  264436 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:09:05.896353  264436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:09:05.896448  264436 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:09:05.896535  264436 addons.go:69] Setting storage-provisioner=true in profile "no-preload-349453"
	I0916 11:09:05.896553  264436 addons.go:69] Setting default-storageclass=true in profile "no-preload-349453"
	I0916 11:09:05.896597  264436 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:09:05.896617  264436 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-349453"
	I0916 11:09:05.896562  264436 addons.go:234] Setting addon storage-provisioner=true in "no-preload-349453"
	I0916 11:09:05.896721  264436 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:05.896991  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:05.897173  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:05.899195  264436 out.go:177] * Verifying Kubernetes components...
	I0916 11:09:05.900632  264436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:05.920822  264436 addons.go:234] Setting addon default-storageclass=true in "no-preload-349453"
	I0916 11:09:05.920872  264436 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:05.921227  264436 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:05.922853  264436 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:09:05.924578  264436 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:05.924598  264436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:09:05.924661  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:05.953061  264436 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:05.953083  264436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:09:05.953143  264436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:05.957772  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:05.975394  264436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:06.034923  264436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:09:06.040755  264436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:09:06.143479  264436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:06.240584  264436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:06.536048  264436 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0916 11:09:06.539111  264436 node_ready.go:35] waiting up to 6m0s for node "no-preload-349453" to be "Ready" ...
	I0916 11:09:06.547015  264436 node_ready.go:49] node "no-preload-349453" has status "Ready":"True"
	I0916 11:09:06.547042  264436 node_ready.go:38] duration metric: took 7.901547ms for node "no-preload-349453" to be "Ready" ...
	I0916 11:09:06.547095  264436 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:09:06.555838  264436 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:06.932212  264436 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
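The long one-liner at 11:09:06.034923 above rewrites the coredns ConfigMap with sed and pipes the result back through kubectl replace. After it runs, the Corefile carries a hosts stanza ahead of the forward plugin, approximately as below (surrounding plugins elided):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.94.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }

This is what the "host record injected into CoreDNS's ConfigMap" line refers to: cluster pods can then resolve host.minikube.internal to the host's address on the cluster network.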
	I0916 11:09:02.733678  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:02.734077  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:03.233262  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:03.233718  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:03.733114  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:03.733576  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:04.233410  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:04.233881  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:04.733574  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:04.733949  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:05.233532  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:05.233933  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:05.733512  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:05.733953  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:06.233584  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:06.234044  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:06.733637  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:06.734106  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:07.233844  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:07.234332  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:05.703927  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:06.204762  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:06.704365  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:07.204577  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:07.704243  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:08.204636  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:08.703906  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:09.204497  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:09.704711  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:10.204447  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:06.933504  264436 addons.go:510] duration metric: took 1.037058154s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:09:07.040392  264436 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-349453" context rescaled to 1 replicas
	I0916 11:09:08.563999  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
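Both clusters now sit in pod_ready polling loops, re-checking their coredns pods until the Ready condition flips to True. The check behind these "Ready":"False" lines boils down to reading the pod's PodReady condition; a minimal client-go sketch (a hypothetical helper, not minikube's pod_ready.go):

    package readiness

    import (
    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's PodReady condition is True,
    // which is the value the "Ready":"True"/"False" log lines print.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }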
	I0916 11:09:10.703866  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:11.204455  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:11.703883  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:12.204359  260870 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.20.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:09:12.332820  260870 kubeadm.go:1113] duration metric: took 15.245596472s to wait for elevateKubeSystemPrivileges
	I0916 11:09:12.332850  260870 kubeadm.go:394] duration metric: took 35.226361301s to StartCluster
	I0916 11:09:12.332867  260870 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:12.332941  260870 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:09:12.334200  260870 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:12.334409  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:09:12.334422  260870 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:09:12.334489  260870 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:09:12.334595  260870 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-371039"
	I0916 11:09:12.334614  260870 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-371039"
	I0916 11:09:12.334633  260870 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:09:12.334646  260870 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-371039"
	I0916 11:09:12.334621  260870 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-371039"
	I0916 11:09:12.334766  260870 host.go:66] Checking if "old-k8s-version-371039" exists ...
	I0916 11:09:12.335022  260870 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:09:12.335157  260870 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:09:12.336297  260870 out.go:177] * Verifying Kubernetes components...
	I0916 11:09:12.337718  260870 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:12.357086  260870 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:09:07.733738  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:07.734147  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:08.233742  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:08.234148  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:08.733771  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:08.734260  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:09.233808  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:12.358665  260870 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:12.358689  260870 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:09:12.358754  260870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:09:12.359729  260870 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-371039"
	I0916 11:09:12.359827  260870 host.go:66] Checking if "old-k8s-version-371039" exists ...
	I0916 11:09:12.360343  260870 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:09:12.383998  260870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/old-k8s-version-371039/id_rsa Username:docker}
	I0916 11:09:12.389783  260870 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:12.389805  260870 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:09:12.389868  260870 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:09:12.408070  260870 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33058 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/old-k8s-version-371039/id_rsa Username:docker}
	I0916 11:09:12.546475  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.20.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:09:12.553288  260870 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:09:12.647594  260870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:12.648622  260870 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:13.259944  260870 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0916 11:09:13.261675  260870 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-371039" to be "Ready" ...
	I0916 11:09:13.325576  260870 node_ready.go:49] node "old-k8s-version-371039" has status "Ready":"True"
	I0916 11:09:13.325600  260870 node_ready.go:38] duration metric: took 63.887515ms for node "old-k8s-version-371039" to be "Ready" ...
	I0916 11:09:13.325612  260870 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:09:13.335290  260870 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-78djj" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:13.528295  260870 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0916 11:09:13.530325  260870 addons.go:510] duration metric: took 1.195834763s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0916 11:09:13.764048  260870 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-371039" context rescaled to 1 replicas
	I0916 11:09:11.062167  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:13.062678  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:15.063223  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:14.234494  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:09:14.234540  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:15.342274  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:17.841129  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:17.560912  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:19.562845  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:19.235598  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:09:19.235680  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:09:19.235754  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:09:19.269692  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:19.269715  254463 cri.go:89] found id: "78f1386658cccbd9f65d9408afecb31c6db52bb84d72237168313ed1e03541f7"
	I0916 11:09:19.269720  254463 cri.go:89] found id: ""
	I0916 11:09:19.269729  254463 logs.go:276] 2 containers: [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd 78f1386658cccbd9f65d9408afecb31c6db52bb84d72237168313ed1e03541f7]
	I0916 11:09:19.269789  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:19.273402  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:19.276885  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:09:19.276963  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:09:19.308719  254463 cri.go:89] found id: ""
	I0916 11:09:19.308746  254463 logs.go:276] 0 containers: []
	W0916 11:09:19.308755  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:09:19.308771  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:09:19.308830  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:09:19.342334  254463 cri.go:89] found id: ""
	I0916 11:09:19.342361  254463 logs.go:276] 0 containers: []
	W0916 11:09:19.342372  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:09:19.342379  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:09:19.342437  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:09:19.375316  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:19.375337  254463 cri.go:89] found id: ""
	I0916 11:09:19.375343  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:09:19.375391  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:19.378835  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:09:19.378904  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:09:19.411345  254463 cri.go:89] found id: ""
	I0916 11:09:19.411370  254463 logs.go:276] 0 containers: []
	W0916 11:09:19.411378  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:09:19.411384  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:09:19.411441  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:09:19.445048  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:19.445068  254463 cri.go:89] found id: "d134862346ef7847a5bec871679eb22f4238875f0baf507bd5b7da3db1391339"
	I0916 11:09:19.445072  254463 cri.go:89] found id: ""
	I0916 11:09:19.445079  254463 logs.go:276] 2 containers: [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9 d134862346ef7847a5bec871679eb22f4238875f0baf507bd5b7da3db1391339]
	I0916 11:09:19.445131  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:19.448637  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:19.451955  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:09:19.452028  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:09:19.485223  254463 cri.go:89] found id: ""
	I0916 11:09:19.485248  254463 logs.go:276] 0 containers: []
	W0916 11:09:19.485257  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:09:19.485263  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:09:19.485337  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:09:19.517574  254463 cri.go:89] found id: ""
	I0916 11:09:19.517608  254463 logs.go:276] 0 containers: []
	W0916 11:09:19.517618  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:09:19.517650  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:09:19.517669  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:09:19.557222  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:09:19.557264  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:09:19.594969  254463 logs.go:123] Gathering logs for kube-apiserver [78f1386658cccbd9f65d9408afecb31c6db52bb84d72237168313ed1e03541f7] ...
	I0916 11:09:19.595000  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 78f1386658cccbd9f65d9408afecb31c6db52bb84d72237168313ed1e03541f7"
	I0916 11:09:19.630078  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:09:19.630121  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:19.681369  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:09:19.681400  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:09:20.341144  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:22.840781  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:24.840812  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:22.060907  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:24.062298  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:26.841099  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:29.341391  260870 pod_ready.go:103] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:29.841004  260870 pod_ready.go:93] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:29.841028  260870 pod_ready.go:82] duration metric: took 16.505708515s for pod "coredns-74ff55c5b-78djj" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:29.841039  260870 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-lgf42" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:29.842812  260870 pod_ready.go:98] error getting pod "coredns-74ff55c5b-lgf42" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-lgf42" not found
	I0916 11:09:29.842836  260870 pod_ready.go:82] duration metric: took 1.790096ms for pod "coredns-74ff55c5b-lgf42" in "kube-system" namespace to be "Ready" ...
	E0916 11:09:29.842848  260870 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-lgf42" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-lgf42" not found
	I0916 11:09:29.842857  260870 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:26.562286  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:29.061948  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:30.186175  254463 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.504756872s)
	W0916 11:09:30.186209  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:33322->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:33322->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I0916 11:09:30.186217  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:09:30.186233  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:30.223830  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:09:30.223863  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:30.256977  254463 logs.go:123] Gathering logs for kube-controller-manager [d134862346ef7847a5bec871679eb22f4238875f0baf507bd5b7da3db1391339] ...
	I0916 11:09:30.257004  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d134862346ef7847a5bec871679eb22f4238875f0baf507bd5b7da3db1391339"
	I0916 11:09:30.292614  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:09:30.292649  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:09:30.353308  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:09:30.353345  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:09:31.848871  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:33.849476  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:31.561693  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:34.061654  264436 pod_ready.go:103] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:35.061879  264436 pod_ready.go:93] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.061902  264436 pod_ready.go:82] duration metric: took 28.506020354s for pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.061911  264436 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mvlrh" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.063656  264436 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-mvlrh" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mvlrh" not found
	I0916 11:09:35.063679  264436 pod_ready.go:82] duration metric: took 1.762521ms for pod "coredns-7c65d6cfc9-mvlrh" in "kube-system" namespace to be "Ready" ...
	E0916 11:09:35.063692  264436 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-mvlrh" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mvlrh" not found
	I0916 11:09:35.063701  264436 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.068205  264436 pod_ready.go:93] pod "etcd-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.068227  264436 pod_ready.go:82] duration metric: took 4.517527ms for pod "etcd-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.068239  264436 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.072552  264436 pod_ready.go:93] pod "kube-apiserver-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.072576  264436 pod_ready.go:82] duration metric: took 4.327352ms for pod "kube-apiserver-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.072586  264436 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.076783  264436 pod_ready.go:93] pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.076810  264436 pod_ready.go:82] duration metric: took 4.217917ms for pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.076820  264436 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n7m28" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.260357  264436 pod_ready.go:93] pod "kube-proxy-n7m28" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.260383  264436 pod_ready.go:82] duration metric: took 183.557365ms for pod "kube-proxy-n7m28" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.260393  264436 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.660221  264436 pod_ready.go:93] pod "kube-scheduler-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:35.660246  264436 pod_ready.go:82] duration metric: took 399.846457ms for pod "kube-scheduler-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:35.660257  264436 pod_ready.go:39] duration metric: took 29.113141917s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:09:35.660274  264436 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:09:35.660348  264436 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:09:35.673043  264436 api_server.go:72] duration metric: took 29.776823258s to wait for apiserver process to appear ...
	I0916 11:09:35.673068  264436 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:09:35.673092  264436 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0916 11:09:35.676860  264436 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0916 11:09:35.677763  264436 api_server.go:141] control plane version: v1.31.1
	I0916 11:09:35.677787  264436 api_server.go:131] duration metric: took 4.712796ms to wait for apiserver health ...
	I0916 11:09:35.677800  264436 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:09:35.862606  264436 system_pods.go:59] 8 kube-system pods found
	I0916 11:09:35.862640  264436 system_pods.go:61] "coredns-7c65d6cfc9-9zbwk" [427a37dd-9a56-455f-bd9e-3ee604164481] Running
	I0916 11:09:35.862646  264436 system_pods.go:61] "etcd-no-preload-349453" [11377b15-3f0d-463f-8b6f-8eae19108040] Running
	I0916 11:09:35.862651  264436 system_pods.go:61] "kindnet-qbh58" [b43e889b-3179-48c6-b6c4-e6c42131c50c] Running
	I0916 11:09:35.862655  264436 system_pods.go:61] "kube-apiserver-no-preload-349453" [06a64195-a484-412f-930b-310199ce4d80] Running
	I0916 11:09:35.862660  264436 system_pods.go:61] "kube-controller-manager-no-preload-349453" [f31864d9-a8cc-4d04-8f94-497ae381d9ca] Running
	I0916 11:09:35.862664  264436 system_pods.go:61] "kube-proxy-n7m28" [a0580caf-fdc3-483b-a5b9-f29db25b8ef6] Running
	I0916 11:09:35.862667  264436 system_pods.go:61] "kube-scheduler-no-preload-349453" [94505c46-f9fd-4bc6-8287-27c30e4696f0] Running
	I0916 11:09:35.862672  264436 system_pods.go:61] "storage-provisioner" [2f218f7f-9232-4d85-bd8d-6cdc6516c83f] Running
	I0916 11:09:35.862678  264436 system_pods.go:74] duration metric: took 184.872639ms to wait for pod list to return data ...
	I0916 11:09:35.862685  264436 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:09:36.061081  264436 default_sa.go:45] found service account: "default"
	I0916 11:09:36.061114  264436 default_sa.go:55] duration metric: took 198.421124ms for default service account to be created ...
	I0916 11:09:36.061127  264436 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:09:36.262420  264436 system_pods.go:86] 8 kube-system pods found
	I0916 11:09:36.262457  264436 system_pods.go:89] "coredns-7c65d6cfc9-9zbwk" [427a37dd-9a56-455f-bd9e-3ee604164481] Running
	I0916 11:09:36.262466  264436 system_pods.go:89] "etcd-no-preload-349453" [11377b15-3f0d-463f-8b6f-8eae19108040] Running
	I0916 11:09:36.262471  264436 system_pods.go:89] "kindnet-qbh58" [b43e889b-3179-48c6-b6c4-e6c42131c50c] Running
	I0916 11:09:36.262477  264436 system_pods.go:89] "kube-apiserver-no-preload-349453" [06a64195-a484-412f-930b-310199ce4d80] Running
	I0916 11:09:36.262483  264436 system_pods.go:89] "kube-controller-manager-no-preload-349453" [f31864d9-a8cc-4d04-8f94-497ae381d9ca] Running
	I0916 11:09:36.262489  264436 system_pods.go:89] "kube-proxy-n7m28" [a0580caf-fdc3-483b-a5b9-f29db25b8ef6] Running
	I0916 11:09:36.262494  264436 system_pods.go:89] "kube-scheduler-no-preload-349453" [94505c46-f9fd-4bc6-8287-27c30e4696f0] Running
	I0916 11:09:36.262500  264436 system_pods.go:89] "storage-provisioner" [2f218f7f-9232-4d85-bd8d-6cdc6516c83f] Running
	I0916 11:09:36.262508  264436 system_pods.go:126] duration metric: took 201.374457ms to wait for k8s-apps to be running ...
	I0916 11:09:36.262526  264436 system_svc.go:44] waiting for kubelet service to be running ...
	I0916 11:09:36.262581  264436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:09:36.276938  264436 system_svc.go:56] duration metric: took 14.399242ms WaitForService to wait for kubelet
	I0916 11:09:36.276973  264436 kubeadm.go:582] duration metric: took 30.380758589s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:09:36.277002  264436 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:09:36.460520  264436 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:09:36.460557  264436 node_conditions.go:123] node cpu capacity is 8
	I0916 11:09:36.460575  264436 node_conditions.go:105] duration metric: took 183.566872ms to run NodePressure ...
	I0916 11:09:36.460589  264436 start.go:241] waiting for startup goroutines ...
	I0916 11:09:36.460599  264436 start.go:246] waiting for cluster config update ...
	I0916 11:09:36.460617  264436 start.go:255] writing updated cluster config ...
	I0916 11:09:36.460929  264436 ssh_runner.go:195] Run: rm -f paused
	I0916 11:09:36.468132  264436 out.go:177] * Done! kubectl is now configured to use "no-preload-349453" cluster and "default" namespace by default
	E0916 11:09:36.469497  264436 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
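	
	The sequence above is minikube's standard readiness gate: wait for a kube-apiserver process to appear (via pgrep), then poll the /healthz endpoint until it answers 200 "ok". The following is a minimal Go sketch of that polling pattern only; the endpoint, timeout, and InsecureSkipVerify TLS handling are illustrative assumptions, not minikube's actual implementation.
	
	    package main
	
	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	        "time"
	    )
	
	    // waitForHealthz polls url until it returns 200 "ok" or timeout elapses.
	    func waitForHealthz(url string, timeout time.Duration) error {
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            // Assumption: skip cert verification for brevity. A real probe
	            // would trust the cluster CA instead.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            resp, err := client.Get(url)
	            if err == nil {
	                body, _ := io.ReadAll(resp.Body)
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
	                    return nil // matches the `returned 200: ok` lines above
	                }
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	    }
	
	    func main() {
	        if err := waitForHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
	            fmt.Println(err)
	        }
	    }
	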
	I0916 11:09:32.874622  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:32.875104  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:32.875158  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:09:32.875202  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:09:32.908596  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:32.908623  254463 cri.go:89] found id: ""
	I0916 11:09:32.908633  254463 logs.go:276] 1 containers: [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd]
	I0916 11:09:32.908690  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:32.912284  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:09:32.912375  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:09:32.945054  254463 cri.go:89] found id: ""
	I0916 11:09:32.945081  254463 logs.go:276] 0 containers: []
	W0916 11:09:32.945092  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:09:32.945099  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:09:32.945158  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:09:32.979410  254463 cri.go:89] found id: ""
	I0916 11:09:32.979440  254463 logs.go:276] 0 containers: []
	W0916 11:09:32.979451  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:09:32.979458  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:09:32.979527  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:09:33.012749  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:33.012772  254463 cri.go:89] found id: ""
	I0916 11:09:33.012782  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:09:33.012842  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:33.016202  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:09:33.016267  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:09:33.047808  254463 cri.go:89] found id: ""
	I0916 11:09:33.047836  254463 logs.go:276] 0 containers: []
	W0916 11:09:33.047847  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:09:33.047855  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:09:33.047904  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:09:33.082293  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:33.082318  254463 cri.go:89] found id: ""
	I0916 11:09:33.082328  254463 logs.go:276] 1 containers: [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9]
	I0916 11:09:33.082384  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:33.085741  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:09:33.085804  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:09:33.117890  254463 cri.go:89] found id: ""
	I0916 11:09:33.117912  254463 logs.go:276] 0 containers: []
	W0916 11:09:33.117920  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:09:33.117926  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:09:33.117973  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:09:33.151245  254463 cri.go:89] found id: ""
	I0916 11:09:33.151278  254463 logs.go:276] 0 containers: []
	W0916 11:09:33.151291  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:09:33.151303  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:09:33.151315  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:09:33.211351  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:09:33.211375  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:09:33.211390  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:33.246165  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:09:33.246193  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:33.300562  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:09:33.300595  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:33.333028  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:09:33.333056  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:09:33.377881  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:09:33.377926  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:09:33.414270  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:09:33.414300  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:09:33.473625  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:09:33.473667  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
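	
	Each retry of the log-gathering loop above first discovers container IDs with `sudo crictl ps -a --quiet --name=<component>` and then tails the matching container logs. Below is a small Go sketch of that discovery step; running crictl locally via os/exec rather than over SSH is an assumption made for brevity.
	
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )
	
	    // findContainers returns the IDs printed by `crictl ps -a --quiet --name=<name>`,
	    // one per line, exactly as the log-gathering loop consumes them.
	    func findContainers(name string) ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	        if err != nil {
	            return nil, fmt.Errorf("crictl ps for %q: %w", name, err)
	        }
	        var ids []string
	        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
	            if line != "" {
	                ids = append(ids, line)
	            }
	        }
	        return ids, nil
	    }
	
	    func main() {
	        for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
	            ids, err := findContainers(component)
	            if err != nil {
	                fmt.Println(err)
	                continue
	            }
	            // Mirrors the "N containers: [...]" lines in the log above.
	            fmt.Printf("%d containers for %s: %v\n", len(ids), component, ids)
	        }
	    }
	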
	I0916 11:09:35.994362  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:35.994766  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:35.994820  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:09:35.994868  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:09:36.027811  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:36.027833  254463 cri.go:89] found id: ""
	I0916 11:09:36.027843  254463 logs.go:276] 1 containers: [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd]
	I0916 11:09:36.027898  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:36.031236  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:09:36.031315  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:09:36.064048  254463 cri.go:89] found id: ""
	I0916 11:09:36.064089  254463 logs.go:276] 0 containers: []
	W0916 11:09:36.064099  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:09:36.064105  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:09:36.064161  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:09:36.097716  254463 cri.go:89] found id: ""
	I0916 11:09:36.097740  254463 logs.go:276] 0 containers: []
	W0916 11:09:36.097749  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:09:36.097755  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:09:36.097802  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:09:36.128830  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:36.128853  254463 cri.go:89] found id: ""
	I0916 11:09:36.128862  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:09:36.128917  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:36.132337  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:09:36.132402  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:09:36.164034  254463 cri.go:89] found id: ""
	I0916 11:09:36.164056  254463 logs.go:276] 0 containers: []
	W0916 11:09:36.164067  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:09:36.164075  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:09:36.164136  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:09:36.196454  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:36.196479  254463 cri.go:89] found id: ""
	I0916 11:09:36.196486  254463 logs.go:276] 1 containers: [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9]
	I0916 11:09:36.196540  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:36.199847  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:09:36.199897  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:09:36.234554  254463 cri.go:89] found id: ""
	I0916 11:09:36.234577  254463 logs.go:276] 0 containers: []
	W0916 11:09:36.234585  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:09:36.234591  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:09:36.234634  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:09:36.269383  254463 cri.go:89] found id: ""
	I0916 11:09:36.269403  254463 logs.go:276] 0 containers: []
	W0916 11:09:36.269411  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:09:36.269418  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:09:36.269430  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:09:36.328618  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:09:36.328641  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:09:36.328656  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:36.365262  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:09:36.365299  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:36.419000  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:09:36.419036  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:36.452272  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:09:36.452299  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:09:36.503844  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:09:36.503880  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:09:36.549774  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:09:36.549800  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:09:36.621917  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:09:36.621947  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:09:36.349205  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:38.349497  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
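	
	The pod_ready lines above track a single signal: whether each pod's PodReady condition is True. A hedged client-go sketch of that check follows; the kubeconfig path and pod name are assumptions taken from the log, not minikube's actual code.
	
	    package main
	
	    import (
	        "context"
	        "fmt"
	
	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    // podReady reports whether the pod's PodReady condition is True,
	    // the same predicate behind the `has status "Ready"` log lines.
	    func podReady(pod *corev1.Pod) bool {
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                return c.Status == corev1.ConditionTrue
	            }
	        }
	        return false
	    }
	
	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "etcd-old-k8s-version-371039", metav1.GetOptions{})
	        if err != nil {
	            panic(err)
	        }
	        fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
	    }
	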
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	30acbc7b45e29       c69fa2e9cbf5f       8 seconds ago       Running             coredns                   0                   290db8b125607       coredns-7c65d6cfc9-9zbwk
	b30641ccb64e3       12968670680f4       32 seconds ago      Running             kindnet-cni               0                   06502caa119d4       kindnet-qbh58
	6fe6dedc21740       6e38f40d628db       34 seconds ago      Running             storage-provisioner       0                   0e0c238d616bc       storage-provisioner
	49542fa155836       60c005f310ff3       35 seconds ago      Running             kube-proxy                0                   0072787e29726       kube-proxy-n7m28
	a4b95a39232c2       175ffd71cce3d       46 seconds ago      Running             kube-controller-manager   0                   8aeec0e766fdb       kube-controller-manager-no-preload-349453
	5c82d38a57c77       9aa1fad941575       46 seconds ago      Running             kube-scheduler            0                   8200d83c8723c       kube-scheduler-no-preload-349453
	0b8b34459e371       2e96e5913fc06       46 seconds ago      Running             etcd                      0                   151cda393a927       etcd-no-preload-349453
	5d35346ecb3ed       6bab7719df100       46 seconds ago      Running             kube-apiserver            0                   4db1422602ab8       kube-apiserver-no-preload-349453
	
	
	==> containerd <==
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.779047237Z" level=info msg="StartContainer for \"b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a\""
	Sep 16 11:09:09 no-preload-349453 containerd[860]: time="2024-09-16T11:09:09.837438306Z" level=info msg="StartContainer for \"b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a\" returns successfully"
	Sep 16 11:09:11 no-preload-349453 containerd[860]: time="2024-09-16T11:09:11.254668504Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Sep 16 11:09:19 no-preload-349453 containerd[860]: time="2024-09-16T11:09:19.751075294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9zbwk,Uid:427a37dd-9a56-455f-bd9e-3ee604164481,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:09:19 no-preload-349453 containerd[860]: time="2024-09-16T11:09:19.776142115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9zbwk,Uid:427a37dd-9a56-455f-bd9e-3ee604164481,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\": failed to find network info for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\""
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.751187315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9zbwk,Uid:427a37dd-9a56-455f-bd9e-3ee604164481,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.785477296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.785552025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.785563820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.785650565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.833330346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-9zbwk,Uid:427a37dd-9a56-455f-bd9e-3ee604164481,Namespace:kube-system,Attempt:0,} returns sandbox id \"290db8b125607d520b2543935c4df4e547514fdab3734aeba6c5c7a161992240\""
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.836077045Z" level=info msg="CreateContainer within sandbox \"290db8b125607d520b2543935c4df4e547514fdab3734aeba6c5c7a161992240\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.851638309Z" level=info msg="CreateContainer within sandbox \"290db8b125607d520b2543935c4df4e547514fdab3734aeba6c5c7a161992240\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d\""
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.852272763Z" level=info msg="StartContainer for \"30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d\""
	Sep 16 11:09:33 no-preload-349453 containerd[860]: time="2024-09-16T11:09:33.896573615Z" level=info msg="StartContainer for \"30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d\" returns successfully"
	Sep 16 11:09:41 no-preload-349453 containerd[860]: time="2024-09-16T11:09:41.145399503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-6867b74b74-zw8sx,Uid:ac34c3d4-46cd-404d-8aa8-7d28840fa4d0,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:09:41 no-preload-349453 containerd[860]: time="2024-09-16T11:09:41.179819498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:09:41 no-preload-349453 containerd[860]: time="2024-09-16T11:09:41.180550332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:09:41 no-preload-349453 containerd[860]: time="2024-09-16T11:09:41.180570100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:09:41 no-preload-349453 containerd[860]: time="2024-09-16T11:09:41.180686433Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:09:41 no-preload-349453 containerd[860]: time="2024-09-16T11:09:41.235092959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-6867b74b74-zw8sx,Uid:ac34c3d4-46cd-404d-8aa8-7d28840fa4d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"56e9d4df8b7b1aacb4716dec880350dbaf46bab2d3cc987ad3d31d723cf04d1b\""
	Sep 16 11:09:41 no-preload-349453 containerd[860]: time="2024-09-16T11:09:41.236727916Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:09:41 no-preload-349453 containerd[860]: time="2024-09-16T11:09:41.274504071Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" host=fake.domain
	Sep 16 11:09:41 no-preload-349453 containerd[860]: time="2024-09-16T11:09:41.275944968Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 16 11:09:41 no-preload-349453 containerd[860]: time="2024-09-16T11:09:41.275994126Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57592 - 13339 "HINFO IN 8962497822399797364.2477591037072266195. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011748401s
	
	
	==> describe nodes <==
	Name:               no-preload-349453
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-349453
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=no-preload-349453
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_09_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:08:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-349453
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:09:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-349453
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 7ac769ff9aa04aaf92b2dd2bf68f2f82
	  System UUID:                28dd4bdd-2700-4b67-8389-386a38b68a64
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9zbwk                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     36s
	  kube-system                 etcd-no-preload-349453                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-qbh58                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      37s
	  kube-system                 kube-apiserver-no-preload-349453             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-no-preload-349453    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-n7m28                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-scheduler-no-preload-349453             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 metrics-server-6867b74b74-zw8sx              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         2s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 35s   kube-proxy       
	  Normal   Starting                 42s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 42s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  42s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  42s   kubelet          Node no-preload-349453 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    42s   kubelet          Node no-preload-349453 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     42s   kubelet          Node no-preload-349453 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           38s   node-controller  Node no-preload-349453 event: Registered Node no-preload-349453 in Controller
	
	
	==> dmesg <==
	[Sep16 11:00] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000002] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000040] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000004] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +1.028430] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.004229] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000004] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +2.011572] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000009] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +4.031652] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000006] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +0.000018] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[  +8.195254] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-49585fce923a
	[  +0.000007] ll header: 00000000: 02 42 ad 45 94 54 02 42 c0 a8 43 02 08 00
	[Sep16 11:03] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-26a107bdc9bf
	[  +0.000006] ll header: 00000000: 02 42 7f 04 0c 59 02 42 c0 a8 4c 02 08 00
	[  +1.005595] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-26a107bdc9bf
	[  +0.000005] ll header: 00000000: 02 42 7f 04 0c 59 02 42 c0 a8 4c 02 08 00
	[Sep16 11:07] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d] <==
	{"level":"info","ts":"2024-09-16T11:08:56.341999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:08:56.342053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:08:56.342083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2024-09-16T11:08:56.342100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.342109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.342122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.342153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.343021Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.343585Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:08:56.343582Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-349453 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:08:56.343760Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:08:56.343891Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:08:56.343954Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:08:56.344739Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.344836Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.344861Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.344977Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:08:56.345568Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:08:56.346072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:08:56.346688Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2024-09-16T11:08:59.251819Z","caller":"traceutil/trace.go:171","msg":"trace[909223504] linearizableReadLoop","detail":"{readStateIndex:78; appliedIndex:77; }","duration":"124.299534ms","start":"2024-09-16T11:08:59.127499Z","end":"2024-09-16T11:08:59.251798Z","steps":["trace[909223504] 'read index received'  (duration: 61.163504ms)","trace[909223504] 'applied index is now lower than readState.Index'  (duration: 63.13541ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:08:59.251872Z","caller":"traceutil/trace.go:171","msg":"trace[1280881910] transaction","detail":"{read_only:false; response_revision:74; number_of_response:1; }","duration":"128.600617ms","start":"2024-09-16T11:08:59.123247Z","end":"2024-09-16T11:08:59.251847Z","steps":["trace[1280881910] 'process raft request'  (duration: 65.397729ms)","trace[1280881910] 'compare'  (duration: 63.021346ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T11:08:59.251948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.433124ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-09-16T11:08:59.252009Z","caller":"traceutil/trace.go:171","msg":"trace[1202054448] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:74; }","duration":"124.508287ms","start":"2024-09-16T11:08:59.127491Z","end":"2024-09-16T11:08:59.251999Z","steps":["trace[1202054448] 'agreement among raft nodes before linearized reading'  (duration: 124.386955ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:08:59.439373Z","caller":"traceutil/trace.go:171","msg":"trace[1221868137] transaction","detail":"{read_only:false; response_revision:75; number_of_response:1; }","duration":"183.565022ms","start":"2024-09-16T11:08:59.255790Z","end":"2024-09-16T11:08:59.439355Z","steps":["trace[1221868137] 'process raft request'  (duration: 120.890221ms)","trace[1221868137] 'compare'  (duration: 62.56898ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:09:42 up 52 min,  0 users,  load average: 3.87, 3.57, 2.18
	Linux no-preload-349453 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a] <==
	I0916 11:09:10.022282       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:09:10.022538       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0916 11:09:10.022724       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:09:10.022743       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:09:10.022773       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:09:10.420723       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:09:10.421181       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:09:10.421189       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:09:10.721709       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:09:10.721737       1 metrics.go:61] Registering metrics
	I0916 11:09:10.721785       1 controller.go:374] Syncing nftables rules
	I0916 11:09:20.425801       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:09:20.425835       1 main.go:299] handling current node
	I0916 11:09:30.427819       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:09:30.427851       1 main.go:299] handling current node
	I0916 11:09:40.423828       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:09:40.423873       1 main.go:299] handling current node
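	
	The kindnet entries arrive on a fixed 10-second cadence, which suggests a ticker-driven reconcile loop over the known node IPs. A minimal Go sketch of that shape is below; the interval and the node-IP map are assumptions inferred from the timestamps, not kindnet's actual code.
	
	    package main
	
	    import (
	        "log"
	        "time"
	    )
	
	    func main() {
	        // Assumption: a single-node cluster, as in the log above.
	        nodeIPs := map[string]struct{}{"192.168.94.2": {}}
	        ticker := time.NewTicker(10 * time.Second)
	        defer ticker.Stop()
	        for range ticker.C {
	            // Each tick re-derives routes/NAT state for every known node,
	            // producing the periodic "Handling node with IPs" lines.
	            log.Printf("Handling node with IPs: %v", nodeIPs)
	        }
	    }
	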
	
	
	==> kube-apiserver [5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817] <==
	E0916 11:09:40.814483       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 11:09:40.815799       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0916 11:09:40.880166       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.104.72.125"}
	W0916 11:09:40.924744       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:09:40.924829       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:09:40.929525       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:09:40.929582       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:09:41.809066       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 11:09:41.809101       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:09:41.809117       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 11:09:41.809188       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 11:09:41.810270       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:09:41.810282       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969] <==
	I0916 11:09:05.501553       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:09:05.582619       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:09:05.582651       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:09:05.590465       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-349453"
	I0916 11:09:06.045501       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="255.463843ms"
	I0916 11:09:06.052468       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.901285ms"
	I0916 11:09:06.052558       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="51.667µs"
	I0916 11:09:06.053697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.481µs"
	I0916 11:09:06.131407       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="100.516µs"
	I0916 11:09:06.647300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="17.526542ms"
	I0916 11:09:06.654990       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.635755ms"
	I0916 11:09:06.655120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="81.851µs"
	I0916 11:09:07.881805       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="70.434µs"
	I0916 11:09:07.887535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.293µs"
	I0916 11:09:07.891032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.532µs"
	I0916 11:09:11.264980       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-349453"
	I0916 11:09:31.598112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-349453"
	I0916 11:09:34.905630       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="79.673µs"
	I0916 11:09:34.923877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.97092ms"
	I0916 11:09:34.923984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.271µs"
	I0916 11:09:40.840605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="13.714865ms"
	I0916 11:09:40.857675       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="17.012662ms"
	I0916 11:09:40.857775       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="54.761µs"
	I0916 11:09:40.857822       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="23.308µs"
	I0916 11:09:41.938017       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="114.313µs"
	
	
	==> kube-proxy [49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04] <==
	I0916 11:09:06.867943       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:09:06.995156       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.94.2"]
	E0916 11:09:06.995228       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:09:07.016693       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:09:07.016755       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:09:07.018577       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:09:07.018989       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:09:07.019027       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:09:07.020423       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:09:07.020505       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:09:07.020533       1 config.go:328] "Starting node config controller"
	I0916 11:09:07.020679       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:09:07.020603       1 config.go:199] "Starting service config controller"
	I0916 11:09:07.020757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:09:07.121453       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:09:07.121498       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:09:07.121503       1 shared_informer.go:320] Caches are synced for node config
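	
	The kube-proxy startup above follows client-go's usual informer lifecycle: start the config controllers, wait for their caches to sync, then begin proxying. A sketch of that start/wait pattern with a shared informer factory follows; the watched resource (Services) and the resync period are assumptions, not kube-proxy's exact wiring.
	
	    package main
	
	    import (
	        "fmt"
	        "time"
	
	        "k8s.io/client-go/informers"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/cache"
	        "k8s.io/client-go/tools/clientcmd"
	    )
	
	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	        svcInformer := factory.Core().V1().Services().Informer()
	
	        stop := make(chan struct{})
	        defer close(stop)
	        factory.Start(stop) // "Starting service config controller"
	        // "Waiting for caches to sync" ... "Caches are synced"
	        if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
	            panic("caches did not sync")
	        }
	        fmt.Println("caches are synced for service config")
	    }
	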
	
	
	==> kube-scheduler [5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69] <==
	W0916 11:08:59.221741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:08:59.221790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.261959       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:59.262001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.265606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:08:59.265658       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.490611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:08:59.490652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.579438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:08:59.579489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.585912       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:08:59.585982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.629574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:59.629617       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.663059       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:08:59.663100       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 11:08:59.685631       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:08:59.685685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.695015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:59.695064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.697126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:08:59.697157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.699134       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:08:59.699171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 11:09:02.728201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.925656    2271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42523754-f961-412c-9c6a-2ad437fadc08-config-volume\") pod \"42523754-f961-412c-9c6a-2ad437fadc08\" (UID: \"42523754-f961-412c-9c6a-2ad437fadc08\") "
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.925713    2271 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggz6t\" (UniqueName: \"kubernetes.io/projected/42523754-f961-412c-9c6a-2ad437fadc08-kube-api-access-ggz6t\") pod \"42523754-f961-412c-9c6a-2ad437fadc08\" (UID: \"42523754-f961-412c-9c6a-2ad437fadc08\") "
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.925791    2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96zdr\" (UniqueName: \"kubernetes.io/projected/2f218f7f-9232-4d85-bd8d-6cdc6516c83f-kube-api-access-96zdr\") pod \"storage-provisioner\" (UID: \"2f218f7f-9232-4d85-bd8d-6cdc6516c83f\") " pod="kube-system/storage-provisioner"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.925872    2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2f218f7f-9232-4d85-bd8d-6cdc6516c83f-tmp\") pod \"storage-provisioner\" (UID: \"2f218f7f-9232-4d85-bd8d-6cdc6516c83f\") " pod="kube-system/storage-provisioner"
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.926063    2271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42523754-f961-412c-9c6a-2ad437fadc08-config-volume" (OuterVolumeSpecName: "config-volume") pod "42523754-f961-412c-9c6a-2ad437fadc08" (UID: "42523754-f961-412c-9c6a-2ad437fadc08"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 11:09:06 no-preload-349453 kubelet[2271]: I0916 11:09:06.928599    2271 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42523754-f961-412c-9c6a-2ad437fadc08-kube-api-access-ggz6t" (OuterVolumeSpecName: "kube-api-access-ggz6t") pod "42523754-f961-412c-9c6a-2ad437fadc08" (UID: "42523754-f961-412c-9c6a-2ad437fadc08"). InnerVolumeSpecName "kube-api-access-ggz6t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 11:09:07 no-preload-349453 kubelet[2271]: I0916 11:09:07.026104    2271 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42523754-f961-412c-9c6a-2ad437fadc08-config-volume\") on node \"no-preload-349453\" DevicePath \"\""
	Sep 16 11:09:07 no-preload-349453 kubelet[2271]: I0916 11:09:07.026141    2271 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ggz6t\" (UniqueName: \"kubernetes.io/projected/42523754-f961-412c-9c6a-2ad437fadc08-kube-api-access-ggz6t\") on node \"no-preload-349453\" DevicePath \"\""
	Sep 16 11:09:07 no-preload-349453 kubelet[2271]: I0916 11:09:07.848853    2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.848828737 podStartE2EDuration="1.848828737s" podCreationTimestamp="2024-09-16 11:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:09:07.848577382 +0000 UTC m=+7.185828216" watchObservedRunningTime="2024-09-16 11:09:07.848828737 +0000 UTC m=+7.186079571"
	Sep 16 11:09:08 no-preload-349453 kubelet[2271]: I0916 11:09:08.753518    2271 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42523754-f961-412c-9c6a-2ad437fadc08" path="/var/lib/kubelet/pods/42523754-f961-412c-9c6a-2ad437fadc08/volumes"
	Sep 16 11:09:09 no-preload-349453 kubelet[2271]: I0916 11:09:09.856622    2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qbh58" podStartSLOduration=1.952501992 podStartE2EDuration="4.856602316s" podCreationTimestamp="2024-09-16 11:09:05 +0000 UTC" firstStartedPulling="2024-09-16 11:09:06.857718399 +0000 UTC m=+6.194969216" lastFinishedPulling="2024-09-16 11:09:09.761818723 +0000 UTC m=+9.099069540" observedRunningTime="2024-09-16 11:09:09.856516169 +0000 UTC m=+9.193767018" watchObservedRunningTime="2024-09-16 11:09:09.856602316 +0000 UTC m=+9.193853150"
	Sep 16 11:09:11 no-preload-349453 kubelet[2271]: I0916 11:09:11.254039    2271 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 11:09:11 no-preload-349453 kubelet[2271]: I0916 11:09:11.255017    2271 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 11:09:19 no-preload-349453 kubelet[2271]: E0916 11:09:19.776626    2271 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\": failed to find network info for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\""
	Sep 16 11:09:19 no-preload-349453 kubelet[2271]: E0916 11:09:19.776742    2271 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\": failed to find network info for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\"" pod="kube-system/coredns-7c65d6cfc9-9zbwk"
	Sep 16 11:09:19 no-preload-349453 kubelet[2271]: E0916 11:09:19.776774    2271 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\": failed to find network info for sandbox \"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\"" pod="kube-system/coredns-7c65d6cfc9-9zbwk"
	Sep 16 11:09:19 no-preload-349453 kubelet[2271]: E0916 11:09:19.776838    2271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-9zbwk_kube-system(427a37dd-9a56-455f-bd9e-3ee604164481)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-9zbwk_kube-system(427a37dd-9a56-455f-bd9e-3ee604164481)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\\\": failed to find network info for sandbox \\\"5cf5f98e50f3f78d6413dc73a990393f6b55cca1a6aeabc7984f5b2b95148f50\\\"\"" pod="kube-system/coredns-7c65d6cfc9-9zbwk" podUID="427a37dd-9a56-455f-bd9e-3ee604164481"
	Sep 16 11:09:34 no-preload-349453 kubelet[2271]: I0916 11:09:34.905623    2271 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-9zbwk" podStartSLOduration=28.905602056 podStartE2EDuration="28.905602056s" podCreationTimestamp="2024-09-16 11:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:09:34.905581503 +0000 UTC m=+34.242832342" watchObservedRunningTime="2024-09-16 11:09:34.905602056 +0000 UTC m=+34.242852892"
	Sep 16 11:09:41 no-preload-349453 kubelet[2271]: I0916 11:09:41.027951    2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwm25\" (UniqueName: \"kubernetes.io/projected/ac34c3d4-46cd-404d-8aa8-7d28840fa4d0-kube-api-access-kwm25\") pod \"metrics-server-6867b74b74-zw8sx\" (UID: \"ac34c3d4-46cd-404d-8aa8-7d28840fa4d0\") " pod="kube-system/metrics-server-6867b74b74-zw8sx"
	Sep 16 11:09:41 no-preload-349453 kubelet[2271]: I0916 11:09:41.028009    2271 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ac34c3d4-46cd-404d-8aa8-7d28840fa4d0-tmp-dir\") pod \"metrics-server-6867b74b74-zw8sx\" (UID: \"ac34c3d4-46cd-404d-8aa8-7d28840fa4d0\") " pod="kube-system/metrics-server-6867b74b74-zw8sx"
	Sep 16 11:09:41 no-preload-349453 kubelet[2271]: E0916 11:09:41.276264    2271 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 16 11:09:41 no-preload-349453 kubelet[2271]: E0916 11:09:41.276332    2271 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 16 11:09:41 no-preload-349453 kubelet[2271]: E0916 11:09:41.276531    2271 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwm25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-zw8sx_kube-system(ac34c3d4-46cd-404d-8aa8-7d28840fa4d0): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" logger="UnhandledError"
	Sep 16 11:09:41 no-preload-349453 kubelet[2271]: E0916 11:09:41.277784    2271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-6867b74b74-zw8sx" podUID="ac34c3d4-46cd-404d-8aa8-7d28840fa4d0"
	Sep 16 11:09:41 no-preload-349453 kubelet[2271]: E0916 11:09:41.926460    2271 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zw8sx" podUID="ac34c3d4-46cd-404d-8aa8-7d28840fa4d0"
	
	
	==> storage-provisioner [6fe6dedc217401e73f5795b6cd5cfdd5d65a4314df29b9b5be8775dc661cbffa] <==
	I0916 11:09:07.370432       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:09:07.378006       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:09:07.378048       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:09:07.384602       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:09:07.384718       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6ddd7c41-8f63-47a8-9650-2ec5bbdf92e6", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-349453_a47d0e4b-78c2-4200-80ff-c828e8be01d0 became leader
	I0916 11:09:07.384766       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-349453_a47d0e4b-78c2-4200-80ff-c828e8be01d0!
	I0916 11:09:07.485942       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-349453_a47d0e4b-78c2-4200-80ff-c828e8be01d0!
	

                                                
                                                
-- /stdout --
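Reading the dump above: the kube-scheduler "forbidden" list/watch errors at 11:08:59 are the usual start-up race, where the scheduler's informers begin listing before the freshly started apiserver has its RBAC bootstrap data in place; they stop once "Caches are synced" is logged at 11:09:02. The coredns sandbox failure at 11:09:19 clears the same way once the kindnet CNI is running (coredns reports a 28.9s startup at 11:09:34). The metrics-server ErrImagePull/ImagePullBackOff is expected in this test: the Audit table later in this report shows the addon was enabled with --registries=MetricsServer=fake.domain, an unresolvable registry. To confirm the scheduler's permissions after sync, one option is impersonation; a sketch, not part of this run, assuming a working kubectl and impersonation rights:

	# Each of these should print "yes" once RBAC has settled.
	kubectl auth can-i list nodes --as=system:kube-scheduler
	kubectl auth can-i watch csidrivers.storage.k8s.io --as=system:kube-scheduler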
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-349453 -n no-preload-349453
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-349453 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context no-preload-349453 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (449.492µs)
helpers_test.go:263: kubectl --context no-preload-349453 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.68s)
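The failing assertion here is not a cluster problem: kubectl exits in under a millisecond with "fork/exec /usr/local/bin/kubectl: exec format error", which means the kernel refused to execute the binary itself (typically a wrong-architecture, truncated, or corrupt file), so no request ever reached the API server. A minimal triage sketch, assuming shell access to the ubuntu-20-agent-8 host; the path is taken from the error above:

	# The harness binary out/minikube-linux-amd64 runs fine, so the host is
	# linux/amd64; a healthy kubectl should be an ELF 64-bit x86-64 executable.
	uname -m
	file /usr/local/bin/kubectl
	# A zero-length file or an HTML error page here points at a broken download.
	ls -l /usr/local/bin/kubectl
	head -c 16 /usr/local/bin/kubectl | od -c

The same exec format error recurs in the old-k8s-version sections that follow, so one bad binary plausibly accounts for several of the kubectl-driven failures listed at the top of this report.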

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (3.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-371039 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-371039 create -f testdata/busybox.yaml: fork/exec /usr/local/bin/kubectl: exec format error (677.052µs)
start_stop_delete_test.go:196: kubectl --context old-k8s-version-371039 create -f testdata/busybox.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
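As above, kubectl dies before contacting the cluster, so the docker inspect and minikube logs in the post-mortem below mostly show a healthy old-k8s-version-371039 container rather than the fault. If triage confirms a wrong-architecture binary, re-fetching kubectl for linux/amd64 is the likely fix; a sketch using the standard upstream release URL layout (v1.31.1 matches the version used elsewhere in this run and is illustrative only):

	curl -LO "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	sudo install -m 0755 kubectl /usr/local/bin/kubectl
	kubectl version --client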
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-371039
helpers_test.go:235: (dbg) docker inspect old-k8s-version-371039:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23",
	        "Created": "2024-09-16T11:08:26.808717426Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 262175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:08:26.947014727Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/hosts",
	        "LogPath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23-json.log",
	        "Name": "/old-k8s-version-371039",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-371039:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-371039",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-371039",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-371039/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-371039",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-371039",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-371039",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb344cb6ef2301f2020c4e997ddc256592ab1b779218cfb3d91a41736363c80c",
	            "SandboxKey": "/var/run/docker/netns/cb344cb6ef23",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-371039": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "617bc0338b3b0f6ed38b0b21b091e38e1d6c95398d3e053128c978435134833f",
	                    "EndpointID": "e8c6186d44336c3ccbe03bab444f7bdf6847c5d8aac6300c54bfe5f7be82eb5d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-371039",
	                        "9e01fb8ba8f9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-371039 -n old-k8s-version-371039
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-371039 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-371039 logs -n 25: (1.195718929s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-771611 sudo                                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | containerd config dump                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl status crio --all                            |                           |         |         |                     |                     |
	|         | --full --no-pager                                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl cat crio --no-pager                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo find                             | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo crio                             | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | config                                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-771611                                       | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| delete  | -p missing-upgrade-327796                              | missing-upgrade-327796    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| start   | -p cert-expiration-021107                              | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-917705                           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048 --force-systemd                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-846070                               | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-846070                            | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| stop    | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p cert-options-840054                                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-917705                              | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-917705                           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:10 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| ssh     | cert-options-840054 ssh                                | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-840054 -- sudo                         | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-840054                                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:09 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-349453             | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-349453                  | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:09:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:09:48.774615  274695 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:09:48.774727  274695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:09:48.774736  274695 out.go:358] Setting ErrFile to fd 2...
	I0916 11:09:48.774741  274695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:09:48.774931  274695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:09:48.775465  274695 out.go:352] Setting JSON to false
	I0916 11:09:48.776814  274695 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3133,"bootTime":1726481856,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:09:48.776915  274695 start.go:139] virtualization: kvm guest
	I0916 11:09:48.779376  274695 out.go:177] * [no-preload-349453] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:09:48.780693  274695 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:09:48.780693  274695 notify.go:220] Checking for updates...
	I0916 11:09:48.782771  274695 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:09:48.783942  274695 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:09:48.784951  274695 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:09:48.785992  274695 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:09:48.787325  274695 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:09:48.789055  274695 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:09:48.789761  274695 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:09:48.816058  274695 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:09:48.816173  274695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:09:48.872573  274695 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:84 SystemTime:2024-09-16 11:09:48.861917048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:09:48.872710  274695 docker.go:318] overlay module found
	I0916 11:09:48.874792  274695 out.go:177] * Using the docker driver based on existing profile
	I0916 11:09:48.876381  274695 start.go:297] selected driver: docker
	I0916 11:09:48.876396  274695 start.go:901] validating driver "docker" against &{Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:09:48.876482  274695 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:09:48.877396  274695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:09:48.937469  274695 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:84 SystemTime:2024-09-16 11:09:48.927117526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:09:48.937828  274695 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:09:48.937861  274695 cni.go:84] Creating CNI manager for ""
	I0916 11:09:48.937920  274695 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:09:48.937961  274695 start.go:340] cluster config:
	{Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:09:48.939958  274695 out.go:177] * Starting "no-preload-349453" primary control-plane node in "no-preload-349453" cluster
	I0916 11:09:48.941389  274695 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:09:48.942657  274695 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:09:48.943944  274695 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:09:48.944031  274695 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:09:48.944121  274695 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/config.json ...
	I0916 11:09:48.944323  274695 cache.go:107] acquiring lock: {Name:mk505f3dd823c459cfb83f2d2a39affe63c4c40e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944366  274695 cache.go:107] acquiring lock: {Name:mk612053845ede903900e7b583df14a07089be08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944387  274695 cache.go:107] acquiring lock: {Name:mkb7cb231873e7918d3e306be4ec4f6091d91485 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944439  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0916 11:09:48.944446  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0916 11:09:48.944431  274695 cache.go:107] acquiring lock: {Name:mkd9c658f7569779b8a27d53e97cc0f70f55a845 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944455  274695 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 69.965µs
	I0916 11:09:48.944451  274695 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 91.023µs
	I0916 11:09:48.944470  274695 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0916 11:09:48.944470  274695 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0916 11:09:48.944322  274695 cache.go:107] acquiring lock: {Name:mk0f2d9e0670c46fe9eb165a8119acf30531a2e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944483  274695 cache.go:107] acquiring lock: {Name:mk8275b1fd51b04034df297d05c3d74274567a6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944498  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0916 11:09:48.944504  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0916 11:09:48.944507  274695 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 77.168µs
	I0916 11:09:48.944511  274695 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 202.159µs
	I0916 11:09:48.944515  274695 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0916 11:09:48.944519  274695 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0916 11:09:48.944519  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0916 11:09:48.944527  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 11:09:48.944530  274695 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 49.3µs
	I0916 11:09:48.944527  274695 cache.go:107] acquiring lock: {Name:mk0b25b3ebef8c92ed85c693112bf4f2b400d9b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944537  274695 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 221.354µs
	I0916 11:09:48.944545  274695 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0916 11:09:48.944537  274695 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0916 11:09:48.944548  274695 cache.go:107] acquiring lock: {Name:mkd90d764df5e26e345f1c24540d37a0e89a5b18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944560  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0916 11:09:48.944566  274695 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 41.533µs
	I0916 11:09:48.944573  274695 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0916 11:09:48.944604  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0916 11:09:48.944610  274695 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 64.195µs
	I0916 11:09:48.944617  274695 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0916 11:09:48.944624  274695 cache.go:87] Successfully saved all images to host disk.
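Each image above resolves to a tarball under .minikube/cache/images, and a stat hit is what lets the run skip the download. A minimal Go sketch of that check, with the path layout inferred from the paths in the log (cachePathFor is a hypothetical helper, not minikube's API):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachePathFor maps an image ref like "registry.k8s.io/pause:3.10" to the
// on-disk tarball path seen in the log above (layout assumed, ":" -> "_").
func cachePathFor(miniHome, image string) string {
	return filepath.Join(miniHome, "cache", "images", "amd64",
		strings.ReplaceAll(image, ":", "_"))
}

func main() {
	home := "/home/jenkins/minikube-integration/19651-3687/.minikube"
	p := cachePathFor(home, "registry.k8s.io/pause:3.10")
	if _, err := os.Stat(p); err == nil {
		fmt.Println(p, "exists, skipping download")
	} else {
		fmt.Println(p, "missing, would download and save tarball")
	}
}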
	W0916 11:09:48.969191  274695 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:09:48.969211  274695 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:09:48.969289  274695 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:09:48.969306  274695 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:09:48.969311  274695 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:09:48.969319  274695 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:09:48.969326  274695 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:09:49.025446  274695 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:09:49.025486  274695 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:09:49.025515  274695 start.go:360] acquireMachinesLock for no-preload-349453: {Name:mk8558ad422c1a28af392329b5800e6b7ec410a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:49.025584  274695 start.go:364] duration metric: took 51.504µs to acquireMachinesLock for "no-preload-349453"
	I0916 11:09:49.025602  274695 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:09:49.025610  274695 fix.go:54] fixHost starting: 
	I0916 11:09:49.025910  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:49.044053  274695 fix.go:112] recreateIfNeeded on no-preload-349453: state=Stopped err=<nil>
	W0916 11:09:49.044108  274695 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:09:49.045989  274695 out.go:177] * Restarting existing docker container for "no-preload-349453" ...
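The restart path is plain docker CLI: `docker start` followed by polling `docker container inspect` for the state, as the next lines show. A self-contained sketch of those two calls (containerState is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState shells out to the same docker CLI call seen in the log:
// `docker container inspect --format {{.State.Status}} <name>`.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Status}}", name).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	name := "no-preload-349453" // the profile container from the run above
	if err := exec.Command("docker", "start", name).Run(); err != nil {
		fmt.Println("start failed:", err)
		return
	}
	state, err := containerState(name)
	fmt.Println(state, err) // expect "running"
}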
	I0916 11:09:45.849283  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:48.349153  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:48.687452  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:48.687946  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:48.687995  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:09:48.688038  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:09:48.726246  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:48.726273  254463 cri.go:89] found id: ""
	I0916 11:09:48.726285  254463 logs.go:276] 1 containers: [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd]
	I0916 11:09:48.726349  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:48.729998  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:09:48.730067  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:09:48.770403  254463 cri.go:89] found id: ""
	I0916 11:09:48.770433  254463 logs.go:276] 0 containers: []
	W0916 11:09:48.770443  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:09:48.770451  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:09:48.770511  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:09:48.807549  254463 cri.go:89] found id: ""
	I0916 11:09:48.807580  254463 logs.go:276] 0 containers: []
	W0916 11:09:48.807593  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:09:48.807601  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:09:48.807655  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:09:48.854558  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:48.854578  254463 cri.go:89] found id: ""
	I0916 11:09:48.854585  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:09:48.854629  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:48.858424  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:09:48.858482  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:09:48.893983  254463 cri.go:89] found id: ""
	I0916 11:09:48.894013  254463 logs.go:276] 0 containers: []
	W0916 11:09:48.894024  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:09:48.894032  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:09:48.894090  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:09:48.931964  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:09:48.931987  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:48.931991  254463 cri.go:89] found id: ""
	I0916 11:09:48.932000  254463 logs.go:276] 2 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9]
	I0916 11:09:48.932050  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:48.936381  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:48.940101  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:09:48.940183  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:09:48.979539  254463 cri.go:89] found id: ""
	I0916 11:09:48.979566  254463 logs.go:276] 0 containers: []
	W0916 11:09:48.979578  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:09:48.979585  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:09:48.979645  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:09:49.014921  254463 cri.go:89] found id: ""
	I0916 11:09:49.014951  254463 logs.go:276] 0 containers: []
	W0916 11:09:49.014964  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:09:49.014983  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:09:49.014998  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:09:49.056665  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:09:49.056697  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:09:49.110424  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:09:49.110453  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:09:49.178554  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:09:49.178592  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:09:49.244586  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:09:49.244612  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:09:49.244629  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:49.285235  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:09:49.285264  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:49.385095  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:09:49.385133  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:09:49.409418  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:09:49.409454  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:09:49.445392  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:09:49.445422  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
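Log gathering here is a loop over the container IDs found by crictl, each tailed with `crictl logs --tail 400`. A minimal sketch of that loop (tailContainerLogs is a hypothetical helper; it assumes crictl at /usr/bin/crictl and sudo access, as in the run):

package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs mirrors the commands above: for each container ID,
// run `sudo /usr/bin/crictl logs --tail 400 <id>` and print the output.
func tailContainerLogs(ids []string) {
	for _, id := range ids {
		out, err := exec.Command("sudo", "/usr/bin/crictl",
			"logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("logs for %s failed: %v\n", id, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", id, out)
	}
}

func main() {
	// An ID as reported by `crictl ps -a --quiet --name=...` in the run above.
	tailContainerLogs([]string{
		"f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd",
	})
}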
	I0916 11:09:51.983011  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:49.047145  274695 cli_runner.go:164] Run: docker start no-preload-349453
	I0916 11:09:49.345476  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:49.369895  274695 kic.go:430] container "no-preload-349453" state is running.
	I0916 11:09:49.370255  274695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:09:49.390076  274695 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/config.json ...
	I0916 11:09:49.390324  274695 machine.go:93] provisionDockerMachine start ...
	I0916 11:09:49.390405  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:49.409420  274695 main.go:141] libmachine: Using SSH client type: native
	I0916 11:09:49.409726  274695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0916 11:09:49.409751  274695 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:09:49.410474  274695 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33840->127.0.0.1:33068: read: connection reset by peer
	I0916 11:09:52.543274  274695 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-349453
	
	I0916 11:09:52.543304  274695 ubuntu.go:169] provisioning hostname "no-preload-349453"
	I0916 11:09:52.543357  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:52.561425  274695 main.go:141] libmachine: Using SSH client type: native
	I0916 11:09:52.561639  274695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0916 11:09:52.561659  274695 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-349453 && echo "no-preload-349453" | sudo tee /etc/hostname
	I0916 11:09:52.702731  274695 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-349453
	
	I0916 11:09:52.702807  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:52.720926  274695 main.go:141] libmachine: Using SSH client type: native
	I0916 11:09:52.721115  274695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0916 11:09:52.721133  274695 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-349453' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-349453/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-349453' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:09:52.852007  274695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:09:52.852046  274695 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:09:52.852067  274695 ubuntu.go:177] setting up certificates
	I0916 11:09:52.852079  274695 provision.go:84] configureAuth start
	I0916 11:09:52.852141  274695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:09:52.869844  274695 provision.go:143] copyHostCerts
	I0916 11:09:52.869915  274695 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:09:52.869927  274695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:09:52.869991  274695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:09:52.870107  274695 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:09:52.870119  274695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:09:52.870146  274695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:09:52.870211  274695 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:09:52.870219  274695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:09:52.870248  274695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:09:52.870308  274695 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.no-preload-349453 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-349453]
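The server cert is generated with exactly the SANs listed above. A short sketch of producing such a cert with Go's crypto/x509; self-signed here for brevity, whereas the run signs server.pem with ca-key.pem:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// SANs matching the log line above; self-signed for brevity only.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-349453"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-349453"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}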
	I0916 11:09:53.005905  274695 provision.go:177] copyRemoteCerts
	I0916 11:09:53.005958  274695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:09:53.005995  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:53.023517  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:53.120443  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:09:53.142805  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 11:09:53.166225  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:09:53.188868  274695 provision.go:87] duration metric: took 336.770749ms to configureAuth
	I0916 11:09:53.188907  274695 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:09:53.189114  274695 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:09:53.189127  274695 machine.go:96] duration metric: took 3.798788146s to provisionDockerMachine
	I0916 11:09:53.189135  274695 start.go:293] postStartSetup for "no-preload-349453" (driver="docker")
	I0916 11:09:53.189145  274695 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:09:53.189195  274695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:09:53.189233  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:53.206547  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:53.304863  274695 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:09:53.308040  274695 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:09:53.308080  274695 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:09:53.308092  274695 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:09:53.308101  274695 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:09:53.308115  274695 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:09:53.308178  274695 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:09:53.308280  274695 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:09:53.308405  274695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:09:53.316395  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:09:53.338548  274695 start.go:296] duration metric: took 149.394766ms for postStartSetup
	I0916 11:09:53.338650  274695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:09:53.338694  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:53.357422  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:53.452877  274695 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:09:53.457360  274695 fix.go:56] duration metric: took 4.43174375s for fixHost
	I0916 11:09:53.457384  274695 start.go:83] releasing machines lock for "no-preload-349453", held for 4.431788357s
	I0916 11:09:53.457450  274695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:09:53.475348  274695 ssh_runner.go:195] Run: cat /version.json
	I0916 11:09:53.475400  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:53.475417  274695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:09:53.475476  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:53.493461  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:53.494009  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:53.583400  274695 ssh_runner.go:195] Run: systemctl --version
	I0916 11:09:53.664600  274695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:09:53.669030  274695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:09:53.686361  274695 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:09:53.686447  274695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:09:53.694804  274695 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
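Disabling stray bridge/podman CNI configs amounts to renaming them aside with a .mk_disabled suffix, which is what the `find ... -exec mv` above does. A minimal Go equivalent (disableBridgeCNIs is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames any bridge/podman config in /etc/cni/net.d to
// <name>.mk_disabled so it cannot shadow the CNI about to be installed.
func disableBridgeCNIs(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	if err := disableBridgeCNIs("/etc/cni/net.d"); err != nil {
		fmt.Println(err)
	}
}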
	I0916 11:09:53.694831  274695 start.go:495] detecting cgroup driver to use...
	I0916 11:09:53.694862  274695 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:09:53.694907  274695 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:09:53.707615  274695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:09:53.719106  274695 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:09:53.719198  274695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:09:53.731307  274695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:09:53.741993  274695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:09:53.822112  274695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:09:53.892551  274695 docker.go:233] disabling docker service ...
	I0916 11:09:53.892640  274695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:09:53.904867  274695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:09:53.915797  274695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:09:53.997972  274695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:09:54.077247  274695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:09:54.088231  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:09:54.104123  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:09:54.113650  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:09:54.123084  274695 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:09:54.123150  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:09:54.132500  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:09:54.141637  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:09:54.150420  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:09:54.159442  274695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:09:54.169162  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:09:54.178447  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:09:54.187883  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 11:09:54.197946  274695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:09:54.205872  274695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:09:54.213572  274695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:54.289888  274695 ssh_runner.go:195] Run: sudo systemctl restart containerd
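The sed edits above boil down to forcing `SystemdCgroup = false` in /etc/containerd/config.toml so containerd agrees with the detected "cgroupfs" driver. A Go sketch of the same in-place edit (setSystemdCgroup is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup performs the same edit as the sed command above:
// rewrite every `SystemdCgroup = ...` line, preserving its indentation.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
		fmt.Println(err)
	}
}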
	I0916 11:09:54.379344  274695 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:09:54.379416  274695 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:09:54.383200  274695 start.go:563] Will wait 60s for crictl version
	I0916 11:09:54.383251  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:09:54.386338  274695 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:09:54.418191  274695 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:09:54.418249  274695 ssh_runner.go:195] Run: containerd --version
	I0916 11:09:54.441777  274695 ssh_runner.go:195] Run: containerd --version
	I0916 11:09:54.467613  274695 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:09:50.847763  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:52.849026  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:54.849276  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:54.468958  274695 cli_runner.go:164] Run: docker network inspect no-preload-349453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:09:54.485947  274695 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0916 11:09:54.489631  274695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:09:54.500473  274695 kubeadm.go:883] updating cluster {Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:09:54.500611  274695 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:09:54.500665  274695 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:09:54.532760  274695 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:09:54.532781  274695 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:09:54.532790  274695 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.31.1 containerd true true} ...
	I0916 11:09:54.532898  274695 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-349453 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:09:54.532956  274695 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:09:54.565820  274695 cni.go:84] Creating CNI manager for ""
	I0916 11:09:54.565853  274695 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:09:54.565868  274695 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:09:54.565894  274695 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-349453 NodeName:no-preload-349453 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:09:54.566029  274695 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-349453"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:09:54.566101  274695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:09:54.574595  274695 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:09:54.574664  274695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:09:54.583330  274695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0916 11:09:54.600902  274695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:09:54.617863  274695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0916 11:09:54.635791  274695 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:09:54.639161  274695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
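The hosts-file one-liner filters out any existing entry for the name, appends the new mapping, and copies the result back over /etc/hosts. A pure-Go sketch of the same transform (upsertHost is a hypothetical helper; it only prints the result rather than writing the file):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any line already ending in "\t<host>", then appends
// "ip\thost" — the same filter-and-append done by the bash one-liner above.
func upsertHost(hostsData, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(hostsData, "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", ip, host)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Print(upsertHost(string(data), "192.168.94.2", "control-plane.minikube.internal"))
}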
	I0916 11:09:54.649784  274695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:54.733077  274695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:09:54.746471  274695 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453 for IP: 192.168.94.2
	I0916 11:09:54.746493  274695 certs.go:194] generating shared ca certs ...
	I0916 11:09:54.746508  274695 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:54.746655  274695 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:09:54.746704  274695 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:09:54.746714  274695 certs.go:256] generating profile certs ...
	I0916 11:09:54.746801  274695 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.key
	I0916 11:09:54.746889  274695 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key.85f7849d
	I0916 11:09:54.746961  274695 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key
	I0916 11:09:54.747124  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:09:54.747163  274695 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:09:54.747174  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:09:54.747209  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:09:54.747242  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:09:54.747268  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:09:54.747337  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:09:54.748125  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:09:54.773659  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:09:54.798587  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:09:54.838039  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:09:54.866265  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:09:54.922112  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:09:54.949631  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:09:54.974851  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:09:54.998140  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:09:55.021759  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:09:55.047817  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:09:55.072006  274695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:09:55.090041  274695 ssh_runner.go:195] Run: openssl version
	I0916 11:09:55.095459  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:09:55.104870  274695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:09:55.108622  274695 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:09:55.108679  274695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:09:55.115169  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:09:55.124341  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:09:55.134032  274695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:09:55.137540  274695 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:09:55.137603  274695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:09:55.144314  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 11:09:55.153020  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:09:55.162713  274695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:09:55.166242  274695 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:09:55.166294  274695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:09:55.172872  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:09:55.181466  274695 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:09:55.184964  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:09:55.191210  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:09:55.197521  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:09:55.204060  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:09:55.210455  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:09:55.217147  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
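Each of these openssl invocations asks one question: does the certificate expire within 86400 seconds? A Go equivalent using crypto/x509 (expiresWithin is a hypothetical helper):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend 86400`: report whether the
// first certificate in a PEM file expires inside duration d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println(soon, err) // true would force certificate regeneration
}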
	I0916 11:09:55.224151  274695 kubeadm.go:392] StartCluster: {Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:09:55.224234  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:09:55.224285  274695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:09:55.259697  274695 cri.go:89] found id: "30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d"
	I0916 11:09:55.259720  274695 cri.go:89] found id: "b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a"
	I0916 11:09:55.259775  274695 cri.go:89] found id: "6fe6dedc217401e73f5795b6cd5cfdd5d65a4314df29b9b5be8775dc661cbffa"
	I0916 11:09:55.259796  274695 cri.go:89] found id: "49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04"
	I0916 11:09:55.259804  274695 cri.go:89] found id: "a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969"
	I0916 11:09:55.259808  274695 cri.go:89] found id: "5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69"
	I0916 11:09:55.259812  274695 cri.go:89] found id: "0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d"
	I0916 11:09:55.259816  274695 cri.go:89] found id: "5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817"
	I0916 11:09:55.259820  274695 cri.go:89] found id: ""
	I0916 11:09:55.259881  274695 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0916 11:09:55.273392  274695 cri.go:116] JSON = null
	W0916 11:09:55.273443  274695 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
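
The warning above is a consistency cross-check between two container views: crictl, filtered to the kube-system namespace, reported 8 containers, while "runc ... list -f json" printed the literal string "null", i.e. zero runc-visible paused containers, so the unpause step is skipped and startup continues. A short Go sketch of decoding that output; the function name is illustrative.

	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	// countRuncContainers decodes the output of `runc list -f json`.
	// runc prints "null" when no containers exist; json.Unmarshal maps
	// that to a nil slice, so len() is 0 -- the "JSON = null" case above.
	func countRuncContainers(out []byte) (int, error) {
		var items []map[string]any
		if err := json.Unmarshal(out, &items); err != nil {
			return 0, err
		}
		return len(items), nil
	}
	
	func main() {
		n, err := countRuncContainers([]byte("null"))
		fmt.Println(n, err) // 0 <nil>
	}
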
	I0916 11:09:55.273502  274695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:09:55.282466  274695 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 11:09:55.282486  274695 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 11:09:55.282539  274695 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 11:09:55.291007  274695 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:09:55.291787  274695 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-349453" does not appear in /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:09:55.292250  274695 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3687/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-349453" cluster setting kubeconfig missing "no-preload-349453" context setting]
	I0916 11:09:55.292937  274695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
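
The kubeconfig repair above runs under a file lock (500ms retry delay, 1m timeout per the lock line), presumably because parallel test profiles write the same /home/jenkins/minikube-integration/19651-3687/kubeconfig; the "does not appear" check amounts to looking up the cluster and context entries by profile name. A hedged sketch of that lookup using client-go's clientcmd loader; the helper name is illustrative.

	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// hasClusterAndContext reports whether a kubeconfig already defines the
	// named cluster and context -- the check behind the
	// "needs updating (will repair)" line above.
	func hasClusterAndContext(path, name string) (bool, bool, error) {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return false, false, err
		}
		_, cluster := cfg.Clusters[name]
		_, ctx := cfg.Contexts[name]
		return cluster, ctx, nil
	}
	
	func main() {
		c, x, err := hasClusterAndContext("/home/jenkins/minikube-integration/19651-3687/kubeconfig", "no-preload-349453")
		fmt.Println(c, x, err)
	}
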
	I0916 11:09:55.294364  274695 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 11:09:55.303573  274695 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
	I0916 11:09:55.303619  274695 kubeadm.go:597] duration metric: took 21.126232ms to restartPrimaryControlPlane
	I0916 11:09:55.303631  274695 kubeadm.go:394] duration metric: took 79.507692ms to StartCluster
	I0916 11:09:55.303656  274695 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:55.303778  274695 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:09:55.304930  274695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:55.305137  274695 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:09:55.305211  274695 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:09:55.305322  274695 addons.go:69] Setting storage-provisioner=true in profile "no-preload-349453"
	I0916 11:09:55.305336  274695 addons.go:69] Setting default-storageclass=true in profile "no-preload-349453"
	I0916 11:09:55.305342  274695 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:09:55.305350  274695 addons.go:69] Setting dashboard=true in profile "no-preload-349453"
	I0916 11:09:55.305372  274695 addons.go:234] Setting addon dashboard=true in "no-preload-349453"
	W0916 11:09:55.305382  274695 addons.go:243] addon dashboard should already be in state true
	I0916 11:09:55.305353  274695 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-349453"
	I0916 11:09:55.305401  274695 addons.go:69] Setting metrics-server=true in profile "no-preload-349453"
	I0916 11:09:55.305426  274695 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:55.305428  274695 addons.go:234] Setting addon metrics-server=true in "no-preload-349453"
	W0916 11:09:55.305438  274695 addons.go:243] addon metrics-server should already be in state true
	I0916 11:09:55.305485  274695 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:55.305354  274695 addons.go:234] Setting addon storage-provisioner=true in "no-preload-349453"
	W0916 11:09:55.305501  274695 addons.go:243] addon storage-provisioner should already be in state true
	I0916 11:09:55.305532  274695 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:55.305781  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:55.305926  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:55.305931  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:55.306010  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:55.307090  274695 out.go:177] * Verifying Kubernetes components...
	I0916 11:09:55.308706  274695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:55.330513  274695 addons.go:234] Setting addon default-storageclass=true in "no-preload-349453"
	W0916 11:09:55.330534  274695 addons.go:243] addon default-storageclass should already be in state true
	I0916 11:09:55.330561  274695 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:55.330918  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:55.331334  274695 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0916 11:09:55.331338  274695 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0916 11:09:55.333189  274695 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:09:55.333205  274695 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 11:09:55.333269  274695 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 11:09:55.333352  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:55.334937  274695 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0916 11:09:56.983399  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:09:56.983465  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:09:56.983527  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:09:57.016275  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:09:57.016298  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:57.016303  254463 cri.go:89] found id: ""
	I0916 11:09:57.016312  254463 logs.go:276] 2 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd]
	I0916 11:09:57.016363  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:57.019731  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:57.022928  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:09:57.022987  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:09:57.066015  254463 cri.go:89] found id: ""
	I0916 11:09:57.066043  254463 logs.go:276] 0 containers: []
	W0916 11:09:57.066055  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:09:57.066062  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:09:57.066116  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:09:57.100119  254463 cri.go:89] found id: ""
	I0916 11:09:57.100143  254463 logs.go:276] 0 containers: []
	W0916 11:09:57.100154  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:09:57.100161  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:09:57.100218  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:09:57.142278  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:57.142305  254463 cri.go:89] found id: ""
	I0916 11:09:57.142314  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:09:57.142369  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:57.146012  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:09:57.146093  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:09:57.180703  254463 cri.go:89] found id: ""
	I0916 11:09:57.180730  254463 logs.go:276] 0 containers: []
	W0916 11:09:57.180741  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:09:57.180749  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:09:57.180804  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:09:57.213555  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:09:57.213576  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:57.213579  254463 cri.go:89] found id: ""
	I0916 11:09:57.213586  254463 logs.go:276] 2 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9]
	I0916 11:09:57.213630  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:57.216893  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:57.220067  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:09:57.220128  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:09:57.261058  254463 cri.go:89] found id: ""
	I0916 11:09:57.261086  254463 logs.go:276] 0 containers: []
	W0916 11:09:57.261098  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:09:57.261105  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:09:57.261163  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:09:57.296886  254463 cri.go:89] found id: ""
	I0916 11:09:57.296913  254463 logs.go:276] 0 containers: []
	W0916 11:09:57.296921  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:09:57.296936  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:09:57.296951  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:09:57.333205  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:09:57.333242  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:57.372259  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:09:57.372300  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:09:57.413680  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:09:57.413713  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:09:57.486222  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:09:57.486264  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
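
Each "Gathering logs for ..." pass in this stream follows the same recipe visible above: per-container logs via crictl with a 400-line tail, unit logs via journalctl -n 400, and a node description through the pinned kubectl binary. A compact sketch of that fan-out; the container ID is copied from the log and the command set is abbreviated.

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// gather mirrors the log-collection commands above: a 400-line tail of
	// one container's logs plus the kubelet and containerd journals.
	func gather(containerID string) {
		cmds := [][]string{
			{"sudo", "/usr/bin/crictl", "logs", "--tail", "400", containerID},
			{"sudo", "journalctl", "-u", "kubelet", "-n", "400"},
			{"sudo", "journalctl", "-u", "containerd", "-n", "400"},
		}
		for _, c := range cmds {
			out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
			fmt.Printf("%v: %d bytes (err=%v)\n", c, len(out), err)
		}
	}
	
	func main() {
		gather("cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff")
	}
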
	I0916 11:09:55.335030  274695 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:55.335047  274695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:09:55.335088  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:55.336314  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0916 11:09:55.336347  274695 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0916 11:09:55.336405  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:55.357420  274695 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:55.357447  274695 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:09:55.357506  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:55.366387  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:55.367347  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:55.369352  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:55.388562  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:55.520679  274695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:09:55.542923  274695 node_ready.go:35] waiting up to 6m0s for node "no-preload-349453" to be "Ready" ...
	I0916 11:09:55.621336  274695 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 11:09:55.621435  274695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0916 11:09:55.630720  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0916 11:09:55.630753  274695 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0916 11:09:55.631139  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:55.647847  274695 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 11:09:55.647928  274695 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 11:09:55.728814  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:55.734435  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0916 11:09:55.734467  274695 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0916 11:09:55.830027  274695 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:09:55.830070  274695 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 11:09:55.837018  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0916 11:09:55.837046  274695 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0916 11:09:55.852569  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:09:55.933470  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0916 11:09:55.933499  274695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0916 11:09:56.036131  274695 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.036176  274695 retry.go:31] will retry after 327.547508ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.040318  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0916 11:09:56.040402  274695 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0916 11:09:56.044576  274695 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.044610  274695 retry.go:31] will retry after 125.943539ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.133467  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0916 11:09:56.133501  274695 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0916 11:09:56.171627  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:56.229693  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0916 11:09:56.229778  274695 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0916 11:09:56.324009  274695 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.324059  274695 retry.go:31] will retry after 179.364541ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.329914  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0916 11:09:56.329944  274695 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0916 11:09:56.364514  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:56.424109  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:09:56.424146  274695 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0916 11:09:56.503542  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:09:56.523382  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:09:56.849591  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:59.349876  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:59.129375  274695 node_ready.go:49] node "no-preload-349453" has status "Ready":"True"
	I0916 11:09:59.129482  274695 node_ready.go:38] duration metric: took 3.586509916s for node "no-preload-349453" to be "Ready" ...
	I0916 11:09:59.129511  274695 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:09:59.146545  274695 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.236575  274695 pod_ready.go:93] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.236620  274695 pod_ready.go:82] duration metric: took 90.034166ms for pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.236641  274695 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.244737  274695 pod_ready.go:93] pod "etcd-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.244763  274695 pod_ready.go:82] duration metric: took 8.113529ms for pod "etcd-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.244779  274695 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.326680  274695 pod_ready.go:93] pod "kube-apiserver-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.326711  274695 pod_ready.go:82] duration metric: took 81.923811ms for pod "kube-apiserver-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.326724  274695 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.331650  274695 pod_ready.go:93] pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.331673  274695 pod_ready.go:82] duration metric: took 4.941014ms for pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.331686  274695 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n7m28" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.337818  274695 pod_ready.go:93] pod "kube-proxy-n7m28" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.337846  274695 pod_ready.go:82] duration metric: took 6.152494ms for pod "kube-proxy-n7m28" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.337858  274695 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.423673  274695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (3.251989478s)
	I0916 11:09:59.732619  274695 pod_ready.go:93] pod "kube-scheduler-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.732647  274695 pod_ready.go:82] duration metric: took 394.781316ms for pod "kube-scheduler-no-preload-349453" in "kube-system" namespace to be "Ready" ...
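
The pod_ready lines poll each system-critical pod until its PodReady condition turns True, within the 6m0s budget per pod. A hedged client-go sketch of a single such readiness check; the kubeconfig path and pod name are taken from this log, and clientset wiring is the standard clientcmd flow rather than minikube's own.

	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isPodReady reports whether the named pod has the PodReady condition
	// set to True -- the check implied by the pod_ready lines above.
	func isPodReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19651-3687/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ok, err := isPodReady(context.Background(), cs, "kube-system", "etcd-no-preload-349453")
		fmt.Println(ok, err)
	}
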
	I0916 11:09:59.732659  274695 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:01.340867  274695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.976318642s)
	I0916 11:10:01.340987  274695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.837413291s)
	I0916 11:10:01.341014  274695 addons.go:475] Verifying addon metrics-server=true in "no-preload-349453"
	I0916 11:10:01.537050  274695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.013618079s)
	I0916 11:10:01.538736  274695 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-349453 addons enable metrics-server
	
	I0916 11:10:01.540676  274695 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0916 11:10:01.542213  274695 addons.go:510] duration metric: took 6.237009332s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0916 11:10:01.741388  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:01.350460  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:03.848603  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:04.348851  260870 pod_ready.go:93] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:10:04.348878  260870 pod_ready.go:82] duration metric: took 34.506013242s for pod "etcd-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:04.348893  260870 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:04.353032  260870 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:10:04.353051  260870 pod_ready.go:82] duration metric: took 4.150771ms for pod "kube-apiserver-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:04.353060  260870 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:07.550714  254463 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.06442499s)
	W0916 11:10:07.550762  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0916 11:10:07.550771  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:10:07.550784  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:10:07.596479  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:07.596522  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:07.640033  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:07.640079  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:07.665505  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:07.665549  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:04.238268  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:06.239302  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:08.243920  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:06.359545  260870 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:08.859689  260870 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:07.711821  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:07.711862  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:10.283999  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:12.114848  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:42500->192.168.76.2:8443: read: connection reset by peer
	I0916 11:10:12.114981  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:12.115056  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:12.152497  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:12.152533  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:10:12.152540  254463 cri.go:89] found id: ""
	I0916 11:10:12.152548  254463 logs.go:276] 2 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd]
	I0916 11:10:12.152602  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:12.156067  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:12.159264  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:12.159327  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:12.190731  254463 cri.go:89] found id: ""
	I0916 11:10:12.190754  254463 logs.go:276] 0 containers: []
	W0916 11:10:12.190765  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:12.190772  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:12.190827  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:12.222220  254463 cri.go:89] found id: ""
	I0916 11:10:12.222242  254463 logs.go:276] 0 containers: []
	W0916 11:10:12.222250  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:12.222256  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:12.222298  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:12.255730  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:12.255822  254463 cri.go:89] found id: ""
	I0916 11:10:12.255829  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:12.255876  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:12.259472  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:12.259542  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:12.291555  254463 cri.go:89] found id: ""
	I0916 11:10:12.291579  254463 logs.go:276] 0 containers: []
	W0916 11:10:12.291589  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:12.291596  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:12.291651  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:12.324287  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:12.324321  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:10:12.324328  254463 cri.go:89] found id: ""
	I0916 11:10:12.324337  254463 logs.go:276] 2 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9]
	I0916 11:10:12.324392  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:12.327731  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:12.330880  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:12.330944  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:12.375367  254463 cri.go:89] found id: ""
	I0916 11:10:12.375395  254463 logs.go:276] 0 containers: []
	W0916 11:10:12.375407  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:12.375415  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:12.375478  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:12.415075  254463 cri.go:89] found id: ""
	I0916 11:10:12.415095  254463 logs.go:276] 0 containers: []
	W0916 11:10:12.415103  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:12.415115  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:12.415126  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:12.458886  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:12.458930  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:12.496500  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:12.496530  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:12.567297  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:12.567333  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:12.624232  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:12.624255  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:10:12.624270  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:10:12.660261  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:12.660295  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:10.738756  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:13.238098  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:11.360052  260870 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:13.859124  260870 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:14.365001  260870 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:10:14.365026  260870 pod_ready.go:82] duration metric: took 10.011960541s for pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:14.365036  260870 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w2kp4" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:14.369505  260870 pod_ready.go:93] pod "kube-proxy-w2kp4" in "kube-system" namespace has status "Ready":"True"
	I0916 11:10:14.369528  260870 pod_ready.go:82] duration metric: took 4.48629ms for pod "kube-proxy-w2kp4" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:14.369536  260870 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:12.718187  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:12.718226  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:12.753095  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:10:12.753121  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:10:12.786230  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:12.786255  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:12.828221  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:12.828253  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:15.348814  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:15.349283  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
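
The healthz probes in this stream cycle through the typical failure modes of a restarting apiserver: "context deadline exceeded" (socket open but no answer), "connection reset by peer" (process dropping the connection mid-request), and "connection refused" (nothing listening yet). A one-shot probe sketch follows; skipping TLS verification is an assumption of this sketch only, since it has no access to the cluster CA that the real check presumably trusts.

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// healthz performs one GET against the apiserver health endpoint,
	// mirroring the "Checking apiserver healthz" lines above.
	func healthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch-only assumption: the real probe trusts the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %s", resp.Status)
		}
		return nil
	}
	
	func main() {
		fmt.Println(healthz("https://192.168.76.2:8443/healthz"))
	}
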
	I0916 11:10:15.349344  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:15.349400  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:15.384332  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:15.384353  254463 cri.go:89] found id: ""
	I0916 11:10:15.384362  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:15.384418  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:15.387695  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:15.387808  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:15.420398  254463 cri.go:89] found id: ""
	I0916 11:10:15.420425  254463 logs.go:276] 0 containers: []
	W0916 11:10:15.420438  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:15.420447  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:15.420496  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:15.454005  254463 cri.go:89] found id: ""
	I0916 11:10:15.454035  254463 logs.go:276] 0 containers: []
	W0916 11:10:15.454049  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:15.454057  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:15.454111  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:15.488040  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:15.488065  254463 cri.go:89] found id: ""
	I0916 11:10:15.488072  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:15.488121  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:15.491658  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:15.491730  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:15.526243  254463 cri.go:89] found id: ""
	I0916 11:10:15.526276  254463 logs.go:276] 0 containers: []
	W0916 11:10:15.526289  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:15.526297  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:15.526356  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:15.563058  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:15.563078  254463 cri.go:89] found id: ""
	I0916 11:10:15.563085  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:15.563129  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:15.566707  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:15.566775  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:15.600693  254463 cri.go:89] found id: ""
	I0916 11:10:15.600719  254463 logs.go:276] 0 containers: []
	W0916 11:10:15.600728  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:15.600734  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:15.600786  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:15.634854  254463 cri.go:89] found id: ""
	I0916 11:10:15.634878  254463 logs.go:276] 0 containers: []
	W0916 11:10:15.634886  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:15.634894  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:15.634912  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:15.656900  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:15.656944  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:15.716708  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:15.716734  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:15.716750  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:15.756043  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:15.756072  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:15.815128  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:15.815167  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:15.851703  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:15.851729  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:15.896779  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:15.896822  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:15.933761  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:15.933790  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:15.738612  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:18.238493  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:16.375521  260870 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:18.876191  260870 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:18.508158  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:18.508652  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:18.508704  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:18.508768  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:18.541635  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:18.541657  254463 cri.go:89] found id: ""
	I0916 11:10:18.541666  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:18.541721  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:18.545157  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:18.545220  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:18.577944  254463 cri.go:89] found id: ""
	I0916 11:10:18.577967  254463 logs.go:276] 0 containers: []
	W0916 11:10:18.577978  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:18.577985  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:18.578041  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:18.610307  254463 cri.go:89] found id: ""
	I0916 11:10:18.610334  254463 logs.go:276] 0 containers: []
	W0916 11:10:18.610345  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:18.610353  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:18.610410  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:18.643372  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:18.643398  254463 cri.go:89] found id: ""
	I0916 11:10:18.643409  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:18.643473  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:18.647339  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:18.647416  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:18.683669  254463 cri.go:89] found id: ""
	I0916 11:10:18.683696  254463 logs.go:276] 0 containers: []
	W0916 11:10:18.683708  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:18.683716  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:18.683813  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:18.717547  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:18.717569  254463 cri.go:89] found id: ""
	I0916 11:10:18.717578  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:18.717635  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:18.721314  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:18.721386  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:18.756024  254463 cri.go:89] found id: ""
	I0916 11:10:18.756055  254463 logs.go:276] 0 containers: []
	W0916 11:10:18.756065  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:18.756071  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:18.756120  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:18.789325  254463 cri.go:89] found id: ""
	I0916 11:10:18.789350  254463 logs.go:276] 0 containers: []
	W0916 11:10:18.789359  254463 logs.go:278] No container was found matching "storage-provisioner"
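Each wait cycle begins with a discovery pass like the one just logged: for every control-plane component, ask crictl for matching container IDs, and warn when the list comes back empty. A runnable sketch of that pass, assuming crictl is installed and sudo is available; the loop mirrors the `sudo crictl ps -a --quiet --name=...` invocations in the log, though the surrounding structure is ours.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner",
	}
	for _, name := range components {
		// --quiet prints only container IDs, one per line; -a includes
		// stopped containers, matching the {State:all ...} filter above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("crictl failed for %s: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %v\n", name, ids)
	}
}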
	I0916 11:10:18.789370  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:18.789384  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:18.860240  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:18.860279  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:18.882796  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:18.882826  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:18.941553  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
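The "describe nodes" failure above is a side effect of the same outage: the stderr shows kubectl resolving the server as localhost:8443 from the in-node kubeconfig at /var/lib/minikube/kubeconfig, so while the apiserver is down the command exits with status 1. A sketch of capturing that command's combined output from Go; the command string is the one in the log, while the helper layout is ours rather than minikube's exact code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes "+
			"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// While the apiserver is down, out carries the same
		// "connection to the server localhost:8443 was refused"
		// text captured above, and err reads "exit status 1".
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}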
	I0916 11:10:18.941577  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:18.941593  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:18.979008  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:18.979039  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:19.039131  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:19.039170  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:19.075898  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:19.075929  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:19.119292  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:19.119332  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:21.657984  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:21.658407  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:21.658456  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:21.658511  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:21.692596  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:21.692621  254463 cri.go:89] found id: ""
	I0916 11:10:21.692630  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:21.692685  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:21.696206  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:21.696264  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:21.729888  254463 cri.go:89] found id: ""
	I0916 11:10:21.729910  254463 logs.go:276] 0 containers: []
	W0916 11:10:21.729918  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:21.729937  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:21.729981  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:21.763929  254463 cri.go:89] found id: ""
	I0916 11:10:21.763962  254463 logs.go:276] 0 containers: []
	W0916 11:10:21.763974  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:21.763981  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:21.764047  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:21.799235  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:21.799256  254463 cri.go:89] found id: ""
	I0916 11:10:21.799264  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:21.799318  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:21.802780  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:21.802855  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:21.839854  254463 cri.go:89] found id: ""
	I0916 11:10:21.839880  254463 logs.go:276] 0 containers: []
	W0916 11:10:21.839888  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:21.839894  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:21.839953  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:21.873977  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:21.874003  254463 cri.go:89] found id: ""
	I0916 11:10:21.874013  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:21.874068  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:21.878108  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:21.878178  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:21.911328  254463 cri.go:89] found id: ""
	I0916 11:10:21.911357  254463 logs.go:276] 0 containers: []
	W0916 11:10:21.911366  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:21.911372  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:21.911425  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:21.946393  254463 cri.go:89] found id: ""
	I0916 11:10:21.946423  254463 logs.go:276] 0 containers: []
	W0916 11:10:21.946435  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:21.946446  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:21.946461  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:21.990397  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:21.990439  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:22.027571  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:22.027598  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:22.101651  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:22.101686  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:22.122234  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:22.122269  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:22.180802  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:22.180833  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:22.180848  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:22.216487  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:22.216515  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:22.279504  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:22.279551  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:20.240044  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:22.738619  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:21.375180  260870 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:23.375730  260870 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:24.375698  260870 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:10:24.375722  260870 pod_ready.go:82] duration metric: took 10.006179243s for pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:24.375730  260870 pod_ready.go:39] duration metric: took 1m11.05010529s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
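The pod_ready lines interleaved through this section come from a poll that fetches the pod and inspects its PodReady condition, repeating about every 2.5 seconds (the interval visible in the timestamps). A client-go sketch of that check; the pod name, namespace, and kubeconfig path are taken from the log, while the helper itself is illustrative rather than minikube's exact implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod's PodReady condition is True.
func podReady(c *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	for i := 0; i < 240; i++ { // ~10 minutes at the 2.5s interval below
		ready, err := podReady(client, "kube-system",
			"kube-scheduler-old-k8s-version-371039")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}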
	I0916 11:10:24.375761  260870 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:10:24.375792  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:24.375850  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:24.410054  260870 cri.go:89] found id: "5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:24.410074  260870 cri.go:89] found id: ""
	I0916 11:10:24.410084  260870 logs.go:276] 1 containers: [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd]
	I0916 11:10:24.410144  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.413762  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:24.413822  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:24.446581  260870 cri.go:89] found id: "34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:24.446609  260870 cri.go:89] found id: ""
	I0916 11:10:24.446619  260870 logs.go:276] 1 containers: [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927]
	I0916 11:10:24.446679  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.450048  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:24.450108  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:24.483854  260870 cri.go:89] found id: "47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:24.483876  260870 cri.go:89] found id: ""
	I0916 11:10:24.483883  260870 logs.go:276] 1 containers: [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c]
	I0916 11:10:24.483937  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.487518  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:24.487579  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:24.520237  260870 cri.go:89] found id: "6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:24.520257  260870 cri.go:89] found id: ""
	I0916 11:10:24.520265  260870 logs.go:276] 1 containers: [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58]
	I0916 11:10:24.520325  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.523786  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:24.523857  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:24.556906  260870 cri.go:89] found id: "a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:24.556931  260870 cri.go:89] found id: ""
	I0916 11:10:24.556938  260870 logs.go:276] 1 containers: [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e]
	I0916 11:10:24.556982  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.560497  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:24.560571  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:24.593490  260870 cri.go:89] found id: "8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:24.593510  260870 cri.go:89] found id: ""
	I0916 11:10:24.593517  260870 logs.go:276] 1 containers: [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c]
	I0916 11:10:24.593558  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.597013  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:24.597068  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:24.629128  260870 cri.go:89] found id: "00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:24.629149  260870 cri.go:89] found id: ""
	I0916 11:10:24.629155  260870 logs.go:276] 1 containers: [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285]
	I0916 11:10:24.629201  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.632565  260870 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:24.632588  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:24.653890  260870 logs.go:123] Gathering logs for etcd [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927] ...
	I0916 11:10:24.653925  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:24.689516  260870 logs.go:123] Gathering logs for coredns [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c] ...
	I0916 11:10:24.689544  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:24.723583  260870 logs.go:123] Gathering logs for kube-scheduler [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58] ...
	I0916 11:10:24.723610  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:24.761101  260870 logs.go:123] Gathering logs for container status ...
	I0916 11:10:24.761135  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:24.798289  260870 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:24.798316  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:24.858329  260870 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:24.858366  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:24.924002  260870 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:24.924042  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:10:25.040339  260870 logs.go:123] Gathering logs for kube-apiserver [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd] ...
	I0916 11:10:25.040371  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:25.092353  260870 logs.go:123] Gathering logs for kube-proxy [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e] ...
	I0916 11:10:25.092390  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:25.129881  260870 logs.go:123] Gathering logs for kube-controller-manager [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c] ...
	I0916 11:10:25.129913  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:25.176606  260870 logs.go:123] Gathering logs for kindnet [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285] ...
	I0916 11:10:25.176643  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:24.814913  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:24.815331  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:24.815406  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:24.815468  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:24.851174  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:24.851217  254463 cri.go:89] found id: ""
	I0916 11:10:24.851226  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:24.851290  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.855458  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:24.855530  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:24.894464  254463 cri.go:89] found id: ""
	I0916 11:10:24.894484  254463 logs.go:276] 0 containers: []
	W0916 11:10:24.894491  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:24.894498  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:24.894540  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:24.932639  254463 cri.go:89] found id: ""
	I0916 11:10:24.932678  254463 logs.go:276] 0 containers: []
	W0916 11:10:24.932686  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:24.932691  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:24.932736  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:24.969712  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:24.969795  254463 cri.go:89] found id: ""
	I0916 11:10:24.969807  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:24.969872  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.973484  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:24.973557  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:25.014852  254463 cri.go:89] found id: ""
	I0916 11:10:25.014926  254463 logs.go:276] 0 containers: []
	W0916 11:10:25.014938  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:25.014944  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:25.015001  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:25.051032  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:25.051057  254463 cri.go:89] found id: ""
	I0916 11:10:25.051067  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:25.051128  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:25.054719  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:25.054797  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:25.093048  254463 cri.go:89] found id: ""
	I0916 11:10:25.093074  254463 logs.go:276] 0 containers: []
	W0916 11:10:25.093084  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:25.093092  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:25.093144  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:25.131337  254463 cri.go:89] found id: ""
	I0916 11:10:25.131374  254463 logs.go:276] 0 containers: []
	W0916 11:10:25.131387  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:25.131405  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:25.131426  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:25.195758  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:25.195798  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:25.232113  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:25.232141  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:25.277260  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:25.277300  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:25.314477  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:25.314503  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:25.391725  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:25.391784  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:25.413044  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:25.413079  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:25.474224  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:25.474246  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:25.474258  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:25.238565  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:27.737733  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:27.712923  260870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:10:27.724989  260870 api_server.go:72] duration metric: took 1m15.390531014s to wait for apiserver process to appear ...
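Before waiting on healthz, the harness first confirms the process exists at all. In `pgrep -xnf kube-apiserver.*minikube.*`, -f matches against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest match; exit status 0 means a live kube-apiserver process. A minimal Go equivalent of that wait, with the retry cap and interval being our choices:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for i := 0; i < 60; i++ {
		// Run returns nil exactly when pgrep exits 0, i.e. a process
		// whose full command line matches the pattern exists.
		err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run()
		if err == nil {
			fmt.Println("kube-apiserver process is up")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for the kube-apiserver process")
}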
	I0916 11:10:27.725015  260870 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:10:27.725048  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:27.725090  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:27.758530  260870 cri.go:89] found id: "5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:27.758558  260870 cri.go:89] found id: ""
	I0916 11:10:27.758567  260870 logs.go:276] 1 containers: [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd]
	I0916 11:10:27.758613  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.762091  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:27.762160  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:27.794955  260870 cri.go:89] found id: "34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:27.794975  260870 cri.go:89] found id: ""
	I0916 11:10:27.794982  260870 logs.go:276] 1 containers: [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927]
	I0916 11:10:27.795027  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.798651  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:27.798729  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:27.832743  260870 cri.go:89] found id: "47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:27.832764  260870 cri.go:89] found id: ""
	I0916 11:10:27.832772  260870 logs.go:276] 1 containers: [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c]
	I0916 11:10:27.832815  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.836354  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:27.836425  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:27.869614  260870 cri.go:89] found id: "6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:27.869635  260870 cri.go:89] found id: ""
	I0916 11:10:27.869644  260870 logs.go:276] 1 containers: [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58]
	I0916 11:10:27.869703  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.873305  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:27.873379  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:27.906796  260870 cri.go:89] found id: "a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:27.906818  260870 cri.go:89] found id: ""
	I0916 11:10:27.906827  260870 logs.go:276] 1 containers: [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e]
	I0916 11:10:27.906881  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.910467  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:27.910528  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:27.947119  260870 cri.go:89] found id: "8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:27.947147  260870 cri.go:89] found id: ""
	I0916 11:10:27.947156  260870 logs.go:276] 1 containers: [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c]
	I0916 11:10:27.947216  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.951709  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:27.951800  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:27.984740  260870 cri.go:89] found id: "00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:27.984762  260870 cri.go:89] found id: ""
	I0916 11:10:27.984771  260870 logs.go:276] 1 containers: [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285]
	I0916 11:10:27.984830  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.988397  260870 logs.go:123] Gathering logs for container status ...
	I0916 11:10:27.988425  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:28.025884  260870 logs.go:123] Gathering logs for kube-apiserver [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd] ...
	I0916 11:10:28.025924  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:28.077609  260870 logs.go:123] Gathering logs for coredns [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c] ...
	I0916 11:10:28.077647  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:28.116119  260870 logs.go:123] Gathering logs for kube-scheduler [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58] ...
	I0916 11:10:28.116146  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:28.154443  260870 logs.go:123] Gathering logs for kube-proxy [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e] ...
	I0916 11:10:28.154480  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:28.192048  260870 logs.go:123] Gathering logs for kindnet [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285] ...
	I0916 11:10:28.192076  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:28.230393  260870 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:28.230435  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:28.293330  260870 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:28.293363  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:28.355035  260870 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:28.355073  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
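The dmesg invocation gathered above is worth decoding: per util-linux dmesg, -H renders human-readable timestamps, -P disables the pager, -L=never turns colour off, and --level warn,err,crit,alert,emerg restricts output to warning severity and above, before `tail -n 400` keeps the newest 400 lines. A sketch of running it from Go (exact flag behaviour may vary across dmesg versions):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -H human-readable, -P no pager, -L=never no colour,
	// --level ... warnings and worse only; tail keeps the last 400 lines.
	out, err := exec.Command("/bin/bash", "-c",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").CombinedOutput()
	if err != nil {
		fmt.Printf("dmesg failed: %v\n", err)
	}
	fmt.Printf("%s", out)
}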
	I0916 11:10:28.376404  260870 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:28.376441  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:10:28.485749  260870 logs.go:123] Gathering logs for etcd [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927] ...
	I0916 11:10:28.485786  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:28.526060  260870 logs.go:123] Gathering logs for kube-controller-manager [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c] ...
	I0916 11:10:28.526099  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:28.013215  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:28.013660  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:28.013720  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:28.013775  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:28.052332  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:28.052358  254463 cri.go:89] found id: ""
	I0916 11:10:28.052366  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:28.052414  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:28.056409  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:28.056477  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:28.091702  254463 cri.go:89] found id: ""
	I0916 11:10:28.091731  254463 logs.go:276] 0 containers: []
	W0916 11:10:28.091784  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:28.091792  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:28.091851  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:28.126028  254463 cri.go:89] found id: ""
	I0916 11:10:28.126052  254463 logs.go:276] 0 containers: []
	W0916 11:10:28.126063  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:28.126076  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:28.126133  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:28.163202  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:28.163249  254463 cri.go:89] found id: ""
	I0916 11:10:28.163257  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:28.163299  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:28.166659  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:28.166722  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:28.201886  254463 cri.go:89] found id: ""
	I0916 11:10:28.201910  254463 logs.go:276] 0 containers: []
	W0916 11:10:28.201919  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:28.201926  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:28.201984  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:28.246518  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:28.246616  254463 cri.go:89] found id: ""
	I0916 11:10:28.246637  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:28.246722  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:28.252289  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:28.252395  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:28.286426  254463 cri.go:89] found id: ""
	I0916 11:10:28.286449  254463 logs.go:276] 0 containers: []
	W0916 11:10:28.286457  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:28.286463  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:28.286519  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:28.321297  254463 cri.go:89] found id: ""
	I0916 11:10:28.321321  254463 logs.go:276] 0 containers: []
	W0916 11:10:28.321328  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:28.321336  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:28.321348  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:28.403374  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:28.403422  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:28.426647  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:28.426684  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:28.496928  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:28.496947  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:28.496957  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:28.538666  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:28.538694  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:28.607309  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:28.607350  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:28.641335  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:28.641365  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:28.687488  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:28.687527  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
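Each gathering pass like the one ending above walks a fixed set of log sources: journalctl for the kubelet and containerd units, a filtered dmesg, `kubectl describe nodes`, per-container `crictl logs --tail 400 <id>`, and a container-status listing whose backtick expression falls back to plain `crictl`, then to `docker ps -a`, when `which crictl` finds nothing. A condensed, table-driven sketch of that loop; the map layout is ours, the commands are the ones in the log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"containerd":       "sudo journalctl -u containerd -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"describe nodes":   "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		fmt.Printf("==> %s <==\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		fmt.Printf("%s\n", out)
	}
}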
	I0916 11:10:31.224849  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:31.225350  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:31.225417  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:31.225483  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:31.262633  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:31.262659  254463 cri.go:89] found id: ""
	I0916 11:10:31.262668  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:31.262726  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.266801  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:31.266884  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:31.302134  254463 cri.go:89] found id: ""
	I0916 11:10:31.302165  254463 logs.go:276] 0 containers: []
	W0916 11:10:31.302176  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:31.302183  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:31.302239  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:31.338759  254463 cri.go:89] found id: ""
	I0916 11:10:31.338781  254463 logs.go:276] 0 containers: []
	W0916 11:10:31.338789  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:31.338796  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:31.338874  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:31.375371  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:31.375400  254463 cri.go:89] found id: ""
	I0916 11:10:31.375410  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:31.375462  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.379039  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:31.379109  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:31.414260  254463 cri.go:89] found id: ""
	I0916 11:10:31.414282  254463 logs.go:276] 0 containers: []
	W0916 11:10:31.414290  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:31.414295  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:31.414353  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:31.450723  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:31.450747  254463 cri.go:89] found id: ""
	I0916 11:10:31.450760  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:31.450816  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.454785  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:31.454864  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:31.497353  254463 cri.go:89] found id: ""
	I0916 11:10:31.497385  254463 logs.go:276] 0 containers: []
	W0916 11:10:31.497398  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:31.497409  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:31.497458  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:31.532978  254463 cri.go:89] found id: ""
	I0916 11:10:31.533013  254463 logs.go:276] 0 containers: []
	W0916 11:10:31.533022  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:31.533031  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:31.533042  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:31.613145  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:31.613191  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:31.634722  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:31.634750  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:31.702216  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:31.702243  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:31.702257  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:31.744782  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:31.744814  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:31.811622  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:31.811663  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:31.849645  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:31.849684  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:31.895810  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:31.895846  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:29.738050  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:31.738832  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:31.079119  260870 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:10:31.085468  260870 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:10:31.086440  260870 api_server.go:141] control plane version: v1.20.0
	I0916 11:10:31.086462  260870 api_server.go:131] duration metric: took 3.361442023s to wait for apiserver health ...
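Once /healthz finally returns 200 ("ok"), the harness records the control-plane version before moving on. One way to retrieve that string with client-go is a discovery call against the same kubeconfig, as sketched below; whether minikube uses this exact call is not shown in the log, so treat it as an assumption.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	v, err := client.Discovery().ServerVersion() // hits /version on the apiserver
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // "v1.20.0" in this run
}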
	I0916 11:10:31.086470  260870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:10:31.086489  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:31.086546  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:31.119570  260870 cri.go:89] found id: "5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:31.119594  260870 cri.go:89] found id: ""
	I0916 11:10:31.119604  260870 logs.go:276] 1 containers: [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd]
	I0916 11:10:31.119659  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.123250  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:31.123324  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:31.156789  260870 cri.go:89] found id: "34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:31.156812  260870 cri.go:89] found id: ""
	I0916 11:10:31.156821  260870 logs.go:276] 1 containers: [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927]
	I0916 11:10:31.156877  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.160589  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:31.160666  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:31.193841  260870 cri.go:89] found id: "47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:31.193868  260870 cri.go:89] found id: ""
	I0916 11:10:31.193877  260870 logs.go:276] 1 containers: [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c]
	I0916 11:10:31.193919  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.197415  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:31.197484  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:31.230161  260870 cri.go:89] found id: "6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:31.230184  260870 cri.go:89] found id: ""
	I0916 11:10:31.230193  260870 logs.go:276] 1 containers: [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58]
	I0916 11:10:31.230253  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.233951  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:31.234023  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:31.272769  260870 cri.go:89] found id: "a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:31.272795  260870 cri.go:89] found id: ""
	I0916 11:10:31.272804  260870 logs.go:276] 1 containers: [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e]
	I0916 11:10:31.272867  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.276486  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:31.276554  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:31.312467  260870 cri.go:89] found id: "8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:31.312494  260870 cri.go:89] found id: ""
	I0916 11:10:31.312502  260870 logs.go:276] 1 containers: [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c]
	I0916 11:10:31.312560  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.316419  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:31.316486  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:31.353043  260870 cri.go:89] found id: "00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:31.353069  260870 cri.go:89] found id: ""
	I0916 11:10:31.353078  260870 logs.go:276] 1 containers: [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285]
	I0916 11:10:31.353140  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.356964  260870 logs.go:123] Gathering logs for coredns [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c] ...
	I0916 11:10:31.356998  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:31.393983  260870 logs.go:123] Gathering logs for kube-scheduler [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58] ...
	I0916 11:10:31.394010  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:31.433018  260870 logs.go:123] Gathering logs for kube-proxy [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e] ...
	I0916 11:10:31.433050  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:31.474201  260870 logs.go:123] Gathering logs for kube-controller-manager [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c] ...
	I0916 11:10:31.474228  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:31.526211  260870 logs.go:123] Gathering logs for kindnet [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285] ...
	I0916 11:10:31.526302  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:31.564909  260870 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:31.564938  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:31.624407  260870 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:31.624443  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:10:31.729709  260870 logs.go:123] Gathering logs for etcd [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927] ...
	I0916 11:10:31.729740  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:31.767848  260870 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:31.767879  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:31.825821  260870 logs.go:123] Gathering logs for container status ...
	I0916 11:10:31.825856  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:31.866717  260870 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:31.866752  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:31.888660  260870 logs.go:123] Gathering logs for kube-apiserver [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd] ...
	I0916 11:10:31.888704  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:34.446916  260870 system_pods.go:59] 8 kube-system pods found
	I0916 11:10:34.446949  260870 system_pods.go:61] "coredns-74ff55c5b-78djj" [c118a29b-0828-40a2-9653-f2d3268eb8cd] Running
	I0916 11:10:34.446957  260870 system_pods.go:61] "etcd-old-k8s-version-371039" [2ba7f794-26f1-44cb-a895-77d6e4f40f11] Running
	I0916 11:10:34.446962  260870 system_pods.go:61] "kindnet-txszz" [55ac8e8a-b323-4c4a-a7d5-3c069e89deb8] Running
	I0916 11:10:34.446967  260870 system_pods.go:61] "kube-apiserver-old-k8s-version-371039" [4964def7-7f4b-46ff-b6d0-7122a46ed405] Running
	I0916 11:10:34.446972  260870 system_pods.go:61] "kube-controller-manager-old-k8s-version-371039" [8ab8368c-496d-417a-998c-8996a091c17d] Running
	I0916 11:10:34.446977  260870 system_pods.go:61] "kube-proxy-w2kp4" [fe617d0b-b789-47b3-b18f-0f9602e3873d] Running
	I0916 11:10:34.446982  260870 system_pods.go:61] "kube-scheduler-old-k8s-version-371039" [d00cbb62-128c-4108-a3ce-c3c38c3ec762] Running
	I0916 11:10:34.446987  260870 system_pods.go:61] "storage-provisioner" [fdaf9d37-19ec-4a4e-840e-b44e7158d798] Running
	I0916 11:10:34.446996  260870 system_pods.go:74] duration metric: took 3.360519154s to wait for pod list to return data ...
	I0916 11:10:34.447006  260870 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:10:34.449463  260870 default_sa.go:45] found service account: "default"
	I0916 11:10:34.449496  260870 default_sa.go:55] duration metric: took 2.482731ms for default service account to be created ...
	I0916 11:10:34.449506  260870 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:10:34.454401  260870 system_pods.go:86] 8 kube-system pods found
	I0916 11:10:34.454432  260870 system_pods.go:89] "coredns-74ff55c5b-78djj" [c118a29b-0828-40a2-9653-f2d3268eb8cd] Running
	I0916 11:10:34.454439  260870 system_pods.go:89] "etcd-old-k8s-version-371039" [2ba7f794-26f1-44cb-a895-77d6e4f40f11] Running
	I0916 11:10:34.454445  260870 system_pods.go:89] "kindnet-txszz" [55ac8e8a-b323-4c4a-a7d5-3c069e89deb8] Running
	I0916 11:10:34.454450  260870 system_pods.go:89] "kube-apiserver-old-k8s-version-371039" [4964def7-7f4b-46ff-b6d0-7122a46ed405] Running
	I0916 11:10:34.454456  260870 system_pods.go:89] "kube-controller-manager-old-k8s-version-371039" [8ab8368c-496d-417a-998c-8996a091c17d] Running
	I0916 11:10:34.454462  260870 system_pods.go:89] "kube-proxy-w2kp4" [fe617d0b-b789-47b3-b18f-0f9602e3873d] Running
	I0916 11:10:34.454468  260870 system_pods.go:89] "kube-scheduler-old-k8s-version-371039" [d00cbb62-128c-4108-a3ce-c3c38c3ec762] Running
	I0916 11:10:34.454472  260870 system_pods.go:89] "storage-provisioner" [fdaf9d37-19ec-4a4e-840e-b44e7158d798] Running
	I0916 11:10:34.454481  260870 system_pods.go:126] duration metric: took 4.967785ms to wait for k8s-apps to be running ...
	I0916 11:10:34.454492  260870 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:10:34.454539  260870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:10:34.467176  260870 system_svc.go:56] duration metric: took 12.679137ms WaitForService to wait for kubelet
	I0916 11:10:34.467202  260870 kubeadm.go:582] duration metric: took 1m22.132748603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:10:34.467229  260870 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:10:34.470211  260870 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:10:34.470253  260870 node_conditions.go:123] node cpu capacity is 8
	I0916 11:10:34.470270  260870 node_conditions.go:105] duration metric: took 3.035491ms to run NodePressure ...
	I0916 11:10:34.470283  260870 start.go:241] waiting for startup goroutines ...
	I0916 11:10:34.470302  260870 start.go:246] waiting for cluster config update ...
	I0916 11:10:34.470319  260870 start.go:255] writing updated cluster config ...
	I0916 11:10:34.470680  260870 ssh_runner.go:195] Run: rm -f paused
	I0916 11:10:34.479027  260870 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-371039" cluster and "default" namespace by default
	E0916 11:10:34.480271  260870 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	47a31e8c5ac3a       bfe3a36ebd252       About a minute ago   Running             coredns                   0                   fdced5247c6ff       coredns-74ff55c5b-78djj
	00416422d2a43       12968670680f4       About a minute ago   Running             kindnet-cni               0                   6d12bd53c3747       kindnet-txszz
	bdf8504c18d6d       6e38f40d628db       About a minute ago   Running             storage-provisioner       0                   ee17991909f8c       storage-provisioner
	a442530bd3eed       10cc881966cfd       About a minute ago   Running             kube-proxy                0                   a4449e8cd9394       kube-proxy-w2kp4
	8e878c306812f       b9fa1895dcaa6       About a minute ago   Running             kube-controller-manager   0                   5a0a25910c3e4       kube-controller-manager-old-k8s-version-371039
	34eff18910230       0369cf4303ffd       About a minute ago   Running             etcd                      0                   cdb2422929db2       etcd-old-k8s-version-371039
	6b3b4e782188a       3138b6e3d4712       About a minute ago   Running             kube-scheduler            0                   92988ff644b2d       kube-scheduler-old-k8s-version-371039
	5e66ac9a14fe5       ca9843d3b5454       About a minute ago   Running             kube-apiserver            0                   e0135df35da04       kube-apiserver-old-k8s-version-371039
	
	
	==> containerd <==
	Sep 16 11:09:13 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:13.901056543Z" level=info msg="RunPodSandbox for name:\"storage-provisioner\" uid:\"fdaf9d37-19ec-4a4e-840e-b44e7158d798\" namespace:\"kube-system\" returns sandbox id \"ee17991909f8cf296e2235c0255b87d5be5775f909f9e5f300e9aa92d85bdd2b\""
	Sep 16 11:09:13 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:13.903511210Z" level=info msg="CreateContainer within sandbox \"ee17991909f8cf296e2235c0255b87d5be5775f909f9e5f300e9aa92d85bdd2b\" for container name:\"storage-provisioner\""
	Sep 16 11:09:13 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:13.916582954Z" level=info msg="CreateContainer within sandbox \"ee17991909f8cf296e2235c0255b87d5be5775f909f9e5f300e9aa92d85bdd2b\" for name:\"storage-provisioner\" returns container id \"bdf8504c18d6de710f0dc4d8c6f20cc1f4aa180f8b245b99545016551fa68aa3\""
	Sep 16 11:09:13 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:13.917059700Z" level=info msg="StartContainer for \"bdf8504c18d6de710f0dc4d8c6f20cc1f4aa180f8b245b99545016551fa68aa3\""
	Sep 16 11:09:13 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:13.964149291Z" level=info msg="StartContainer for \"bdf8504c18d6de710f0dc4d8c6f20cc1f4aa180f8b245b99545016551fa68aa3\" returns successfully"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.845118475Z" level=info msg="ImageCreate event name:\"docker.io/kindest/kindnetd:v20240813-c6f155d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.845810292Z" level=info msg="stop pulling image docker.io/kindest/kindnetd:v20240813-c6f155d6: active requests=0, bytes read=36804223"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.848404015Z" level=info msg="ImageCreate event name:\"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.850977569Z" level=info msg="ImageCreate event name:\"docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.851442555Z" level=info msg="Pulled image \"docker.io/kindest/kindnetd:v20240813-c6f155d6\" with image id \"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f\", repo tag \"docker.io/kindest/kindnetd:v20240813-c6f155d6\", repo digest \"docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166\", size \"36793393\" in 2.825942222s"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.851519488Z" level=info msg="PullImage \"docker.io/kindest/kindnetd:v20240813-c6f155d6\" returns image reference \"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f\""
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.853492988Z" level=info msg="CreateContainer within sandbox \"6d12bd53c3747a9cedf8034bc1c60eb2f6de1b1f45b50a747c26d2d773a72512\" for container name:\"kindnet-cni\""
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.865468393Z" level=info msg="CreateContainer within sandbox \"6d12bd53c3747a9cedf8034bc1c60eb2f6de1b1f45b50a747c26d2d773a72512\" for name:\"kindnet-cni\" returns container id \"00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285\""
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.866095221Z" level=info msg="StartContainer for \"00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285\""
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.938701932Z" level=info msg="StartContainer for \"00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285\" returns successfully"
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.897556537Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-78djj\" uid:\"c118a29b-0828-40a2-9653-f2d3268eb8cd\" namespace:\"kube-system\""
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.932620392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.932681735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.932692030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.932785803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.982343321Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-78djj\" uid:\"c118a29b-0828-40a2-9653-f2d3268eb8cd\" namespace:\"kube-system\" returns sandbox id \"fdced5247c6ffaaed8e609287c84bd12fb6a78c58d70bf4755db09f4f4712d8a\""
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.988709862Z" level=info msg="CreateContainer within sandbox \"fdced5247c6ffaaed8e609287c84bd12fb6a78c58d70bf4755db09f4f4712d8a\" for container name:\"coredns\""
	Sep 16 11:09:27 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:27.003135375Z" level=info msg="CreateContainer within sandbox \"fdced5247c6ffaaed8e609287c84bd12fb6a78c58d70bf4755db09f4f4712d8a\" for name:\"coredns\" returns container id \"47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c\""
	Sep 16 11:09:27 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:27.003815790Z" level=info msg="StartContainer for \"47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c\""
	Sep 16 11:09:27 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:27.047962732Z" level=info msg="StartContainer for \"47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c\" returns successfully"
	
	
	==> coredns [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:49206 - 19492 "HINFO IN 2568215532487827892.8058846988098566839. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014231723s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-371039
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-371039
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=old-k8s-version-371039
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_08_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:08:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-371039
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:10:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:09:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-371039
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 9635bab378394b3cbc8d38b8b7ea27c5
	  System UUID:                5a808ec9-2d43-4212-9e81-7580afba2fbc
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-74ff55c5b-78djj                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     83s
	  kube-system                 etcd-old-k8s-version-371039                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         93s
	  kube-system                 kindnet-txszz                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      83s
	  kube-system                 kube-apiserver-old-k8s-version-371039             250m (3%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-old-k8s-version-371039    200m (2%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-w2kp4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-old-k8s-version-371039             100m (1%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 109s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  109s (x5 over 109s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s (x4 over 109s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s (x3 over 109s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  109s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 94s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  94s                  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s                  kubelet     Node old-k8s-version-371039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s                  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  93s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                84s                  kubelet     Node old-k8s-version-371039 status is now: NodeReady
	  Normal  Starting                 82s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +1.003295] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000012] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003959] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +2.011810] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000005] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +4.063628] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000008] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000030] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000007] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003992] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +8.187268] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000063] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000005] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003939] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	
	
	==> etcd [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927] <==
	2024-09-16 11:08:49.167631 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-09-16 11:08:49.167680 I | embed: listening for peers on 192.168.103.2:2380
	2024-09-16 11:08:49.167826 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/09/16 11:08:50 INFO: f23060b075c4c089 is starting a new election at term 1
	raft2024/09/16 11:08:50 INFO: f23060b075c4c089 became candidate at term 2
	raft2024/09/16 11:08:50 INFO: f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2
	raft2024/09/16 11:08:50 INFO: f23060b075c4c089 became leader at term 2
	raft2024/09/16 11:08:50 INFO: raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2
	2024-09-16 11:08:50.056510 I | etcdserver: published {Name:old-k8s-version-371039 ClientURLs:[https://192.168.103.2:2379]} to cluster 3336683c081d149d
	2024-09-16 11:08:50.056532 I | embed: ready to serve client requests
	2024-09-16 11:08:50.057110 I | embed: ready to serve client requests
	2024-09-16 11:08:50.058044 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-16 11:08:50.058490 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-16 11:08:50.068100 I | embed: serving client requests on 192.168.103.2:2379
	2024-09-16 11:08:50.070326 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-16 11:08:50.070887 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-16 11:09:11.117153 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:09:20.292389 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:09:30.292398 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:09:40.292229 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:09:50.292279 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:10:00.292399 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:10:10.292409 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:10:20.292351 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:10:30.292340 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 11:10:35 up 52 min,  0 users,  load average: 2.59, 3.31, 2.17
	Linux old-k8s-version-371039 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285] <==
	I0916 11:09:16.122815       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0916 11:09:16.122984       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:09:16.123005       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:09:16.123030       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:09:16.440841       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:09:16.440859       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:09:16.440866       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:09:16.741421       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:09:16.741449       1 metrics.go:61] Registering metrics
	I0916 11:09:16.741493       1 controller.go:374] Syncing nftables rules
	I0916 11:09:26.443817       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:09:26.443889       1 main.go:299] handling current node
	I0916 11:09:36.443817       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:09:36.443873       1 main.go:299] handling current node
	I0916 11:09:46.444993       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:09:46.445026       1 main.go:299] handling current node
	I0916 11:09:56.448730       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:09:56.448775       1 main.go:299] handling current node
	I0916 11:10:06.442481       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:10:06.442528       1 main.go:299] handling current node
	I0916 11:10:16.440913       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:10:16.440948       1 main.go:299] handling current node
	I0916 11:10:26.440992       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:10:26.441030       1 main.go:299] handling current node
	
	
	==> kube-apiserver [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd] <==
	I0916 11:08:53.520192       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 11:08:53.521141       1 apf_controller.go:253] Running API Priority and Fairness config worker
	I0916 11:08:53.520210       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0916 11:08:54.353949       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0916 11:08:54.353987       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0916 11:08:54.361543       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0916 11:08:54.365717       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:08:54.365739       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0916 11:08:54.756312       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:08:54.792245       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0916 11:08:54.860469       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0916 11:08:54.861548       1 controller.go:606] quota admission added evaluator for: endpoints
	I0916 11:08:54.865453       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:08:55.892699       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0916 11:08:56.495384       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0916 11:08:56.665190       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0916 11:09:01.882424       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:09:12.124421       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0916 11:09:12.248848       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0916 11:09:28.780739       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:09:28.780781       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:09:28.780806       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:10:06.948682       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:10:06.948905       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:10:06.948926       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c] <==
	I0916 11:09:12.046184       1 shared_informer.go:247] Caches are synced for HPA 
	I0916 11:09:12.046335       1 shared_informer.go:247] Caches are synced for endpoint 
	I0916 11:09:12.046684       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0916 11:09:12.046840       1 shared_informer.go:247] Caches are synced for GC 
	I0916 11:09:12.048076       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0916 11:09:12.119917       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0916 11:09:12.130067       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-txszz"
	I0916 11:09:12.131917       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w2kp4"
	I0916 11:09:12.246156       1 shared_informer.go:247] Caches are synced for deployment 
	I0916 11:09:12.246176       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0916 11:09:12.246193       1 shared_informer.go:247] Caches are synced for disruption 
	I0916 11:09:12.246220       1 disruption.go:339] Sending events to api server.
	I0916 11:09:12.248247       1 shared_informer.go:247] Caches are synced for resource quota 
	I0916 11:09:12.250904       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0916 11:09:12.254472       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-lgf42"
	I0916 11:09:12.261635       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-78djj"
	I0916 11:09:12.425820       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0916 11:09:12.726025       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0916 11:09:12.819908       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0916 11:09:12.819938       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0916 11:09:13.096725       1 request.go:655] Throttling request took 1.049972591s, request: GET:https://192.168.103.2:8443/apis/autoscaling/v2beta1?timeout=32s
	I0916 11:09:13.339089       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0916 11:09:13.344204       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-lgf42"
	I0916 11:09:13.897597       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0916 11:09:13.897639       1 shared_informer.go:247] Caches are synced for resource quota 
	
	
	==> kube-proxy [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e] <==
	I0916 11:09:13.322536       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0916 11:09:13.322732       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0916 11:09:13.345840       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 11:09:13.345951       1 server_others.go:185] Using iptables Proxier.
	I0916 11:09:13.346284       1 server.go:650] Version: v1.20.0
	I0916 11:09:13.347687       1 config.go:315] Starting service config controller
	I0916 11:09:13.349932       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 11:09:13.347841       1 config.go:224] Starting endpoint slice config controller
	I0916 11:09:13.420415       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 11:09:13.420676       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0916 11:09:13.450370       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58] <==
	W0916 11:08:53.425390       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:08:53.425498       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:08:53.425546       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:08:53.425566       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:08:53.445666       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:08:53.445756       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:08:53.445770       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0916 11:08:53.445859       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0916 11:08:53.447314       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:08:53.447706       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:53.447999       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:08:53.448116       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:08:53.448269       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:08:53.448478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:53.448860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:08:53.448864       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:08:53.449019       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:08:53.449164       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:08:53.450105       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:08:53.450247       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:08:54.410511       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:08:54.433702       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:08:54.472291       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:08:54.592362       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0916 11:08:56.246004       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: I0916 11:09:12.321030    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-xjzl9" (UniqueName: "kubernetes.io/secret/55ac8e8a-b323-4c4a-a7d5-3c069e89deb8-kindnet-token-xjzl9") pod "kindnet-txszz" (UID: "55ac8e8a-b323-4c4a-a7d5-3c069e89deb8")
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: I0916 11:09:12.321249    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/55ac8e8a-b323-4c4a-a7d5-3c069e89deb8-xtables-lock") pod "kindnet-txszz" (UID: "55ac8e8a-b323-4c4a-a7d5-3c069e89deb8")
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: I0916 11:09:12.321298    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-k6lnv" (UniqueName: "kubernetes.io/secret/fe617d0b-b789-47b3-b18f-0f9602e3873d-kube-proxy-token-k6lnv") pod "kube-proxy-w2kp4" (UID: "fe617d0b-b789-47b3-b18f-0f9602e3873d")
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: I0916 11:09:12.421811    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c118a29b-0828-40a2-9653-f2d3268eb8cd-config-volume") pod "coredns-74ff55c5b-78djj" (UID: "c118a29b-0828-40a2-9653-f2d3268eb8cd")
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: I0916 11:09:12.421866    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-vcrsr" (UniqueName: "kubernetes.io/secret/c118a29b-0828-40a2-9653-f2d3268eb8cd-coredns-token-vcrsr") pod "coredns-74ff55c5b-78djj" (UID: "c118a29b-0828-40a2-9653-f2d3268eb8cd")
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: I0916 11:09:12.421921    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-vcrsr" (UniqueName: "kubernetes.io/secret/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-coredns-token-vcrsr") pod "coredns-74ff55c5b-lgf42" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea")
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: I0916 11:09:12.421971    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-config-volume") pod "coredns-74ff55c5b-lgf42" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea")
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.829232    2075 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79": failed to find network info for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.829320    2075 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-lgf42_kube-system(30c8c5e2-3068-4ddf-bcfa-a514dee78dea)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79": failed to find network info for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.829339    2075 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-lgf42_kube-system(30c8c5e2-3068-4ddf-bcfa-a514dee78dea)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79": failed to find network info for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.829403    2075 pod_workers.go:191] Error syncing pod 30c8c5e2-3068-4ddf-bcfa-a514dee78dea ("coredns-74ff55c5b-lgf42_kube-system(30c8c5e2-3068-4ddf-bcfa-a514dee78dea)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-lgf42_kube-system(30c8c5e2-3068-4ddf-bcfa-a514dee78dea)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-lgf42_kube-system(30c8c5e2-3068-4ddf-bcfa-a514dee78dea)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79\": failed to find network info for sandbox \"3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79\""
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.836368    2075 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5": failed to find network info for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.836458    2075 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-78djj_kube-system(c118a29b-0828-40a2-9653-f2d3268eb8cd)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5": failed to find network info for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.836480    2075 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-78djj_kube-system(c118a29b-0828-40a2-9653-f2d3268eb8cd)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5": failed to find network info for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.836537    2075 pod_workers.go:191] Error syncing pod c118a29b-0828-40a2-9653-f2d3268eb8cd ("coredns-74ff55c5b-78djj_kube-system(c118a29b-0828-40a2-9653-f2d3268eb8cd)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-78djj_kube-system(c118a29b-0828-40a2-9653-f2d3268eb8cd)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-78djj_kube-system(c118a29b-0828-40a2-9653-f2d3268eb8cd)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5\": failed to find network info for sandbox \"6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5\""
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.521958    2075 topology_manager.go:187] [topologymanager] Topology Admit Handler
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.527538    2075 reconciler.go:196] operationExecutor.UnmountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-config-volume") pod "30c8c5e2-3068-4ddf-bcfa-a514dee78dea" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea")
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.527585    2075 reconciler.go:196] operationExecutor.UnmountVolume started for volume "coredns-token-vcrsr" (UniqueName: "kubernetes.io/secret/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-coredns-token-vcrsr") pod "30c8c5e2-3068-4ddf-bcfa-a514dee78dea" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea")
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: W0916 11:09:13.527857    2075 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/30c8c5e2-3068-4ddf-bcfa-a514dee78dea/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.528062    2075 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-config-volume" (OuterVolumeSpecName: "config-volume") pod "30c8c5e2-3068-4ddf-bcfa-a514dee78dea" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.530229    2075 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-coredns-token-vcrsr" (OuterVolumeSpecName: "coredns-token-vcrsr") pod "30c8c5e2-3068-4ddf-bcfa-a514dee78dea" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea"). InnerVolumeSpecName "coredns-token-vcrsr". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.627931    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/fdaf9d37-19ec-4a4e-840e-b44e7158d798-tmp") pod "storage-provisioner" (UID: "fdaf9d37-19ec-4a4e-840e-b44e7158d798")
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.627974    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-4gk79" (UniqueName: "kubernetes.io/secret/fdaf9d37-19ec-4a4e-840e-b44e7158d798-storage-provisioner-token-4gk79") pod "storage-provisioner" (UID: "fdaf9d37-19ec-4a4e-840e-b44e7158d798")
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.627999    2075 reconciler.go:319] Volume detached for volume "config-volume" (UniqueName: "kubernetes.io/configmap/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-config-volume") on node "old-k8s-version-371039" DevicePath ""
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.628011    2075 reconciler.go:319] Volume detached for volume "coredns-token-vcrsr" (UniqueName: "kubernetes.io/secret/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-coredns-token-vcrsr") on node "old-k8s-version-371039" DevicePath ""
	
	
	==> storage-provisioner [bdf8504c18d6de710f0dc4d8c6f20cc1f4aa180f8b245b99545016551fa68aa3] <==
	I0916 11:09:13.972762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:09:13.980679       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:09:13.980724       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:09:13.987659       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:09:13.987719       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3df43ad2-abd4-4d32-b26b-91fa0eea8673", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-371039_ba54aacd-67bb-46f0-b4ba-d01754da79ef became leader
	I0916 11:09:13.987846       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-371039_ba54aacd-67bb-46f0-b4ba-d01754da79ef!
	I0916 11:09:14.088020       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-371039_ba54aacd-67bb-46f0-b4ba-d01754da79ef!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-371039 -n old-k8s-version-371039
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-371039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context old-k8s-version-371039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (531.211µs)
helpers_test.go:263: kubectl --context old-k8s-version-371039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
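Note: the recurring "fork/exec /usr/local/bin/kubectl: exec format error" above is ENOEXEC from the kernel, which indicates the kubectl binary at that path is in the wrong executable format for this host (typically a wrong-architecture or truncated download), not a cluster-side failure. A minimal shell check along these lines would confirm it; the re-download URL follows the standard upstream release layout, and the linux/amd64 path is an assumption to adjust for other hosts:

	# Compare the binary's format against the host architecture.
	file /usr/local/bin/kubectl   # expect e.g. "ELF 64-bit LSB executable, x86-64" on amd64
	uname -m                      # host architecture, e.g. x86_64

	# If they disagree, or file reports a truncated/unknown format, re-fetch the matching build.
	# Assumption: Linux/amd64 host; substitute the correct OS/arch segment otherwise.
	curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
	sudo install -m 0755 kubectl /usr/local/bin/kubectl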
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-371039
helpers_test.go:235: (dbg) docker inspect old-k8s-version-371039:

-- stdout --
	[
	    {
	        "Id": "9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23",
	        "Created": "2024-09-16T11:08:26.808717426Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 262175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:08:26.947014727Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/hosts",
	        "LogPath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23-json.log",
	        "Name": "/old-k8s-version-371039",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-371039:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-371039",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-371039",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-371039/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-371039",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-371039",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-371039",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb344cb6ef2301f2020c4e997ddc256592ab1b779218cfb3d91a41736363c80c",
	            "SandboxKey": "/var/run/docker/netns/cb344cb6ef23",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-371039": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "617bc0338b3b0f6ed38b0b21b091e38e1d6c95398d3e053128c978435134833f",
	                    "EndpointID": "e8c6186d44336c3ccbe03bab444f7bdf6847c5d8aac6300c54bfe5f7be82eb5d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-371039",
	                        "9e01fb8ba8f9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
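Note that the empty "HostPort" values under HostConfig.PortBindings ask Docker to pick ephemeral host ports; the ports actually assigned appear under NetworkSettings.Ports (33058-33062 above). Rather than reading the full dump, a single field can be pulled out with a Go template, the same technique the harness itself uses further down in this log (illustrative invocation):

    # print just the host port mapped to the container's SSH port
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-371039
    # for the state captured above this prints: 33058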
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-371039 -n old-k8s-version-371039
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-371039 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-371039 logs -n 25: (1.121335164s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-771611 sudo                                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | containerd config dump                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl status crio --all                            |                           |         |         |                     |                     |
	|         | --full --no-pager                                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl cat crio --no-pager                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo find                             | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo crio                             | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | config                                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-771611                                       | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| delete  | -p missing-upgrade-327796                              | missing-upgrade-327796    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| start   | -p cert-expiration-021107                              | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-917705                           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048 --force-systemd                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-846070                               | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-846070                            | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| stop    | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p cert-options-840054                                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-917705                              | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-917705                           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:10 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| ssh     | cert-options-840054 ssh                                | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-840054 -- sudo                         | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-840054                                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:09 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-349453             | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-349453                  | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
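Each multi-row entry in the Audit table is one invocation whose arguments wrap across rows. Reassembled from the rows above, the start of the failing profile reads as a single command line:

    out/minikube-linux-amd64 start -p old-k8s-version-371039 --memory=2200 \
      --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system \
      --disable-driver-mounts --keep-context=false --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.20.0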
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:09:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
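Read against that format, the first entry below, "I0916 11:09:48.774615  274695 out.go:345] ...", decodes as severity I (info), date 09-16, time 11:09:48.774615, writing process/thread id 274695, and source location out.go:345.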
	I0916 11:09:48.774615  274695 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:09:48.774727  274695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:09:48.774736  274695 out.go:358] Setting ErrFile to fd 2...
	I0916 11:09:48.774741  274695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:09:48.774931  274695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:09:48.775465  274695 out.go:352] Setting JSON to false
	I0916 11:09:48.776814  274695 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3133,"bootTime":1726481856,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:09:48.776915  274695 start.go:139] virtualization: kvm guest
	I0916 11:09:48.779376  274695 out.go:177] * [no-preload-349453] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:09:48.780693  274695 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:09:48.780693  274695 notify.go:220] Checking for updates...
	I0916 11:09:48.782771  274695 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:09:48.783942  274695 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:09:48.784951  274695 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:09:48.785992  274695 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:09:48.787325  274695 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:09:48.789055  274695 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:09:48.789761  274695 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:09:48.816058  274695 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:09:48.816173  274695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:09:48.872573  274695 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:84 SystemTime:2024-09-16 11:09:48.861917048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:09:48.872710  274695 docker.go:318] overlay module found
	I0916 11:09:48.874792  274695 out.go:177] * Using the docker driver based on existing profile
	I0916 11:09:48.876381  274695 start.go:297] selected driver: docker
	I0916 11:09:48.876396  274695 start.go:901] validating driver "docker" against &{Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:09:48.876482  274695 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:09:48.877396  274695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:09:48.937469  274695 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:84 SystemTime:2024-09-16 11:09:48.927117526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:09:48.937828  274695 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:09:48.937861  274695 cni.go:84] Creating CNI manager for ""
	I0916 11:09:48.937920  274695 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:09:48.937961  274695 start.go:340] cluster config:
	{Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:09:48.939958  274695 out.go:177] * Starting "no-preload-349453" primary control-plane node in "no-preload-349453" cluster
	I0916 11:09:48.941389  274695 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:09:48.942657  274695 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:09:48.943944  274695 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:09:48.944031  274695 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:09:48.944121  274695 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/config.json ...
	I0916 11:09:48.944323  274695 cache.go:107] acquiring lock: {Name:mk505f3dd823c459cfb83f2d2a39affe63c4c40e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944366  274695 cache.go:107] acquiring lock: {Name:mk612053845ede903900e7b583df14a07089be08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944387  274695 cache.go:107] acquiring lock: {Name:mkb7cb231873e7918d3e306be4ec4f6091d91485 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944439  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0916 11:09:48.944446  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0916 11:09:48.944431  274695 cache.go:107] acquiring lock: {Name:mkd9c658f7569779b8a27d53e97cc0f70f55a845 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944455  274695 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 69.965µs
	I0916 11:09:48.944451  274695 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 91.023µs
	I0916 11:09:48.944470  274695 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0916 11:09:48.944470  274695 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0916 11:09:48.944322  274695 cache.go:107] acquiring lock: {Name:mk0f2d9e0670c46fe9eb165a8119acf30531a2e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944483  274695 cache.go:107] acquiring lock: {Name:mk8275b1fd51b04034df297d05c3d74274567a6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944498  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0916 11:09:48.944504  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0916 11:09:48.944507  274695 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 77.168µs
	I0916 11:09:48.944511  274695 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 202.159µs
	I0916 11:09:48.944515  274695 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0916 11:09:48.944519  274695 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0916 11:09:48.944519  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0916 11:09:48.944527  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 11:09:48.944530  274695 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 49.3µs
	I0916 11:09:48.944527  274695 cache.go:107] acquiring lock: {Name:mk0b25b3ebef8c92ed85c693112bf4f2b400d9b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944537  274695 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 221.354µs
	I0916 11:09:48.944545  274695 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0916 11:09:48.944537  274695 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0916 11:09:48.944548  274695 cache.go:107] acquiring lock: {Name:mkd90d764df5e26e345f1c24540d37a0e89a5b18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944560  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0916 11:09:48.944566  274695 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 41.533µs
	I0916 11:09:48.944573  274695 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0916 11:09:48.944604  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0916 11:09:48.944610  274695 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 64.195µs
	I0916 11:09:48.944617  274695 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0916 11:09:48.944624  274695 cache.go:87] Successfully saved all images to host disk.
	W0916 11:09:48.969191  274695 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:09:48.969211  274695 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:09:48.969289  274695 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:09:48.969306  274695 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:09:48.969311  274695 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:09:48.969319  274695 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:09:48.969326  274695 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:09:49.025446  274695 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:09:49.025486  274695 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:09:49.025515  274695 start.go:360] acquireMachinesLock for no-preload-349453: {Name:mk8558ad422c1a28af392329b5800e6b7ec410a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:49.025584  274695 start.go:364] duration metric: took 51.504µs to acquireMachinesLock for "no-preload-349453"
	I0916 11:09:49.025602  274695 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:09:49.025610  274695 fix.go:54] fixHost starting: 
	I0916 11:09:49.025910  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:49.044053  274695 fix.go:112] recreateIfNeeded on no-preload-349453: state=Stopped err=<nil>
	W0916 11:09:49.044108  274695 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:09:49.045989  274695 out.go:177] * Restarting existing docker container for "no-preload-349453" ...
	I0916 11:09:45.849283  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:48.349153  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:48.687452  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:48.687946  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:48.687995  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:09:48.688038  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:09:48.726246  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:48.726273  254463 cri.go:89] found id: ""
	I0916 11:09:48.726285  254463 logs.go:276] 1 containers: [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd]
	I0916 11:09:48.726349  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:48.729998  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:09:48.730067  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:09:48.770403  254463 cri.go:89] found id: ""
	I0916 11:09:48.770433  254463 logs.go:276] 0 containers: []
	W0916 11:09:48.770443  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:09:48.770451  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:09:48.770511  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:09:48.807549  254463 cri.go:89] found id: ""
	I0916 11:09:48.807580  254463 logs.go:276] 0 containers: []
	W0916 11:09:48.807593  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:09:48.807601  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:09:48.807655  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:09:48.854558  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:48.854578  254463 cri.go:89] found id: ""
	I0916 11:09:48.854585  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:09:48.854629  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:48.858424  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:09:48.858482  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:09:48.893983  254463 cri.go:89] found id: ""
	I0916 11:09:48.894013  254463 logs.go:276] 0 containers: []
	W0916 11:09:48.894024  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:09:48.894032  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:09:48.894090  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:09:48.931964  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:09:48.931987  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:48.931991  254463 cri.go:89] found id: ""
	I0916 11:09:48.932000  254463 logs.go:276] 2 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9]
	I0916 11:09:48.932050  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:48.936381  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:48.940101  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:09:48.940183  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:09:48.979539  254463 cri.go:89] found id: ""
	I0916 11:09:48.979566  254463 logs.go:276] 0 containers: []
	W0916 11:09:48.979578  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:09:48.979585  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:09:48.979645  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:09:49.014921  254463 cri.go:89] found id: ""
	I0916 11:09:49.014951  254463 logs.go:276] 0 containers: []
	W0916 11:09:49.014964  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:09:49.014983  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:09:49.014998  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:09:49.056665  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:09:49.056697  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:09:49.110424  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:09:49.110453  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:09:49.178554  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:09:49.178592  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:09:49.244586  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
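The refused connections here (localhost:8443, and the healthz endpoint probed below) are consistent with the apiserver simply not running yet during the restart, which is why the harness falls back to gathering container and journal logs. A manual probe from the host would look like this (illustrative; -k is needed because the apiserver serves a cluster-internal certificate):

    curl -k https://192.168.76.2:8443/healthz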
	I0916 11:09:49.244612  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:09:49.244629  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:49.285235  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:09:49.285264  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:49.385095  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:09:49.385133  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:09:49.409418  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:09:49.409454  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:09:49.445392  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:09:49.445422  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:51.983011  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:49.047145  274695 cli_runner.go:164] Run: docker start no-preload-349453
	I0916 11:09:49.345476  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:49.369895  274695 kic.go:430] container "no-preload-349453" state is running.
	I0916 11:09:49.370255  274695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:09:49.390076  274695 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/config.json ...
	I0916 11:09:49.390324  274695 machine.go:93] provisionDockerMachine start ...
	I0916 11:09:49.390405  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:49.409420  274695 main.go:141] libmachine: Using SSH client type: native
	I0916 11:09:49.409726  274695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0916 11:09:49.409751  274695 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:09:49.410474  274695 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33840->127.0.0.1:33068: read: connection reset by peer
	I0916 11:09:52.543274  274695 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-349453
	
	I0916 11:09:52.543304  274695 ubuntu.go:169] provisioning hostname "no-preload-349453"
	I0916 11:09:52.543357  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:52.561425  274695 main.go:141] libmachine: Using SSH client type: native
	I0916 11:09:52.561639  274695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0916 11:09:52.561659  274695 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-349453 && echo "no-preload-349453" | sudo tee /etc/hostname
	I0916 11:09:52.702731  274695 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-349453
	
	I0916 11:09:52.702807  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:52.720926  274695 main.go:141] libmachine: Using SSH client type: native
	I0916 11:09:52.721115  274695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0916 11:09:52.721133  274695 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-349453' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-349453/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-349453' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:09:52.852007  274695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
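	Aside: the hostname script above follows the Debian/Ubuntu convention of mapping the machine's own hostname to 127.0.1.1 rather than 127.0.0.1, rewriting an existing 127.0.1.1 entry in place and appending one only if none exists. A minimal standalone equivalent (hostname hardcoded here for illustration):

		HOST=no-preload-349453
		grep -q "[[:space:]]$HOST\$" /etc/hosts || echo "127.0.1.1 $HOST" | sudo tee -a /etc/hosts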
	I0916 11:09:52.852046  274695 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:09:52.852067  274695 ubuntu.go:177] setting up certificates
	I0916 11:09:52.852079  274695 provision.go:84] configureAuth start
	I0916 11:09:52.852141  274695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:09:52.869844  274695 provision.go:143] copyHostCerts
	I0916 11:09:52.869915  274695 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:09:52.869927  274695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:09:52.869991  274695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:09:52.870107  274695 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:09:52.870119  274695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:09:52.870146  274695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:09:52.870211  274695 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:09:52.870219  274695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:09:52.870248  274695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:09:52.870308  274695 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.no-preload-349453 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-349453]
	I0916 11:09:53.005905  274695 provision.go:177] copyRemoteCerts
	I0916 11:09:53.005958  274695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:09:53.005995  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:53.023517  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:53.120443  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:09:53.142805  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 11:09:53.166225  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:09:53.188868  274695 provision.go:87] duration metric: took 336.770749ms to configureAuth
	I0916 11:09:53.188907  274695 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:09:53.189114  274695 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:09:53.189127  274695 machine.go:96] duration metric: took 3.798788146s to provisionDockerMachine
	I0916 11:09:53.189135  274695 start.go:293] postStartSetup for "no-preload-349453" (driver="docker")
	I0916 11:09:53.189145  274695 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:09:53.189195  274695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:09:53.189233  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:53.206547  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:53.304863  274695 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:09:53.308040  274695 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:09:53.308080  274695 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:09:53.308092  274695 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:09:53.308101  274695 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:09:53.308115  274695 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:09:53.308178  274695 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:09:53.308280  274695 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:09:53.308405  274695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:09:53.316395  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:09:53.338548  274695 start.go:296] duration metric: took 149.394766ms for postStartSetup
	I0916 11:09:53.338650  274695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:09:53.338694  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:53.357422  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:53.452877  274695 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:09:53.457360  274695 fix.go:56] duration metric: took 4.43174375s for fixHost
	I0916 11:09:53.457384  274695 start.go:83] releasing machines lock for "no-preload-349453", held for 4.431788357s
	I0916 11:09:53.457450  274695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:09:53.475348  274695 ssh_runner.go:195] Run: cat /version.json
	I0916 11:09:53.475400  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:53.475417  274695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:09:53.475476  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:53.493461  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:53.494009  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:53.583400  274695 ssh_runner.go:195] Run: systemctl --version
	I0916 11:09:53.664600  274695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:09:53.669030  274695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:09:53.686361  274695 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:09:53.686447  274695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:09:53.694804  274695 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
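	The loopback patch above inserts a top-level "name" field and pins cniVersion, apparently because the CNI 1.0.0 config format requires every network config to be named; after patching, the file is equivalent to roughly:

		{
		  "cniVersion": "1.0.0",
		  "name": "loopback",
		  "type": "loopback"
		}

	The second find would then have renamed any bridge/podman configs to *.mk_disabled so that only the CNI minikube installs later (kindnet, per the recommendation further down) stays active; here none were present.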
	I0916 11:09:53.694831  274695 start.go:495] detecting cgroup driver to use...
	I0916 11:09:53.694862  274695 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:09:53.694907  274695 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:09:53.707615  274695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:09:53.719106  274695 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:09:53.719198  274695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:09:53.731307  274695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:09:53.741993  274695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:09:53.822112  274695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:09:53.892551  274695 docker.go:233] disabling docker service ...
	I0916 11:09:53.892640  274695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:09:53.904867  274695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:09:53.915797  274695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:09:53.997972  274695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:09:54.077247  274695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:09:54.088231  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:09:54.104123  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:09:54.113650  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:09:54.123084  274695 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:09:54.123150  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:09:54.132500  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:09:54.141637  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:09:54.150420  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:09:54.159442  274695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:09:54.169162  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:09:54.178447  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:09:54.187883  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
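	Taken together, the sed edits above pin the pause image, keep cgroupfs as the cgroup driver (SystemdCgroup = false, matching the cgroupfs driver detected on the host above), force the runc v2 shim, point the CNI conf_dir at /etc/cni/net.d, and enable unprivileged ports. A sketch of the resulting CRI section of /etc/containerd/config.toml (exact layout depends on the base image's stock config):

		[plugins."io.containerd.grpc.v1.cri"]
		  sandbox_image = "registry.k8s.io/pause:3.10"
		  restrict_oom_score_adj = false
		  enable_unprivileged_ports = true
		  [plugins."io.containerd.grpc.v1.cri".cni]
		    conf_dir = "/etc/cni/net.d"
		  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
		    runtime_type = "io.containerd.runc.v2"
		    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
		      SystemdCgroup = false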
	I0916 11:09:54.197946  274695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:09:54.205872  274695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:09:54.213572  274695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:54.289888  274695 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 11:09:54.379344  274695 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:09:54.379416  274695 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:09:54.383200  274695 start.go:563] Will wait 60s for crictl version
	I0916 11:09:54.383251  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:09:54.386338  274695 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:09:54.418191  274695 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
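	crictl can report the containerd runtime above because of the /etc/crictl.yaml written a moment earlier, which points it at containerd's socket and makes per-call --runtime-endpoint flags unnecessary; the write is equivalent to:

		printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
		sudo crictl version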
	I0916 11:09:54.418249  274695 ssh_runner.go:195] Run: containerd --version
	I0916 11:09:54.441777  274695 ssh_runner.go:195] Run: containerd --version
	I0916 11:09:54.467613  274695 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:09:50.847763  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:52.849026  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:54.849276  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:54.468958  274695 cli_runner.go:164] Run: docker network inspect no-preload-349453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:09:54.485947  274695 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0916 11:09:54.489631  274695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:09:54.500473  274695 kubeadm.go:883] updating cluster {Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:09:54.500611  274695 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:09:54.500665  274695 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:09:54.532760  274695 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:09:54.532781  274695 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:09:54.532790  274695 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.31.1 containerd true true} ...
	I0916 11:09:54.532898  274695 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-349453 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
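	Note the empty ExecStart= in the drop-in above: clearing ExecStart before redefining it is the standard systemd override idiom, since an ordinary service unit may not declare two ExecStart lines. Once the unit and drop-in are scp'd below, the merged result can be inspected on the node with:

		systemctl cat kubelet   # base kubelet.service plus the 10-kubeadm.conf drop-in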
	I0916 11:09:54.532956  274695 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:09:54.565820  274695 cni.go:84] Creating CNI manager for ""
	I0916 11:09:54.565853  274695 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:09:54.565868  274695 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:09:54.565894  274695 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-349453 NodeName:no-preload-349453 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:09:54.566029  274695 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-349453"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
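	The rendered config above stacks four documents in one file: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta3), a KubeletConfiguration, and a KubeProxyConfiguration. It is written to /var/tmp/minikube/kubeadm.yaml.new below and later diffed against the existing copy to decide whether the control plane needs reconfiguring. A file like this can also be sanity-checked offline (sketch, assuming a kubeadm binary matching v1.31 on PATH):

		kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new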
	
	I0916 11:09:54.566101  274695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:09:54.574595  274695 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:09:54.574664  274695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:09:54.583330  274695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0916 11:09:54.600902  274695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:09:54.617863  274695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0916 11:09:54.635791  274695 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:09:54.639161  274695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:09:54.649784  274695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:54.733077  274695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:09:54.746471  274695 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453 for IP: 192.168.94.2
	I0916 11:09:54.746493  274695 certs.go:194] generating shared ca certs ...
	I0916 11:09:54.746508  274695 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:54.746655  274695 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:09:54.746704  274695 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:09:54.746714  274695 certs.go:256] generating profile certs ...
	I0916 11:09:54.746801  274695 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.key
	I0916 11:09:54.746889  274695 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key.85f7849d
	I0916 11:09:54.746961  274695 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key
	I0916 11:09:54.747124  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:09:54.747163  274695 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:09:54.747174  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:09:54.747209  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:09:54.747242  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:09:54.747268  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:09:54.747337  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:09:54.748125  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:09:54.773659  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:09:54.798587  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:09:54.838039  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:09:54.866265  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:09:54.922112  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:09:54.949631  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:09:54.974851  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:09:54.998140  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:09:55.021759  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:09:55.047817  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:09:55.072006  274695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:09:55.090041  274695 ssh_runner.go:195] Run: openssl version
	I0916 11:09:55.095459  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:09:55.104870  274695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:09:55.108622  274695 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:09:55.108679  274695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:09:55.115169  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:09:55.124341  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:09:55.134032  274695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:09:55.137540  274695 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:09:55.137603  274695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:09:55.144314  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 11:09:55.153020  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:09:55.162713  274695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:09:55.166242  274695 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:09:55.166294  274695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:09:55.172872  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
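	The eight-hex-digit link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-name hashes, the layout OpenSSL's default verify path expects under /etc/ssl/certs; each name is simply the output of the x509 -hash call that precedes the ln, e.g.:

		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941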
	I0916 11:09:55.181466  274695 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:09:55.184964  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:09:55.191210  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:09:55.197521  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:09:55.204060  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:09:55.210455  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:09:55.217147  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
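	Each -checkend 86400 run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which lets minikube decide whether the existing control-plane certs can be reused as-is; the same check works for any certificate:

		openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo ok || echo "expires within 24h"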
	I0916 11:09:55.224151  274695 kubeadm.go:392] StartCluster: {Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:09:55.224234  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:09:55.224285  274695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:09:55.259697  274695 cri.go:89] found id: "30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d"
	I0916 11:09:55.259720  274695 cri.go:89] found id: "b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a"
	I0916 11:09:55.259775  274695 cri.go:89] found id: "6fe6dedc217401e73f5795b6cd5cfdd5d65a4314df29b9b5be8775dc661cbffa"
	I0916 11:09:55.259796  274695 cri.go:89] found id: "49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04"
	I0916 11:09:55.259804  274695 cri.go:89] found id: "a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969"
	I0916 11:09:55.259808  274695 cri.go:89] found id: "5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69"
	I0916 11:09:55.259812  274695 cri.go:89] found id: "0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d"
	I0916 11:09:55.259816  274695 cri.go:89] found id: "5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817"
	I0916 11:09:55.259820  274695 cri.go:89] found id: ""
	I0916 11:09:55.259881  274695 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0916 11:09:55.273392  274695 cri.go:116] JSON = null
	W0916 11:09:55.273443  274695 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
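	The warning above compares two different views of the runtime: crictl ps -a lists every kube-system container containerd has a record of, exited ones included, while runc list only reports containers with live runc state. Immediately after the docker start above, the eight containers from the previous boot exist as CRI records but nothing is running yet, so the paused-container check sees 0 vs 8 and minikube falls through to a normal cluster restart. The two views can be reproduced directly:

		sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
		sudo runc --root /run/containerd/runc/k8s.io list -f json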
	I0916 11:09:55.273502  274695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:09:55.282466  274695 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 11:09:55.282486  274695 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 11:09:55.282539  274695 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 11:09:55.291007  274695 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:09:55.291787  274695 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-349453" does not appear in /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:09:55.292250  274695 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3687/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-349453" cluster setting kubeconfig missing "no-preload-349453" context setting]
	I0916 11:09:55.292937  274695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:55.294364  274695 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 11:09:55.303573  274695 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
	I0916 11:09:55.303619  274695 kubeadm.go:597] duration metric: took 21.126232ms to restartPrimaryControlPlane
	I0916 11:09:55.303631  274695 kubeadm.go:394] duration metric: took 79.507692ms to StartCluster
	I0916 11:09:55.303656  274695 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:55.303778  274695 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:09:55.304930  274695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:55.305137  274695 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:09:55.305211  274695 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:09:55.305322  274695 addons.go:69] Setting storage-provisioner=true in profile "no-preload-349453"
	I0916 11:09:55.305336  274695 addons.go:69] Setting default-storageclass=true in profile "no-preload-349453"
	I0916 11:09:55.305342  274695 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:09:55.305350  274695 addons.go:69] Setting dashboard=true in profile "no-preload-349453"
	I0916 11:09:55.305372  274695 addons.go:234] Setting addon dashboard=true in "no-preload-349453"
	W0916 11:09:55.305382  274695 addons.go:243] addon dashboard should already be in state true
	I0916 11:09:55.305353  274695 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-349453"
	I0916 11:09:55.305401  274695 addons.go:69] Setting metrics-server=true in profile "no-preload-349453"
	I0916 11:09:55.305426  274695 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:55.305428  274695 addons.go:234] Setting addon metrics-server=true in "no-preload-349453"
	W0916 11:09:55.305438  274695 addons.go:243] addon metrics-server should already be in state true
	I0916 11:09:55.305485  274695 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:55.305354  274695 addons.go:234] Setting addon storage-provisioner=true in "no-preload-349453"
	W0916 11:09:55.305501  274695 addons.go:243] addon storage-provisioner should already be in state true
	I0916 11:09:55.305532  274695 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:55.305781  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:55.305926  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:55.305931  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:55.306010  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:55.307090  274695 out.go:177] * Verifying Kubernetes components...
	I0916 11:09:55.308706  274695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:55.330513  274695 addons.go:234] Setting addon default-storageclass=true in "no-preload-349453"
	W0916 11:09:55.330534  274695 addons.go:243] addon default-storageclass should already be in state true
	I0916 11:09:55.330561  274695 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:55.330918  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:55.331334  274695 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0916 11:09:55.331338  274695 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0916 11:09:55.333189  274695 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:09:55.333205  274695 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 11:09:55.333269  274695 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 11:09:55.333352  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:55.334937  274695 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0916 11:09:56.983399  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:09:56.983465  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:09:56.983527  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:09:57.016275  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:09:57.016298  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:57.016303  254463 cri.go:89] found id: ""
	I0916 11:09:57.016312  254463 logs.go:276] 2 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd]
	I0916 11:09:57.016363  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:57.019731  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:57.022928  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:09:57.022987  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:09:57.066015  254463 cri.go:89] found id: ""
	I0916 11:09:57.066043  254463 logs.go:276] 0 containers: []
	W0916 11:09:57.066055  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:09:57.066062  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:09:57.066116  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:09:57.100119  254463 cri.go:89] found id: ""
	I0916 11:09:57.100143  254463 logs.go:276] 0 containers: []
	W0916 11:09:57.100154  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:09:57.100161  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:09:57.100218  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:09:57.142278  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:57.142305  254463 cri.go:89] found id: ""
	I0916 11:09:57.142314  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:09:57.142369  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:57.146012  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:09:57.146093  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:09:57.180703  254463 cri.go:89] found id: ""
	I0916 11:09:57.180730  254463 logs.go:276] 0 containers: []
	W0916 11:09:57.180741  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:09:57.180749  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:09:57.180804  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:09:57.213555  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:09:57.213576  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:57.213579  254463 cri.go:89] found id: ""
	I0916 11:09:57.213586  254463 logs.go:276] 2 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9]
	I0916 11:09:57.213630  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:57.216893  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:57.220067  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:09:57.220128  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:09:57.261058  254463 cri.go:89] found id: ""
	I0916 11:09:57.261086  254463 logs.go:276] 0 containers: []
	W0916 11:09:57.261098  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:09:57.261105  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:09:57.261163  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:09:57.296886  254463 cri.go:89] found id: ""
	I0916 11:09:57.296913  254463 logs.go:276] 0 containers: []
	W0916 11:09:57.296921  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:09:57.296936  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:09:57.296951  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:09:57.333205  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:09:57.333242  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:57.372259  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:09:57.372300  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:09:57.413680  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:09:57.413713  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:09:57.486222  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:09:57.486264  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:09:55.335030  274695 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:55.335047  274695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:09:55.335088  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:55.336314  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0916 11:09:55.336347  274695 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0916 11:09:55.336405  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:55.357420  274695 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:55.357447  274695 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:09:55.357506  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:55.366387  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:55.367347  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:55.369352  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:55.388562  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:55.520679  274695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:09:55.542923  274695 node_ready.go:35] waiting up to 6m0s for node "no-preload-349453" to be "Ready" ...
	I0916 11:09:55.621336  274695 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 11:09:55.621435  274695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0916 11:09:55.630720  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0916 11:09:55.630753  274695 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0916 11:09:55.631139  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:55.647847  274695 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 11:09:55.647928  274695 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 11:09:55.728814  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:55.734435  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0916 11:09:55.734467  274695 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0916 11:09:55.830027  274695 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:09:55.830070  274695 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 11:09:55.837018  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0916 11:09:55.837046  274695 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0916 11:09:55.852569  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:09:55.933470  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0916 11:09:55.933499  274695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0916 11:09:56.036131  274695 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.036176  274695 retry.go:31] will retry after 327.547508ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.040318  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0916 11:09:56.040402  274695 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0916 11:09:56.044576  274695 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.044610  274695 retry.go:31] will retry after 125.943539ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.133467  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0916 11:09:56.133501  274695 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0916 11:09:56.171627  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:56.229693  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0916 11:09:56.229778  274695 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0916 11:09:56.324009  274695 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.324059  274695 retry.go:31] will retry after 179.364541ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.329914  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0916 11:09:56.329944  274695 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0916 11:09:56.364514  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:56.424109  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:09:56.424146  274695 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0916 11:09:56.503542  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:09:56.523382  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:09:56.849591  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:59.349876  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:59.129375  274695 node_ready.go:49] node "no-preload-349453" has status "Ready":"True"
	I0916 11:09:59.129482  274695 node_ready.go:38] duration metric: took 3.586509916s for node "no-preload-349453" to be "Ready" ...
	I0916 11:09:59.129511  274695 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:09:59.146545  274695 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.236575  274695 pod_ready.go:93] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.236620  274695 pod_ready.go:82] duration metric: took 90.034166ms for pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.236641  274695 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.244737  274695 pod_ready.go:93] pod "etcd-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.244763  274695 pod_ready.go:82] duration metric: took 8.113529ms for pod "etcd-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.244779  274695 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.326680  274695 pod_ready.go:93] pod "kube-apiserver-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.326711  274695 pod_ready.go:82] duration metric: took 81.923811ms for pod "kube-apiserver-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.326724  274695 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.331650  274695 pod_ready.go:93] pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.331673  274695 pod_ready.go:82] duration metric: took 4.941014ms for pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.331686  274695 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n7m28" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.337818  274695 pod_ready.go:93] pod "kube-proxy-n7m28" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.337846  274695 pod_ready.go:82] duration metric: took 6.152494ms for pod "kube-proxy-n7m28" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.337858  274695 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.423673  274695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (3.251989478s)
	I0916 11:09:59.732619  274695 pod_ready.go:93] pod "kube-scheduler-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.732647  274695 pod_ready.go:82] duration metric: took 394.781316ms for pod "kube-scheduler-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.732659  274695 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:01.340867  274695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.976318642s)
	I0916 11:10:01.340987  274695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.837413291s)
	I0916 11:10:01.341014  274695 addons.go:475] Verifying addon metrics-server=true in "no-preload-349453"
	I0916 11:10:01.537050  274695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.013618079s)
	I0916 11:10:01.538736  274695 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-349453 addons enable metrics-server
	
	I0916 11:10:01.540676  274695 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0916 11:10:01.542213  274695 addons.go:510] duration metric: took 6.237009332s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0916 11:10:01.741388  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:01.350460  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:03.848603  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:04.348851  260870 pod_ready.go:93] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:10:04.348878  260870 pod_ready.go:82] duration metric: took 34.506013242s for pod "etcd-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:04.348893  260870 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:04.353032  260870 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:10:04.353051  260870 pod_ready.go:82] duration metric: took 4.150771ms for pod "kube-apiserver-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:04.353060  260870 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:07.550714  254463 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.06442499s)
	W0916 11:10:07.550762  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0916 11:10:07.550771  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:10:07.550784  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:10:07.596479  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:07.596522  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:07.640033  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:07.640079  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:07.665505  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:07.665549  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:04.238268  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:06.239302  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:08.243920  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:06.359545  260870 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:08.859689  260870 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:07.711821  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:07.711862  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:10.283999  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:12.114848  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:42500->192.168.76.2:8443: read: connection reset by peer
	I0916 11:10:12.114981  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:12.115056  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:12.152497  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:12.152533  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:10:12.152540  254463 cri.go:89] found id: ""
	I0916 11:10:12.152548  254463 logs.go:276] 2 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd]
	I0916 11:10:12.152602  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:12.156067  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:12.159264  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:12.159327  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:12.190731  254463 cri.go:89] found id: ""
	I0916 11:10:12.190754  254463 logs.go:276] 0 containers: []
	W0916 11:10:12.190765  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:12.190772  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:12.190827  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:12.222220  254463 cri.go:89] found id: ""
	I0916 11:10:12.222242  254463 logs.go:276] 0 containers: []
	W0916 11:10:12.222250  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:12.222256  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:12.222298  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:12.255730  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:12.255822  254463 cri.go:89] found id: ""
	I0916 11:10:12.255829  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:12.255876  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:12.259472  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:12.259542  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:12.291555  254463 cri.go:89] found id: ""
	I0916 11:10:12.291579  254463 logs.go:276] 0 containers: []
	W0916 11:10:12.291589  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:12.291596  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:12.291651  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:12.324287  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:12.324321  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:10:12.324328  254463 cri.go:89] found id: ""
	I0916 11:10:12.324337  254463 logs.go:276] 2 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9]
	I0916 11:10:12.324392  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:12.327731  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:12.330880  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:12.330944  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:12.375367  254463 cri.go:89] found id: ""
	I0916 11:10:12.375395  254463 logs.go:276] 0 containers: []
	W0916 11:10:12.375407  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:12.375415  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:12.375478  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:12.415075  254463 cri.go:89] found id: ""
	I0916 11:10:12.415095  254463 logs.go:276] 0 containers: []
	W0916 11:10:12.415103  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:12.415115  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:12.415126  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:12.458886  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:12.458930  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:12.496500  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:12.496530  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:12.567297  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:12.567333  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:12.624232  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:12.624255  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:10:12.624270  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:10:12.660261  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:12.660295  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:10.738756  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:13.238098  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:11.360052  260870 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:13.859124  260870 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:14.365001  260870 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:10:14.365026  260870 pod_ready.go:82] duration metric: took 10.011960541s for pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:14.365036  260870 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w2kp4" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:14.369505  260870 pod_ready.go:93] pod "kube-proxy-w2kp4" in "kube-system" namespace has status "Ready":"True"
	I0916 11:10:14.369528  260870 pod_ready.go:82] duration metric: took 4.48629ms for pod "kube-proxy-w2kp4" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:14.369536  260870 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:12.718187  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:12.718226  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:12.753095  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:10:12.753121  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:10:12.786230  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:12.786255  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:12.828221  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:12.828253  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:15.348814  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:15.349283  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:15.349344  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:15.349400  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:15.384332  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:15.384353  254463 cri.go:89] found id: ""
	I0916 11:10:15.384362  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:15.384418  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:15.387695  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:15.387808  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:15.420398  254463 cri.go:89] found id: ""
	I0916 11:10:15.420425  254463 logs.go:276] 0 containers: []
	W0916 11:10:15.420438  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:15.420447  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:15.420496  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:15.454005  254463 cri.go:89] found id: ""
	I0916 11:10:15.454035  254463 logs.go:276] 0 containers: []
	W0916 11:10:15.454049  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:15.454057  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:15.454111  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:15.488040  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:15.488065  254463 cri.go:89] found id: ""
	I0916 11:10:15.488072  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:15.488121  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:15.491658  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:15.491730  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:15.526243  254463 cri.go:89] found id: ""
	I0916 11:10:15.526276  254463 logs.go:276] 0 containers: []
	W0916 11:10:15.526289  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:15.526297  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:15.526356  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:15.563058  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:15.563078  254463 cri.go:89] found id: ""
	I0916 11:10:15.563085  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:15.563129  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:15.566707  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:15.566775  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:15.600693  254463 cri.go:89] found id: ""
	I0916 11:10:15.600719  254463 logs.go:276] 0 containers: []
	W0916 11:10:15.600728  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:15.600734  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:15.600786  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:15.634854  254463 cri.go:89] found id: ""
	I0916 11:10:15.634878  254463 logs.go:276] 0 containers: []
	W0916 11:10:15.634886  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:15.634894  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:15.634912  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:15.656900  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:15.656944  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:15.716708  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:15.716734  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:15.716750  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:15.756043  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:15.756072  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:15.815128  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:15.815167  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:15.851703  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:15.851729  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:15.896779  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:15.896822  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:15.933761  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:15.933790  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:15.738612  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:18.238493  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:16.375521  260870 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:18.876191  260870 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:18.508158  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:18.508652  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:18.508704  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:18.508768  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:18.541635  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:18.541657  254463 cri.go:89] found id: ""
	I0916 11:10:18.541666  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:18.541721  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:18.545157  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:18.545220  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:18.577944  254463 cri.go:89] found id: ""
	I0916 11:10:18.577967  254463 logs.go:276] 0 containers: []
	W0916 11:10:18.577978  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:18.577985  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:18.578041  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:18.610307  254463 cri.go:89] found id: ""
	I0916 11:10:18.610334  254463 logs.go:276] 0 containers: []
	W0916 11:10:18.610345  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:18.610353  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:18.610410  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:18.643372  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:18.643398  254463 cri.go:89] found id: ""
	I0916 11:10:18.643409  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:18.643473  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:18.647339  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:18.647416  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:18.683669  254463 cri.go:89] found id: ""
	I0916 11:10:18.683696  254463 logs.go:276] 0 containers: []
	W0916 11:10:18.683708  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:18.683716  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:18.683813  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:18.717547  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:18.717569  254463 cri.go:89] found id: ""
	I0916 11:10:18.717578  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:18.717635  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:18.721314  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:18.721386  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:18.756024  254463 cri.go:89] found id: ""
	I0916 11:10:18.756055  254463 logs.go:276] 0 containers: []
	W0916 11:10:18.756065  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:18.756071  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:18.756120  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:18.789325  254463 cri.go:89] found id: ""
	I0916 11:10:18.789350  254463 logs.go:276] 0 containers: []
	W0916 11:10:18.789359  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:18.789370  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:18.789384  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:18.860240  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:18.860279  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:18.882796  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:18.882826  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:18.941553  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:18.941577  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:18.941593  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:18.979008  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:18.979039  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:19.039131  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:19.039170  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:19.075898  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:19.075929  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:19.119292  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:19.119332  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:21.657984  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:21.658407  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:21.658456  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:21.658511  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:21.692596  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:21.692621  254463 cri.go:89] found id: ""
	I0916 11:10:21.692630  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:21.692685  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:21.696206  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:21.696264  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:21.729888  254463 cri.go:89] found id: ""
	I0916 11:10:21.729910  254463 logs.go:276] 0 containers: []
	W0916 11:10:21.729918  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:21.729937  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:21.729981  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:21.763929  254463 cri.go:89] found id: ""
	I0916 11:10:21.763962  254463 logs.go:276] 0 containers: []
	W0916 11:10:21.763974  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:21.763981  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:21.764047  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:21.799235  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:21.799256  254463 cri.go:89] found id: ""
	I0916 11:10:21.799264  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:21.799318  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:21.802780  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:21.802855  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:21.839854  254463 cri.go:89] found id: ""
	I0916 11:10:21.839880  254463 logs.go:276] 0 containers: []
	W0916 11:10:21.839888  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:21.839894  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:21.839953  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:21.873977  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:21.874003  254463 cri.go:89] found id: ""
	I0916 11:10:21.874013  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:21.874068  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:21.878108  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:21.878178  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:21.911328  254463 cri.go:89] found id: ""
	I0916 11:10:21.911357  254463 logs.go:276] 0 containers: []
	W0916 11:10:21.911366  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:21.911372  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:21.911425  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:21.946393  254463 cri.go:89] found id: ""
	I0916 11:10:21.946423  254463 logs.go:276] 0 containers: []
	W0916 11:10:21.946435  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:21.946446  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:21.946461  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:21.990397  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:21.990439  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:22.027571  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:22.027598  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:22.101651  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:22.101686  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:22.122234  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:22.122269  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:22.180802  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:22.180833  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:22.180848  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:22.216487  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:22.216515  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:22.279504  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:22.279551  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:20.240044  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:22.738619  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:21.375180  260870 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:23.375730  260870 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:24.375698  260870 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:10:24.375722  260870 pod_ready.go:82] duration metric: took 10.006179243s for pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:24.375730  260870 pod_ready.go:39] duration metric: took 1m11.05010529s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:10:24.375761  260870 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:10:24.375792  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:24.375850  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:24.410054  260870 cri.go:89] found id: "5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:24.410074  260870 cri.go:89] found id: ""
	I0916 11:10:24.410084  260870 logs.go:276] 1 containers: [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd]
	I0916 11:10:24.410144  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.413762  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:24.413822  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:24.446581  260870 cri.go:89] found id: "34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:24.446609  260870 cri.go:89] found id: ""
	I0916 11:10:24.446619  260870 logs.go:276] 1 containers: [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927]
	I0916 11:10:24.446679  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.450048  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:24.450108  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:24.483854  260870 cri.go:89] found id: "47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:24.483876  260870 cri.go:89] found id: ""
	I0916 11:10:24.483883  260870 logs.go:276] 1 containers: [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c]
	I0916 11:10:24.483937  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.487518  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:24.487579  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:24.520237  260870 cri.go:89] found id: "6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:24.520257  260870 cri.go:89] found id: ""
	I0916 11:10:24.520265  260870 logs.go:276] 1 containers: [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58]
	I0916 11:10:24.520325  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.523786  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:24.523857  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:24.556906  260870 cri.go:89] found id: "a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:24.556931  260870 cri.go:89] found id: ""
	I0916 11:10:24.556938  260870 logs.go:276] 1 containers: [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e]
	I0916 11:10:24.556982  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.560497  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:24.560571  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:24.593490  260870 cri.go:89] found id: "8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:24.593510  260870 cri.go:89] found id: ""
	I0916 11:10:24.593517  260870 logs.go:276] 1 containers: [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c]
	I0916 11:10:24.593558  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.597013  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:24.597068  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:24.629128  260870 cri.go:89] found id: "00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:24.629149  260870 cri.go:89] found id: ""
	I0916 11:10:24.629155  260870 logs.go:276] 1 containers: [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285]
	I0916 11:10:24.629201  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.632565  260870 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:24.632588  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:24.653890  260870 logs.go:123] Gathering logs for etcd [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927] ...
	I0916 11:10:24.653925  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:24.689516  260870 logs.go:123] Gathering logs for coredns [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c] ...
	I0916 11:10:24.689544  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:24.723583  260870 logs.go:123] Gathering logs for kube-scheduler [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58] ...
	I0916 11:10:24.723610  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:24.761101  260870 logs.go:123] Gathering logs for container status ...
	I0916 11:10:24.761135  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:24.798289  260870 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:24.798316  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:24.858329  260870 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:24.858366  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:24.924002  260870 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:24.924042  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:10:25.040339  260870 logs.go:123] Gathering logs for kube-apiserver [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd] ...
	I0916 11:10:25.040371  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:25.092353  260870 logs.go:123] Gathering logs for kube-proxy [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e] ...
	I0916 11:10:25.092390  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:25.129881  260870 logs.go:123] Gathering logs for kube-controller-manager [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c] ...
	I0916 11:10:25.129913  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:25.176606  260870 logs.go:123] Gathering logs for kindnet [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285] ...
	I0916 11:10:25.176643  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:24.814913  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:24.815331  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:24.815406  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:24.815468  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:24.851174  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:24.851217  254463 cri.go:89] found id: ""
	I0916 11:10:24.851226  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:24.851290  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.855458  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:24.855530  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:24.894464  254463 cri.go:89] found id: ""
	I0916 11:10:24.894484  254463 logs.go:276] 0 containers: []
	W0916 11:10:24.894491  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:24.894498  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:24.894540  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:24.932639  254463 cri.go:89] found id: ""
	I0916 11:10:24.932678  254463 logs.go:276] 0 containers: []
	W0916 11:10:24.932686  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:24.932691  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:24.932736  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:24.969712  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:24.969795  254463 cri.go:89] found id: ""
	I0916 11:10:24.969807  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:24.969872  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.973484  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:24.973557  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:25.014852  254463 cri.go:89] found id: ""
	I0916 11:10:25.014926  254463 logs.go:276] 0 containers: []
	W0916 11:10:25.014938  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:25.014944  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:25.015001  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:25.051032  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:25.051057  254463 cri.go:89] found id: ""
	I0916 11:10:25.051067  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:25.051128  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:25.054719  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:25.054797  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:25.093048  254463 cri.go:89] found id: ""
	I0916 11:10:25.093074  254463 logs.go:276] 0 containers: []
	W0916 11:10:25.093084  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:25.093092  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:25.093144  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:25.131337  254463 cri.go:89] found id: ""
	I0916 11:10:25.131374  254463 logs.go:276] 0 containers: []
	W0916 11:10:25.131387  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:25.131405  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:25.131426  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:25.195758  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:25.195798  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:25.232113  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:25.232141  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:25.277260  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:25.277300  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:25.314477  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:25.314503  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:25.391725  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:25.391784  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:25.413044  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:25.413079  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:25.474224  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:25.474246  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:25.474258  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:25.238565  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:27.737733  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:27.712923  260870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:10:27.724989  260870 api_server.go:72] duration metric: took 1m15.390531014s to wait for apiserver process to appear ...
	I0916 11:10:27.725015  260870 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:10:27.725048  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:27.725090  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:27.758530  260870 cri.go:89] found id: "5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:27.758558  260870 cri.go:89] found id: ""
	I0916 11:10:27.758567  260870 logs.go:276] 1 containers: [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd]
	I0916 11:10:27.758613  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.762091  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:27.762160  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:27.794955  260870 cri.go:89] found id: "34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:27.794975  260870 cri.go:89] found id: ""
	I0916 11:10:27.794982  260870 logs.go:276] 1 containers: [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927]
	I0916 11:10:27.795027  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.798651  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:27.798729  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:27.832743  260870 cri.go:89] found id: "47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:27.832764  260870 cri.go:89] found id: ""
	I0916 11:10:27.832772  260870 logs.go:276] 1 containers: [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c]
	I0916 11:10:27.832815  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.836354  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:27.836425  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:27.869614  260870 cri.go:89] found id: "6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:27.869635  260870 cri.go:89] found id: ""
	I0916 11:10:27.869644  260870 logs.go:276] 1 containers: [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58]
	I0916 11:10:27.869703  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.873305  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:27.873379  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:27.906796  260870 cri.go:89] found id: "a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:27.906818  260870 cri.go:89] found id: ""
	I0916 11:10:27.906827  260870 logs.go:276] 1 containers: [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e]
	I0916 11:10:27.906881  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.910467  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:27.910528  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:27.947119  260870 cri.go:89] found id: "8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:27.947147  260870 cri.go:89] found id: ""
	I0916 11:10:27.947156  260870 logs.go:276] 1 containers: [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c]
	I0916 11:10:27.947216  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.951709  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:27.951800  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:27.984740  260870 cri.go:89] found id: "00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:27.984762  260870 cri.go:89] found id: ""
	I0916 11:10:27.984771  260870 logs.go:276] 1 containers: [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285]
	I0916 11:10:27.984830  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.988397  260870 logs.go:123] Gathering logs for container status ...
	I0916 11:10:27.988425  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:28.025884  260870 logs.go:123] Gathering logs for kube-apiserver [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd] ...
	I0916 11:10:28.025924  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:28.077609  260870 logs.go:123] Gathering logs for coredns [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c] ...
	I0916 11:10:28.077647  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:28.116119  260870 logs.go:123] Gathering logs for kube-scheduler [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58] ...
	I0916 11:10:28.116146  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:28.154443  260870 logs.go:123] Gathering logs for kube-proxy [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e] ...
	I0916 11:10:28.154480  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:28.192048  260870 logs.go:123] Gathering logs for kindnet [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285] ...
	I0916 11:10:28.192076  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:28.230393  260870 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:28.230435  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:28.293330  260870 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:28.293363  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:28.355035  260870 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:28.355073  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:28.376404  260870 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:28.376441  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:10:28.485749  260870 logs.go:123] Gathering logs for etcd [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927] ...
	I0916 11:10:28.485786  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:28.526060  260870 logs.go:123] Gathering logs for kube-controller-manager [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c] ...
	I0916 11:10:28.526099  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:28.013215  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:28.013660  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:28.013720  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:28.013775  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:28.052332  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:28.052358  254463 cri.go:89] found id: ""
	I0916 11:10:28.052366  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:28.052414  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:28.056409  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:28.056477  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:28.091702  254463 cri.go:89] found id: ""
	I0916 11:10:28.091731  254463 logs.go:276] 0 containers: []
	W0916 11:10:28.091784  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:28.091792  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:28.091851  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:28.126028  254463 cri.go:89] found id: ""
	I0916 11:10:28.126052  254463 logs.go:276] 0 containers: []
	W0916 11:10:28.126063  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:28.126076  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:28.126133  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:28.163202  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:28.163249  254463 cri.go:89] found id: ""
	I0916 11:10:28.163257  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:28.163299  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:28.166659  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:28.166722  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:28.201886  254463 cri.go:89] found id: ""
	I0916 11:10:28.201910  254463 logs.go:276] 0 containers: []
	W0916 11:10:28.201919  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:28.201926  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:28.201984  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:28.246518  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:28.246616  254463 cri.go:89] found id: ""
	I0916 11:10:28.246637  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:28.246722  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:28.252289  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:28.252395  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:28.286426  254463 cri.go:89] found id: ""
	I0916 11:10:28.286449  254463 logs.go:276] 0 containers: []
	W0916 11:10:28.286457  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:28.286463  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:28.286519  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:28.321297  254463 cri.go:89] found id: ""
	I0916 11:10:28.321321  254463 logs.go:276] 0 containers: []
	W0916 11:10:28.321328  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:28.321336  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:28.321348  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:28.403374  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:28.403422  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:28.426647  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:28.426684  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:28.496928  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:28.496947  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:28.496957  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:28.538666  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:28.538694  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:28.607309  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:28.607350  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:28.641335  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:28.641365  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:28.687488  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:28.687527  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:31.224849  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:31.225350  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:31.225417  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:31.225483  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:31.262633  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:31.262659  254463 cri.go:89] found id: ""
	I0916 11:10:31.262668  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:31.262726  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.266801  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:31.266884  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:31.302134  254463 cri.go:89] found id: ""
	I0916 11:10:31.302165  254463 logs.go:276] 0 containers: []
	W0916 11:10:31.302176  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:31.302183  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:31.302239  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:31.338759  254463 cri.go:89] found id: ""
	I0916 11:10:31.338781  254463 logs.go:276] 0 containers: []
	W0916 11:10:31.338789  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:31.338796  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:31.338874  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:31.375371  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:31.375400  254463 cri.go:89] found id: ""
	I0916 11:10:31.375410  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:31.375462  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.379039  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:31.379109  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:31.414260  254463 cri.go:89] found id: ""
	I0916 11:10:31.414282  254463 logs.go:276] 0 containers: []
	W0916 11:10:31.414290  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:31.414295  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:31.414353  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:31.450723  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:31.450747  254463 cri.go:89] found id: ""
	I0916 11:10:31.450760  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:31.450816  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.454785  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:31.454864  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:31.497353  254463 cri.go:89] found id: ""
	I0916 11:10:31.497385  254463 logs.go:276] 0 containers: []
	W0916 11:10:31.497398  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:31.497409  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:31.497458  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:31.532978  254463 cri.go:89] found id: ""
	I0916 11:10:31.533013  254463 logs.go:276] 0 containers: []
	W0916 11:10:31.533022  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:31.533031  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:31.533042  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:31.613145  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:31.613191  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:31.634722  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:31.634750  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:31.702216  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:31.702243  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:31.702257  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:31.744782  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:31.744814  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:31.811622  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:31.811663  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:31.849645  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:31.849684  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:31.895810  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:31.895846  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:29.738050  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:31.738832  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:31.079119  260870 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:10:31.085468  260870 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:10:31.086440  260870 api_server.go:141] control plane version: v1.20.0
	I0916 11:10:31.086462  260870 api_server.go:131] duration metric: took 3.361442023s to wait for apiserver health ...
	I0916 11:10:31.086470  260870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:10:31.086489  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:31.086546  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:31.119570  260870 cri.go:89] found id: "5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:31.119594  260870 cri.go:89] found id: ""
	I0916 11:10:31.119604  260870 logs.go:276] 1 containers: [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd]
	I0916 11:10:31.119659  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.123250  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:31.123324  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:31.156789  260870 cri.go:89] found id: "34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:31.156812  260870 cri.go:89] found id: ""
	I0916 11:10:31.156821  260870 logs.go:276] 1 containers: [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927]
	I0916 11:10:31.156877  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.160589  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:31.160666  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:31.193841  260870 cri.go:89] found id: "47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:31.193868  260870 cri.go:89] found id: ""
	I0916 11:10:31.193877  260870 logs.go:276] 1 containers: [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c]
	I0916 11:10:31.193919  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.197415  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:31.197484  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:31.230161  260870 cri.go:89] found id: "6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:31.230184  260870 cri.go:89] found id: ""
	I0916 11:10:31.230193  260870 logs.go:276] 1 containers: [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58]
	I0916 11:10:31.230253  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.233951  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:31.234023  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:31.272769  260870 cri.go:89] found id: "a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:31.272795  260870 cri.go:89] found id: ""
	I0916 11:10:31.272804  260870 logs.go:276] 1 containers: [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e]
	I0916 11:10:31.272867  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.276486  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:31.276554  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:31.312467  260870 cri.go:89] found id: "8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:31.312494  260870 cri.go:89] found id: ""
	I0916 11:10:31.312502  260870 logs.go:276] 1 containers: [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c]
	I0916 11:10:31.312560  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.316419  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:31.316486  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:31.353043  260870 cri.go:89] found id: "00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:31.353069  260870 cri.go:89] found id: ""
	I0916 11:10:31.353078  260870 logs.go:276] 1 containers: [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285]
	I0916 11:10:31.353140  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.356964  260870 logs.go:123] Gathering logs for coredns [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c] ...
	I0916 11:10:31.356998  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:31.393983  260870 logs.go:123] Gathering logs for kube-scheduler [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58] ...
	I0916 11:10:31.394010  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:31.433018  260870 logs.go:123] Gathering logs for kube-proxy [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e] ...
	I0916 11:10:31.433050  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:31.474201  260870 logs.go:123] Gathering logs for kube-controller-manager [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c] ...
	I0916 11:10:31.474228  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:31.526211  260870 logs.go:123] Gathering logs for kindnet [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285] ...
	I0916 11:10:31.526302  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:31.564909  260870 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:31.564938  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:31.624407  260870 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:31.624443  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:10:31.729709  260870 logs.go:123] Gathering logs for etcd [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927] ...
	I0916 11:10:31.729740  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:31.767848  260870 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:31.767879  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:31.825821  260870 logs.go:123] Gathering logs for container status ...
	I0916 11:10:31.825856  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:31.866717  260870 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:31.866752  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:31.888660  260870 logs.go:123] Gathering logs for kube-apiserver [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd] ...
	I0916 11:10:31.888704  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:34.446916  260870 system_pods.go:59] 8 kube-system pods found
	I0916 11:10:34.446949  260870 system_pods.go:61] "coredns-74ff55c5b-78djj" [c118a29b-0828-40a2-9653-f2d3268eb8cd] Running
	I0916 11:10:34.446957  260870 system_pods.go:61] "etcd-old-k8s-version-371039" [2ba7f794-26f1-44cb-a895-77d6e4f40f11] Running
	I0916 11:10:34.446962  260870 system_pods.go:61] "kindnet-txszz" [55ac8e8a-b323-4c4a-a7d5-3c069e89deb8] Running
	I0916 11:10:34.446967  260870 system_pods.go:61] "kube-apiserver-old-k8s-version-371039" [4964def7-7f4b-46ff-b6d0-7122a46ed405] Running
	I0916 11:10:34.446972  260870 system_pods.go:61] "kube-controller-manager-old-k8s-version-371039" [8ab8368c-496d-417a-998c-8996a091c17d] Running
	I0916 11:10:34.446977  260870 system_pods.go:61] "kube-proxy-w2kp4" [fe617d0b-b789-47b3-b18f-0f9602e3873d] Running
	I0916 11:10:34.446982  260870 system_pods.go:61] "kube-scheduler-old-k8s-version-371039" [d00cbb62-128c-4108-a3ce-c3c38c3ec762] Running
	I0916 11:10:34.446987  260870 system_pods.go:61] "storage-provisioner" [fdaf9d37-19ec-4a4e-840e-b44e7158d798] Running
	I0916 11:10:34.446996  260870 system_pods.go:74] duration metric: took 3.360519154s to wait for pod list to return data ...
	I0916 11:10:34.447006  260870 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:10:34.449463  260870 default_sa.go:45] found service account: "default"
	I0916 11:10:34.449496  260870 default_sa.go:55] duration metric: took 2.482731ms for default service account to be created ...
	I0916 11:10:34.449506  260870 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:10:34.454401  260870 system_pods.go:86] 8 kube-system pods found
	I0916 11:10:34.454432  260870 system_pods.go:89] "coredns-74ff55c5b-78djj" [c118a29b-0828-40a2-9653-f2d3268eb8cd] Running
	I0916 11:10:34.454439  260870 system_pods.go:89] "etcd-old-k8s-version-371039" [2ba7f794-26f1-44cb-a895-77d6e4f40f11] Running
	I0916 11:10:34.454445  260870 system_pods.go:89] "kindnet-txszz" [55ac8e8a-b323-4c4a-a7d5-3c069e89deb8] Running
	I0916 11:10:34.454450  260870 system_pods.go:89] "kube-apiserver-old-k8s-version-371039" [4964def7-7f4b-46ff-b6d0-7122a46ed405] Running
	I0916 11:10:34.454456  260870 system_pods.go:89] "kube-controller-manager-old-k8s-version-371039" [8ab8368c-496d-417a-998c-8996a091c17d] Running
	I0916 11:10:34.454462  260870 system_pods.go:89] "kube-proxy-w2kp4" [fe617d0b-b789-47b3-b18f-0f9602e3873d] Running
	I0916 11:10:34.454468  260870 system_pods.go:89] "kube-scheduler-old-k8s-version-371039" [d00cbb62-128c-4108-a3ce-c3c38c3ec762] Running
	I0916 11:10:34.454472  260870 system_pods.go:89] "storage-provisioner" [fdaf9d37-19ec-4a4e-840e-b44e7158d798] Running
	I0916 11:10:34.454481  260870 system_pods.go:126] duration metric: took 4.967785ms to wait for k8s-apps to be running ...
	I0916 11:10:34.454492  260870 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:10:34.454539  260870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:10:34.467176  260870 system_svc.go:56] duration metric: took 12.679137ms WaitForService to wait for kubelet
	I0916 11:10:34.467202  260870 kubeadm.go:582] duration metric: took 1m22.132748603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:10:34.467229  260870 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:10:34.470211  260870 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:10:34.470253  260870 node_conditions.go:123] node cpu capacity is 8
	I0916 11:10:34.470270  260870 node_conditions.go:105] duration metric: took 3.035491ms to run NodePressure ...
	I0916 11:10:34.470283  260870 start.go:241] waiting for startup goroutines ...
	I0916 11:10:34.470302  260870 start.go:246] waiting for cluster config update ...
	I0916 11:10:34.470319  260870 start.go:255] writing updated cluster config ...
	I0916 11:10:34.470680  260870 ssh_runner.go:195] Run: rm -f paused
	I0916 11:10:34.479027  260870 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-371039" cluster and "default" namespace by default
	E0916 11:10:34.480271  260870 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
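	
	The log-gathering loop above cycles through a fixed set of commands over SSH. As a minimal sketch, the same collection can be reproduced by hand on the node, assuming crictl and journalctl are available there; the container ID below is a placeholder, not one of the IDs from this run:
	
	  # list all CRI containers; fall back to docker if crictl is missing
	  sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	  # tail the last 400 lines of one container's logs
	  sudo /usr/bin/crictl logs --tail 400 <container-id>
	  # runtime and kubelet unit logs
	  sudo journalctl -u containerd -n 400
	  sudo journalctl -u kubelet -n 400
	  # cluster view via the kubectl binary pinned to the cluster version
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig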
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	47a31e8c5ac3a       bfe3a36ebd252       About a minute ago   Running             coredns                   0                   fdced5247c6ff       coredns-74ff55c5b-78djj
	00416422d2a43       12968670680f4       About a minute ago   Running             kindnet-cni               0                   6d12bd53c3747       kindnet-txszz
	bdf8504c18d6d       6e38f40d628db       About a minute ago   Running             storage-provisioner       0                   ee17991909f8c       storage-provisioner
	a442530bd3eed       10cc881966cfd       About a minute ago   Running             kube-proxy                0                   a4449e8cd9394       kube-proxy-w2kp4
	8e878c306812f       b9fa1895dcaa6       About a minute ago   Running             kube-controller-manager   0                   5a0a25910c3e4       kube-controller-manager-old-k8s-version-371039
	34eff18910230       0369cf4303ffd       About a minute ago   Running             etcd                      0                   cdb2422929db2       etcd-old-k8s-version-371039
	6b3b4e782188a       3138b6e3d4712       About a minute ago   Running             kube-scheduler            0                   92988ff644b2d       kube-scheduler-old-k8s-version-371039
	5e66ac9a14fe5       ca9843d3b5454       About a minute ago   Running             kube-apiserver            0                   e0135df35da04       kube-apiserver-old-k8s-version-371039
	
	
	==> containerd <==
	Sep 16 11:09:13 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:13.901056543Z" level=info msg="RunPodSandbox for name:\"storage-provisioner\" uid:\"fdaf9d37-19ec-4a4e-840e-b44e7158d798\" namespace:\"kube-system\" returns sandbox id \"ee17991909f8cf296e2235c0255b87d5be5775f909f9e5f300e9aa92d85bdd2b\""
	Sep 16 11:09:13 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:13.903511210Z" level=info msg="CreateContainer within sandbox \"ee17991909f8cf296e2235c0255b87d5be5775f909f9e5f300e9aa92d85bdd2b\" for container name:\"storage-provisioner\""
	Sep 16 11:09:13 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:13.916582954Z" level=info msg="CreateContainer within sandbox \"ee17991909f8cf296e2235c0255b87d5be5775f909f9e5f300e9aa92d85bdd2b\" for name:\"storage-provisioner\" returns container id \"bdf8504c18d6de710f0dc4d8c6f20cc1f4aa180f8b245b99545016551fa68aa3\""
	Sep 16 11:09:13 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:13.917059700Z" level=info msg="StartContainer for \"bdf8504c18d6de710f0dc4d8c6f20cc1f4aa180f8b245b99545016551fa68aa3\""
	Sep 16 11:09:13 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:13.964149291Z" level=info msg="StartContainer for \"bdf8504c18d6de710f0dc4d8c6f20cc1f4aa180f8b245b99545016551fa68aa3\" returns successfully"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.845118475Z" level=info msg="ImageCreate event name:\"docker.io/kindest/kindnetd:v20240813-c6f155d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.845810292Z" level=info msg="stop pulling image docker.io/kindest/kindnetd:v20240813-c6f155d6: active requests=0, bytes read=36804223"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.848404015Z" level=info msg="ImageCreate event name:\"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.850977569Z" level=info msg="ImageCreate event name:\"docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.851442555Z" level=info msg="Pulled image \"docker.io/kindest/kindnetd:v20240813-c6f155d6\" with image id \"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f\", repo tag \"docker.io/kindest/kindnetd:v20240813-c6f155d6\", repo digest \"docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166\", size \"36793393\" in 2.825942222s"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.851519488Z" level=info msg="PullImage \"docker.io/kindest/kindnetd:v20240813-c6f155d6\" returns image reference \"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f\""
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.853492988Z" level=info msg="CreateContainer within sandbox \"6d12bd53c3747a9cedf8034bc1c60eb2f6de1b1f45b50a747c26d2d773a72512\" for container name:\"kindnet-cni\""
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.865468393Z" level=info msg="CreateContainer within sandbox \"6d12bd53c3747a9cedf8034bc1c60eb2f6de1b1f45b50a747c26d2d773a72512\" for name:\"kindnet-cni\" returns container id \"00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285\""
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.866095221Z" level=info msg="StartContainer for \"00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285\""
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.938701932Z" level=info msg="StartContainer for \"00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285\" returns successfully"
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.897556537Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-78djj\" uid:\"c118a29b-0828-40a2-9653-f2d3268eb8cd\" namespace:\"kube-system\""
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.932620392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.932681735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.932692030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.932785803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.982343321Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-78djj\" uid:\"c118a29b-0828-40a2-9653-f2d3268eb8cd\" namespace:\"kube-system\" returns sandbox id \"fdced5247c6ffaaed8e609287c84bd12fb6a78c58d70bf4755db09f4f4712d8a\""
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.988709862Z" level=info msg="CreateContainer within sandbox \"fdced5247c6ffaaed8e609287c84bd12fb6a78c58d70bf4755db09f4f4712d8a\" for container name:\"coredns\""
	Sep 16 11:09:27 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:27.003135375Z" level=info msg="CreateContainer within sandbox \"fdced5247c6ffaaed8e609287c84bd12fb6a78c58d70bf4755db09f4f4712d8a\" for name:\"coredns\" returns container id \"47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c\""
	Sep 16 11:09:27 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:27.003815790Z" level=info msg="StartContainer for \"47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c\""
	Sep 16 11:09:27 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:27.047962732Z" level=info msg="StartContainer for \"47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c\" returns successfully"
	
	
	==> coredns [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:49206 - 19492 "HINFO IN 2568215532487827892.8058846988098566839. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014231723s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-371039
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-371039
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=old-k8s-version-371039
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_08_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:08:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-371039
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:10:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:09:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-371039
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 9635bab378394b3cbc8d38b8b7ea27c5
	  System UUID:                5a808ec9-2d43-4212-9e81-7580afba2fbc
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-74ff55c5b-78djj                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     85s
	  kube-system                 etcd-old-k8s-version-371039                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         95s
	  kube-system                 kindnet-txszz                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      85s
	  kube-system                 kube-apiserver-old-k8s-version-371039             250m (3%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-old-k8s-version-371039    200m (2%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-proxy-w2kp4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-old-k8s-version-371039             100m (1%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 111s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  111s (x5 over 111s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s (x4 over 111s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s (x3 over 111s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  111s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 96s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s                  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                  kubelet     Node old-k8s-version-371039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  95s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                86s                  kubelet     Node old-k8s-version-371039 status is now: NodeReady
	  Normal  Starting                 84s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +1.003295] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000012] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003959] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +2.011810] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000005] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +4.063628] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000008] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000030] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000007] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003992] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +8.187268] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000063] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000005] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003939] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	
	
	==> etcd [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927] <==
	2024-09-16 11:08:49.167631 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-09-16 11:08:49.167680 I | embed: listening for peers on 192.168.103.2:2380
	2024-09-16 11:08:49.167826 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/09/16 11:08:50 INFO: f23060b075c4c089 is starting a new election at term 1
	raft2024/09/16 11:08:50 INFO: f23060b075c4c089 became candidate at term 2
	raft2024/09/16 11:08:50 INFO: f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2
	raft2024/09/16 11:08:50 INFO: f23060b075c4c089 became leader at term 2
	raft2024/09/16 11:08:50 INFO: raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2
	2024-09-16 11:08:50.056510 I | etcdserver: published {Name:old-k8s-version-371039 ClientURLs:[https://192.168.103.2:2379]} to cluster 3336683c081d149d
	2024-09-16 11:08:50.056532 I | embed: ready to serve client requests
	2024-09-16 11:08:50.057110 I | embed: ready to serve client requests
	2024-09-16 11:08:50.058044 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-16 11:08:50.058490 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-16 11:08:50.068100 I | embed: serving client requests on 192.168.103.2:2379
	2024-09-16 11:08:50.070326 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-16 11:08:50.070887 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-16 11:09:11.117153 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:09:20.292389 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:09:30.292398 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:09:40.292229 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:09:50.292279 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:10:00.292399 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:10:10.292409 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:10:20.292351 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:10:30.292340 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 11:10:37 up 53 min,  0 users,  load average: 2.59, 3.31, 2.17
	Linux old-k8s-version-371039 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285] <==
	I0916 11:09:16.122984       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:09:16.123005       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:09:16.123030       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:09:16.440841       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:09:16.440859       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:09:16.440866       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:09:16.741421       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:09:16.741449       1 metrics.go:61] Registering metrics
	I0916 11:09:16.741493       1 controller.go:374] Syncing nftables rules
	I0916 11:09:26.443817       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:09:26.443889       1 main.go:299] handling current node
	I0916 11:09:36.443817       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:09:36.443873       1 main.go:299] handling current node
	I0916 11:09:46.444993       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:09:46.445026       1 main.go:299] handling current node
	I0916 11:09:56.448730       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:09:56.448775       1 main.go:299] handling current node
	I0916 11:10:06.442481       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:10:06.442528       1 main.go:299] handling current node
	I0916 11:10:16.440913       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:10:16.440948       1 main.go:299] handling current node
	I0916 11:10:26.440992       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:10:26.441030       1 main.go:299] handling current node
	I0916 11:10:36.443855       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:10:36.443895       1 main.go:299] handling current node
	
	
	==> kube-apiserver [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd] <==
	I0916 11:08:53.520192       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 11:08:53.521141       1 apf_controller.go:253] Running API Priority and Fairness config worker
	I0916 11:08:53.520210       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0916 11:08:54.353949       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0916 11:08:54.353987       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0916 11:08:54.361543       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0916 11:08:54.365717       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:08:54.365739       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0916 11:08:54.756312       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:08:54.792245       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0916 11:08:54.860469       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0916 11:08:54.861548       1 controller.go:606] quota admission added evaluator for: endpoints
	I0916 11:08:54.865453       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:08:55.892699       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0916 11:08:56.495384       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0916 11:08:56.665190       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0916 11:09:01.882424       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:09:12.124421       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0916 11:09:12.248848       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0916 11:09:28.780739       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:09:28.780781       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:09:28.780806       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:10:06.948682       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:10:06.948905       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:10:06.948926       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c] <==
	I0916 11:09:12.046184       1 shared_informer.go:247] Caches are synced for HPA 
	I0916 11:09:12.046335       1 shared_informer.go:247] Caches are synced for endpoint 
	I0916 11:09:12.046684       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0916 11:09:12.046840       1 shared_informer.go:247] Caches are synced for GC 
	I0916 11:09:12.048076       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0916 11:09:12.119917       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0916 11:09:12.130067       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-txszz"
	I0916 11:09:12.131917       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w2kp4"
	I0916 11:09:12.246156       1 shared_informer.go:247] Caches are synced for deployment 
	I0916 11:09:12.246176       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0916 11:09:12.246193       1 shared_informer.go:247] Caches are synced for disruption 
	I0916 11:09:12.246220       1 disruption.go:339] Sending events to api server.
	I0916 11:09:12.248247       1 shared_informer.go:247] Caches are synced for resource quota 
	I0916 11:09:12.250904       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0916 11:09:12.254472       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-lgf42"
	I0916 11:09:12.261635       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-78djj"
	I0916 11:09:12.425820       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0916 11:09:12.726025       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0916 11:09:12.819908       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0916 11:09:12.819938       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0916 11:09:13.096725       1 request.go:655] Throttling request took 1.049972591s, request: GET:https://192.168.103.2:8443/apis/autoscaling/v2beta1?timeout=32s
	I0916 11:09:13.339089       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0916 11:09:13.344204       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-lgf42"
	I0916 11:09:13.897597       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0916 11:09:13.897639       1 shared_informer.go:247] Caches are synced for resource quota 
	
	
	==> kube-proxy [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e] <==
	I0916 11:09:13.322536       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0916 11:09:13.322732       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0916 11:09:13.345840       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 11:09:13.345951       1 server_others.go:185] Using iptables Proxier.
	I0916 11:09:13.346284       1 server.go:650] Version: v1.20.0
	I0916 11:09:13.347687       1 config.go:315] Starting service config controller
	I0916 11:09:13.349932       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 11:09:13.347841       1 config.go:224] Starting endpoint slice config controller
	I0916 11:09:13.420415       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 11:09:13.420676       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0916 11:09:13.450370       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58] <==
	W0916 11:08:53.425390       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:08:53.425498       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:08:53.425546       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:08:53.425566       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:08:53.445666       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:08:53.445756       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:08:53.445770       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0916 11:08:53.445859       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0916 11:08:53.447314       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:08:53.447706       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:53.447999       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:08:53.448116       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:08:53.448269       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:08:53.448478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:53.448860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:08:53.448864       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:08:53.449019       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:08:53.449164       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:08:53.450105       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:08:53.450247       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:08:54.410511       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:08:54.433702       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:08:54.472291       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:08:54.592362       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0916 11:08:56.246004       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: I0916 11:09:12.321030    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-xjzl9" (UniqueName: "kubernetes.io/secret/55ac8e8a-b323-4c4a-a7d5-3c069e89deb8-kindnet-token-xjzl9") pod "kindnet-txszz" (UID: "55ac8e8a-b323-4c4a-a7d5-3c069e89deb8")
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: I0916 11:09:12.321249    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/55ac8e8a-b323-4c4a-a7d5-3c069e89deb8-xtables-lock") pod "kindnet-txszz" (UID: "55ac8e8a-b323-4c4a-a7d5-3c069e89deb8")
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: I0916 11:09:12.321298    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-k6lnv" (UniqueName: "kubernetes.io/secret/fe617d0b-b789-47b3-b18f-0f9602e3873d-kube-proxy-token-k6lnv") pod "kube-proxy-w2kp4" (UID: "fe617d0b-b789-47b3-b18f-0f9602e3873d")
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: I0916 11:09:12.421811    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/c118a29b-0828-40a2-9653-f2d3268eb8cd-config-volume") pod "coredns-74ff55c5b-78djj" (UID: "c118a29b-0828-40a2-9653-f2d3268eb8cd")
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: I0916 11:09:12.421866    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-vcrsr" (UniqueName: "kubernetes.io/secret/c118a29b-0828-40a2-9653-f2d3268eb8cd-coredns-token-vcrsr") pod "coredns-74ff55c5b-78djj" (UID: "c118a29b-0828-40a2-9653-f2d3268eb8cd")
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: I0916 11:09:12.421921    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-vcrsr" (UniqueName: "kubernetes.io/secret/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-coredns-token-vcrsr") pod "coredns-74ff55c5b-lgf42" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea")
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: I0916 11:09:12.421971    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-config-volume") pod "coredns-74ff55c5b-lgf42" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea")
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.829232    2075 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79": failed to find network info for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.829320    2075 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-lgf42_kube-system(30c8c5e2-3068-4ddf-bcfa-a514dee78dea)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79": failed to find network info for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.829339    2075 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-lgf42_kube-system(30c8c5e2-3068-4ddf-bcfa-a514dee78dea)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79": failed to find network info for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.829403    2075 pod_workers.go:191] Error syncing pod 30c8c5e2-3068-4ddf-bcfa-a514dee78dea ("coredns-74ff55c5b-lgf42_kube-system(30c8c5e2-3068-4ddf-bcfa-a514dee78dea)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-lgf42_kube-system(30c8c5e2-3068-4ddf-bcfa-a514dee78dea)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-lgf42_kube-system(30c8c5e2-3068-4ddf-bcfa-a514dee78dea)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79\": failed to find network info for sandbox \"3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79\""
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.836368    2075 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5": failed to find network info for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.836458    2075 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-78djj_kube-system(c118a29b-0828-40a2-9653-f2d3268eb8cd)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5": failed to find network info for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.836480    2075 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-78djj_kube-system(c118a29b-0828-40a2-9653-f2d3268eb8cd)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5": failed to find network info for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.836537    2075 pod_workers.go:191] Error syncing pod c118a29b-0828-40a2-9653-f2d3268eb8cd ("coredns-74ff55c5b-78djj_kube-system(c118a29b-0828-40a2-9653-f2d3268eb8cd)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-78djj_kube-system(c118a29b-0828-40a2-9653-f2d3268eb8cd)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-78djj_kube-system(c118a29b-0828-40a2-9653-f2d3268eb8cd)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5\": failed to find network info for sandbox \"6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5\""
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.521958    2075 topology_manager.go:187] [topologymanager] Topology Admit Handler
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.527538    2075 reconciler.go:196] operationExecutor.UnmountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-config-volume") pod "30c8c5e2-3068-4ddf-bcfa-a514dee78dea" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea")
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.527585    2075 reconciler.go:196] operationExecutor.UnmountVolume started for volume "coredns-token-vcrsr" (UniqueName: "kubernetes.io/secret/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-coredns-token-vcrsr") pod "30c8c5e2-3068-4ddf-bcfa-a514dee78dea" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea")
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: W0916 11:09:13.527857    2075 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/30c8c5e2-3068-4ddf-bcfa-a514dee78dea/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.528062    2075 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-config-volume" (OuterVolumeSpecName: "config-volume") pod "30c8c5e2-3068-4ddf-bcfa-a514dee78dea" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.530229    2075 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-coredns-token-vcrsr" (OuterVolumeSpecName: "coredns-token-vcrsr") pod "30c8c5e2-3068-4ddf-bcfa-a514dee78dea" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea"). InnerVolumeSpecName "coredns-token-vcrsr". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.627931    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/fdaf9d37-19ec-4a4e-840e-b44e7158d798-tmp") pod "storage-provisioner" (UID: "fdaf9d37-19ec-4a4e-840e-b44e7158d798")
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.627974    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-4gk79" (UniqueName: "kubernetes.io/secret/fdaf9d37-19ec-4a4e-840e-b44e7158d798-storage-provisioner-token-4gk79") pod "storage-provisioner" (UID: "fdaf9d37-19ec-4a4e-840e-b44e7158d798")
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.627999    2075 reconciler.go:319] Volume detached for volume "config-volume" (UniqueName: "kubernetes.io/configmap/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-config-volume") on node "old-k8s-version-371039" DevicePath ""
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.628011    2075 reconciler.go:319] Volume detached for volume "coredns-token-vcrsr" (UniqueName: "kubernetes.io/secret/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-coredns-token-vcrsr") on node "old-k8s-version-371039" DevicePath ""
	
	
	==> storage-provisioner [bdf8504c18d6de710f0dc4d8c6f20cc1f4aa180f8b245b99545016551fa68aa3] <==
	I0916 11:09:13.972762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:09:13.980679       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:09:13.980724       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:09:13.987659       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:09:13.987719       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3df43ad2-abd4-4d32-b26b-91fa0eea8673", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-371039_ba54aacd-67bb-46f0-b4ba-d01754da79ef became leader
	I0916 11:09:13.987846       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-371039_ba54aacd-67bb-46f0-b4ba-d01754da79ef!
	I0916 11:09:14.088020       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-371039_ba54aacd-67bb-46f0-b4ba-d01754da79ef!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-371039 -n old-k8s-version-371039
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-371039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context old-k8s-version-371039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (480.516µs)
helpers_test.go:263: kubectl --context old-k8s-version-371039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (3.80s)
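The repeated "fork/exec /usr/local/bin/kubectl: exec format error" above means the kernel refused to execute the kubectl binary at all, which usually points to a binary built for a different CPU architecture (or a truncated/corrupt download) rather than a cluster problem. A minimal diagnostic sketch, assuming shell access to the test host; only the path comes from the failure message, the commands are standard Linux tooling:

	# Report the binary format; an x86_64 host needs an "ELF 64-bit LSB executable, x86-64".
	file /usr/local/bin/kubectl
	# Host architecture for comparison.
	uname -m
	# A zero-byte file or an HTML error page saved in place of the binary fails the same way.
	ls -l /usr/local/bin/kubectl
	head -c 16 /usr/local/bin/kubectl | od -c | head -n 2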

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-371039 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-371039 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-371039 describe deploy/metrics-server -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (453.567µs)
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-371039 describe deploy/metrics-server -n kube-system": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
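For reference, once a working kubectl is available, the assertion above can be checked by hand with the same describe call the test makes; a hedged sketch, with the context and namespace taken from the log and the expected image from the assertion:

	kubectl --context old-k8s-version-371039 describe deploy/metrics-server -n kube-system | grep -i 'image:'
	# The test expects the image to contain: fake.domain/registry.k8s.io/echoserver:1.4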
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-371039
helpers_test.go:235: (dbg) docker inspect old-k8s-version-371039:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23",
	        "Created": "2024-09-16T11:08:26.808717426Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 262175,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:08:26.947014727Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/hosts",
	        "LogPath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23-json.log",
	        "Name": "/old-k8s-version-371039",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-371039:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-371039",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-371039",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-371039/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-371039",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-371039",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-371039",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cb344cb6ef2301f2020c4e997ddc256592ab1b779218cfb3d91a41736363c80c",
	            "SandboxKey": "/var/run/docker/netns/cb344cb6ef23",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33059"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33062"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33060"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33061"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-371039": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "617bc0338b3b0f6ed38b0b21b091e38e1d6c95398d3e053128c978435134833f",
	                    "EndpointID": "e8c6186d44336c3ccbe03bab444f7bdf6847c5d8aac6300c54bfe5f7be82eb5d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-371039",
	                        "9e01fb8ba8f9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-371039 -n old-k8s-version-371039
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-371039 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-371039 logs -n 25: (1.153890173s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-771611 sudo                                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl status crio --all                            |                           |         |         |                     |                     |
	|         | --full --no-pager                                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo                                  | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | systemctl cat crio --no-pager                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo find                             | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-771611 sudo crio                             | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC |                     |
	|         | config                                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-771611                                       | cilium-771611             | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| delete  | -p missing-upgrade-327796                              | missing-upgrade-327796    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:07 UTC |
	| start   | -p cert-expiration-021107                              | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-917705                           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048 --force-systemd                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-846070                               | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-846070                            | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| stop    | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p cert-options-840054                                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-917705                              | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-917705                           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:10 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| ssh     | cert-options-840054 ssh                                | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-840054 -- sudo                         | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-840054                                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:09 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-349453             | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-349453                  | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-371039        | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:09:48
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
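The header above defines the klog format used by every entry that follows: a severity letter (I/W/E/F), an mmdd date, a microsecond timestamp, a thread id, the source file and line, then the message. A minimal Go sketch of a parser for that format, derived only from the header's own description:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

	func main() {
		line := "I0916 11:09:48.774615  274695 out.go:345] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a klog line")
			return
		}
		// m[1]=severity m[2]=date m[3]=time m[4]=thread id m[5]=file:line m[6]=message
		fmt.Printf("severity=%s src=%s msg=%q\n", m[1], m[5], m[6])
	}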
	I0916 11:09:48.774615  274695 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:09:48.774727  274695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:09:48.774736  274695 out.go:358] Setting ErrFile to fd 2...
	I0916 11:09:48.774741  274695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:09:48.774931  274695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:09:48.775465  274695 out.go:352] Setting JSON to false
	I0916 11:09:48.776814  274695 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3133,"bootTime":1726481856,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:09:48.776915  274695 start.go:139] virtualization: kvm guest
	I0916 11:09:48.779376  274695 out.go:177] * [no-preload-349453] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:09:48.780693  274695 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:09:48.780693  274695 notify.go:220] Checking for updates...
	I0916 11:09:48.782771  274695 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:09:48.783942  274695 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:09:48.784951  274695 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:09:48.785992  274695 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:09:48.787325  274695 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:09:48.789055  274695 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:09:48.789761  274695 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:09:48.816058  274695 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:09:48.816173  274695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:09:48.872573  274695 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:84 SystemTime:2024-09-16 11:09:48.861917048 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:09:48.872710  274695 docker.go:318] overlay module found
	I0916 11:09:48.874792  274695 out.go:177] * Using the docker driver based on existing profile
	I0916 11:09:48.876381  274695 start.go:297] selected driver: docker
	I0916 11:09:48.876396  274695 start.go:901] validating driver "docker" against &{Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:09:48.876482  274695 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:09:48.877396  274695 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:09:48.937469  274695 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:true NGoroutines:84 SystemTime:2024-09-16 11:09:48.927117526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:09:48.937828  274695 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:09:48.937861  274695 cni.go:84] Creating CNI manager for ""
	I0916 11:09:48.937920  274695 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:09:48.937961  274695 start.go:340] cluster config:
	{Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
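The block above is the in-memory cluster config; the same data is persisted as JSON in the profile's config.json (saved again a few lines below). A small sketch that reads back the handful of fields this log keeps referring to; the struct is an illustrative subset with field names inferred from the dump above, not taken from minikube's source:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type kubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
	}

	type node struct {
		IP           string
		Port         int
		ControlPlane bool
	}

	type profileConfig struct {
		Name             string
		Driver           string
		Memory           int
		KubernetesConfig kubernetesConfig
		Nodes            []node
	}

	func main() {
		path := "config.json" // e.g. .../.minikube/profiles/no-preload-349453/config.json
		if len(os.Args) > 1 {
			path = os.Args[1]
		}
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		var cfg profileConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			panic(err)
		}
		fmt.Printf("%s: driver=%s runtime=%s k8s=%s nodes=%d\n",
			cfg.Name, cfg.Driver, cfg.KubernetesConfig.ContainerRuntime,
			cfg.KubernetesConfig.KubernetesVersion, len(cfg.Nodes))
	}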
	I0916 11:09:48.939958  274695 out.go:177] * Starting "no-preload-349453" primary control-plane node in "no-preload-349453" cluster
	I0916 11:09:48.941389  274695 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:09:48.942657  274695 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:09:48.943944  274695 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:09:48.944031  274695 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:09:48.944121  274695 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/config.json ...
	I0916 11:09:48.944323  274695 cache.go:107] acquiring lock: {Name:mk505f3dd823c459cfb83f2d2a39affe63c4c40e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944366  274695 cache.go:107] acquiring lock: {Name:mk612053845ede903900e7b583df14a07089be08 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944387  274695 cache.go:107] acquiring lock: {Name:mkb7cb231873e7918d3e306be4ec4f6091d91485 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944439  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 exists
	I0916 11:09:48.944446  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0916 11:09:48.944431  274695 cache.go:107] acquiring lock: {Name:mkd9c658f7569779b8a27d53e97cc0f70f55a845 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944455  274695 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1" took 69.965µs
	I0916 11:09:48.944451  274695 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10" took 91.023µs
	I0916 11:09:48.944470  274695 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0916 11:09:48.944470  274695 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10 succeeded
	I0916 11:09:48.944322  274695 cache.go:107] acquiring lock: {Name:mk0f2d9e0670c46fe9eb165a8119acf30531a2e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944483  274695 cache.go:107] acquiring lock: {Name:mk8275b1fd51b04034df297d05c3d74274567a6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944498  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0916 11:09:48.944504  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0916 11:09:48.944507  274695 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1" took 77.168µs
	I0916 11:09:48.944511  274695 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1" took 202.159µs
	I0916 11:09:48.944515  274695 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0916 11:09:48.944519  274695 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0916 11:09:48.944519  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 exists
	I0916 11:09:48.944527  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0916 11:09:48.944530  274695 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0" took 49.3µs
	I0916 11:09:48.944527  274695 cache.go:107] acquiring lock: {Name:mk0b25b3ebef8c92ed85c693112bf4f2b400d9b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944537  274695 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 221.354µs
	I0916 11:09:48.944545  274695 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0916 11:09:48.944537  274695 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0916 11:09:48.944548  274695 cache.go:107] acquiring lock: {Name:mkd90d764df5e26e345f1c24540d37a0e89a5b18 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:48.944560  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0916 11:09:48.944566  274695 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1" took 41.533µs
	I0916 11:09:48.944573  274695 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0916 11:09:48.944604  274695 cache.go:115] /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0916 11:09:48.944610  274695 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3" took 64.195µs
	I0916 11:09:48.944617  274695 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0916 11:09:48.944624  274695 cache.go:87] Successfully saved all images to host disk.
	W0916 11:09:48.969191  274695 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:09:48.969211  274695 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:09:48.969289  274695 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:09:48.969306  274695 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:09:48.969311  274695 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:09:48.969319  274695 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:09:48.969326  274695 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:09:49.025446  274695 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:09:49.025486  274695 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:09:49.025515  274695 start.go:360] acquireMachinesLock for no-preload-349453: {Name:mk8558ad422c1a28af392329b5800e6b7ec410a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:09:49.025584  274695 start.go:364] duration metric: took 51.504µs to acquireMachinesLock for "no-preload-349453"
	I0916 11:09:49.025602  274695 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:09:49.025610  274695 fix.go:54] fixHost starting: 
	I0916 11:09:49.025910  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:49.044053  274695 fix.go:112] recreateIfNeeded on no-preload-349453: state=Stopped err=<nil>
	W0916 11:09:49.044108  274695 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:09:49.045989  274695 out.go:177] * Restarting existing docker container for "no-preload-349453" ...
	I0916 11:09:45.849283  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:48.349153  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:48.687452  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:48.687946  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:09:48.687995  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:09:48.688038  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:09:48.726246  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:48.726273  254463 cri.go:89] found id: ""
	I0916 11:09:48.726285  254463 logs.go:276] 1 containers: [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd]
	I0916 11:09:48.726349  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:48.729998  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:09:48.730067  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:09:48.770403  254463 cri.go:89] found id: ""
	I0916 11:09:48.770433  254463 logs.go:276] 0 containers: []
	W0916 11:09:48.770443  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:09:48.770451  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:09:48.770511  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:09:48.807549  254463 cri.go:89] found id: ""
	I0916 11:09:48.807580  254463 logs.go:276] 0 containers: []
	W0916 11:09:48.807593  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:09:48.807601  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:09:48.807655  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:09:48.854558  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:48.854578  254463 cri.go:89] found id: ""
	I0916 11:09:48.854585  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:09:48.854629  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:48.858424  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:09:48.858482  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:09:48.893983  254463 cri.go:89] found id: ""
	I0916 11:09:48.894013  254463 logs.go:276] 0 containers: []
	W0916 11:09:48.894024  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:09:48.894032  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:09:48.894090  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:09:48.931964  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:09:48.931987  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:48.931991  254463 cri.go:89] found id: ""
	I0916 11:09:48.932000  254463 logs.go:276] 2 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9]
	I0916 11:09:48.932050  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:48.936381  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:48.940101  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:09:48.940183  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:09:48.979539  254463 cri.go:89] found id: ""
	I0916 11:09:48.979566  254463 logs.go:276] 0 containers: []
	W0916 11:09:48.979578  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:09:48.979585  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:09:48.979645  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:09:49.014921  254463 cri.go:89] found id: ""
	I0916 11:09:49.014951  254463 logs.go:276] 0 containers: []
	W0916 11:09:49.014964  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:09:49.014983  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:09:49.014998  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:09:49.056665  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:09:49.056697  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:09:49.110424  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:09:49.110453  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:09:49.178554  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:09:49.178592  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:09:49.244586  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:09:49.244612  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:09:49.244629  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:49.285235  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:09:49.285264  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:49.385095  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:09:49.385133  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:09:49.409418  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:09:49.409454  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:09:49.445392  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:09:49.445422  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
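Once the healthz probe is refused, the run above falls back to collecting evidence: journalctl for the containerd and kubelet units, crictl for per-container logs and status. A rough local equivalent of that gathering step, assuming you are on the node itself rather than going through ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := [][]string{
			{"journalctl", "-u", "containerd", "-n", "400"},
			{"journalctl", "-u", "kubelet", "-n", "400"},
			{"crictl", "ps", "-a"},
		}
		for _, c := range cmds {
			// Each command is run via sudo, mirroring the Run: lines above.
			out, err := exec.Command("sudo", c...).CombinedOutput()
			fmt.Printf("==> %v (err=%v) <==\n%s\n", c, err, out)
		}
	}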
	I0916 11:09:51.983011  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:09:49.047145  274695 cli_runner.go:164] Run: docker start no-preload-349453
	I0916 11:09:49.345476  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:49.369895  274695 kic.go:430] container "no-preload-349453" state is running.
	I0916 11:09:49.370255  274695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:09:49.390076  274695 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/config.json ...
	I0916 11:09:49.390324  274695 machine.go:93] provisionDockerMachine start ...
	I0916 11:09:49.390405  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:49.409420  274695 main.go:141] libmachine: Using SSH client type: native
	I0916 11:09:49.409726  274695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0916 11:09:49.409751  274695 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:09:49.410474  274695 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33840->127.0.0.1:33068: read: connection reset by peer
	I0916 11:09:52.543274  274695 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-349453
	
	I0916 11:09:52.543304  274695 ubuntu.go:169] provisioning hostname "no-preload-349453"
	I0916 11:09:52.543357  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:52.561425  274695 main.go:141] libmachine: Using SSH client type: native
	I0916 11:09:52.561639  274695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0916 11:09:52.561659  274695 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-349453 && echo "no-preload-349453" | sudo tee /etc/hostname
	I0916 11:09:52.702731  274695 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-349453
	
	I0916 11:09:52.702807  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:52.720926  274695 main.go:141] libmachine: Using SSH client type: native
	I0916 11:09:52.721115  274695 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I0916 11:09:52.721133  274695 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-349453' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-349453/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-349453' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:09:52.852007  274695 main.go:141] libmachine: SSH cmd err, output: <nil>: 
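Provisioning here is plain SSH against the host port Docker mapped to the container's 22/tcp (33068 in this run), authenticated with the machine's id_rsa key. A sketch of the same round trip using golang.org/x/crypto/ssh; the port and key path are the ones from this log and will differ per run:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33068", cfg)
		if err != nil {
			panic(err) // the first dial above failed the same way until sshd came up
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("hostname: %s", out)
	}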
	I0916 11:09:52.852046  274695 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:09:52.852067  274695 ubuntu.go:177] setting up certificates
	I0916 11:09:52.852079  274695 provision.go:84] configureAuth start
	I0916 11:09:52.852141  274695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:09:52.869844  274695 provision.go:143] copyHostCerts
	I0916 11:09:52.869915  274695 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:09:52.869927  274695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:09:52.869991  274695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:09:52.870107  274695 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:09:52.870119  274695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:09:52.870146  274695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:09:52.870211  274695 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:09:52.870219  274695 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:09:52.870248  274695 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:09:52.870308  274695 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.no-preload-349453 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-349453]
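The server certificate minted here can be inspected the same way the cert-options test does in the command table earlier, e.g.:

	openssl x509 -text -noout -in /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem

The Subject Alternative Name list in that output should match the san=[...] set logged above; the path is the ServerCertPath from the auth options a few lines up.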
	I0916 11:09:53.005905  274695 provision.go:177] copyRemoteCerts
	I0916 11:09:53.005958  274695 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:09:53.005995  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:53.023517  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:53.120443  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:09:53.142805  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0916 11:09:53.166225  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:09:53.188868  274695 provision.go:87] duration metric: took 336.770749ms to configureAuth
	I0916 11:09:53.188907  274695 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:09:53.189114  274695 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:09:53.189127  274695 machine.go:96] duration metric: took 3.798788146s to provisionDockerMachine
	I0916 11:09:53.189135  274695 start.go:293] postStartSetup for "no-preload-349453" (driver="docker")
	I0916 11:09:53.189145  274695 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:09:53.189195  274695 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:09:53.189233  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:53.206547  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:53.304863  274695 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:09:53.308040  274695 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:09:53.308080  274695 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:09:53.308092  274695 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:09:53.308101  274695 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:09:53.308115  274695 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:09:53.308178  274695 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:09:53.308280  274695 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:09:53.308405  274695 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:09:53.316395  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:09:53.338548  274695 start.go:296] duration metric: took 149.394766ms for postStartSetup
	I0916 11:09:53.338650  274695 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:09:53.338694  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:53.357422  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:53.452877  274695 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:09:53.457360  274695 fix.go:56] duration metric: took 4.43174375s for fixHost
	I0916 11:09:53.457384  274695 start.go:83] releasing machines lock for "no-preload-349453", held for 4.431788357s
	I0916 11:09:53.457450  274695 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-349453
	I0916 11:09:53.475348  274695 ssh_runner.go:195] Run: cat /version.json
	I0916 11:09:53.475400  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:53.475417  274695 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:09:53.475476  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:53.493461  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:53.494009  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:53.583400  274695 ssh_runner.go:195] Run: systemctl --version
	I0916 11:09:53.664600  274695 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:09:53.669030  274695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:09:53.686361  274695 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:09:53.686447  274695 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:09:53.694804  274695 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
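The find/sed pass above only guarantees that any existing loopback CNI config carries a "name" field and cniVersion 1.0.0, since some CNI consumers reject nameless configs. A loopback conf satisfying the patched form would look like this (illustrative, not a dump from this host):

	{
	  "cniVersion": "1.0.0",
	  "name": "loopback",
	  "type": "loopback"
	}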
	I0916 11:09:53.694831  274695 start.go:495] detecting cgroup driver to use...
	I0916 11:09:53.694862  274695 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:09:53.694907  274695 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:09:53.707615  274695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:09:53.719106  274695 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:09:53.719198  274695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:09:53.731307  274695 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:09:53.741993  274695 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:09:53.822112  274695 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:09:53.892551  274695 docker.go:233] disabling docker service ...
	I0916 11:09:53.892640  274695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:09:53.904867  274695 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:09:53.915797  274695 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:09:53.997972  274695 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:09:54.077247  274695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:09:54.088231  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:09:54.104123  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:09:54.113650  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:09:54.123084  274695 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:09:54.123150  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:09:54.132500  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:09:54.141637  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:09:54.150420  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:09:54.159442  274695 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:09:54.169162  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:09:54.178447  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:09:54.187883  274695 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 11:09:54.197946  274695 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:09:54.205872  274695 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:09:54.213572  274695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:54.289888  274695 ssh_runner.go:195] Run: sudo systemctl restart containerd
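The sed pass rewrites /etc/containerd/config.toml in place rather than templating a fresh file. After it, the CRI plugin section should carry values along these lines (a reconstructed fragment assuming containerd 1.7's stock config layout, not a dump from this host):

	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  enable_unprivileged_ports = true
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false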
	I0916 11:09:54.379344  274695 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:09:54.379416  274695 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:09:54.383200  274695 start.go:563] Will wait 60s for crictl version
	I0916 11:09:54.383251  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:09:54.386338  274695 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:09:54.418191  274695 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:09:54.418249  274695 ssh_runner.go:195] Run: containerd --version
	I0916 11:09:54.441777  274695 ssh_runner.go:195] Run: containerd --version
	I0916 11:09:54.467613  274695 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:09:50.847763  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:52.849026  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:54.849276  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:54.468958  274695 cli_runner.go:164] Run: docker network inspect no-preload-349453 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:09:54.485947  274695 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0916 11:09:54.489631  274695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:09:54.500473  274695 kubeadm.go:883] updating cluster {Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenk
ins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:09:54.500611  274695 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:09:54.500665  274695 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:09:54.532760  274695 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:09:54.532781  274695 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:09:54.532790  274695 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.31.1 containerd true true} ...
	I0916 11:09:54.532898  274695 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-349453 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
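The empty ExecStart= line in the kubelet drop-in above is the standard systemd idiom for an override: it clears the ExecStart inherited from the base kubelet.service so that the following line can substitute minikube's own binary path and flags.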
	I0916 11:09:54.532956  274695 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:09:54.565820  274695 cni.go:84] Creating CNI manager for ""
	I0916 11:09:54.565853  274695 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:09:54.565868  274695 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:09:54.565894  274695 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-349453 NodeName:no-preload-349453 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:09:54.566029  274695 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-349453"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:09:54.566101  274695 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:09:54.574595  274695 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:09:54.574664  274695 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:09:54.583330  274695 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0916 11:09:54.600902  274695 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:09:54.617863  274695 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
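
The kubeadm config assembled above is rendered from Go templates inside minikube and shipped to the node as /var/tmp/minikube/kubeadm.yaml.new by the scp step just logged. A minimal sketch of that rendering step, with a hypothetical parameter struct and a heavily trimmed template rather than minikube's real ones:

    package main

    import (
        "os"
        "text/template"
    )

    // Hypothetical parameter struct; minikube's real one carries many
    // more fields (component extra args, feature gates, etc.).
    type kubeadmParams struct {
        AdvertiseAddress string
        APIServerPort    int
        CRISocket        string
        NodeName         string
        PodSubnet        string
    }

    // Trimmed-down template covering only a few of the fields that
    // appear in the generated config above.
    var tmpl = template.Must(template.New("kubeadm").Parse(`apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    networking:
      podSubnet: "{{.PodSubnet}}"
    `))

    func main() {
        p := kubeadmParams{
            AdvertiseAddress: "192.168.94.2",
            APIServerPort:    8443,
            CRISocket:        "unix:///run/containerd/containerd.sock",
            NodeName:         "no-preload-349453",
            PodSubnet:        "10.244.0.0/16",
        }
        if err := tmpl.Execute(os.Stdout, p); err != nil {
            os.Exit(1)
        }
    }
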
	I0916 11:09:54.635791  274695 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:09:54.639161  274695 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:09:54.649784  274695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:54.733077  274695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:09:54.746471  274695 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453 for IP: 192.168.94.2
	I0916 11:09:54.746493  274695 certs.go:194] generating shared ca certs ...
	I0916 11:09:54.746508  274695 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:54.746655  274695 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:09:54.746704  274695 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:09:54.746714  274695 certs.go:256] generating profile certs ...
	I0916 11:09:54.746801  274695 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.key
	I0916 11:09:54.746889  274695 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key.85f7849d
	I0916 11:09:54.746961  274695 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key
	I0916 11:09:54.747124  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:09:54.747163  274695 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:09:54.747174  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:09:54.747209  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:09:54.747242  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:09:54.747268  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:09:54.747337  274695 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:09:54.748125  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:09:54.773659  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:09:54.798587  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:09:54.838039  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:09:54.866265  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0916 11:09:54.922112  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:09:54.949631  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:09:54.974851  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:09:54.998140  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:09:55.021759  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:09:55.047817  274695 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:09:55.072006  274695 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:09:55.090041  274695 ssh_runner.go:195] Run: openssl version
	I0916 11:09:55.095459  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:09:55.104870  274695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:09:55.108622  274695 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:09:55.108679  274695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:09:55.115169  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:09:55.124341  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:09:55.134032  274695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:09:55.137540  274695 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:09:55.137603  274695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:09:55.144314  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 11:09:55.153020  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:09:55.162713  274695 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:09:55.166242  274695 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:09:55.166294  274695 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:09:55.172872  274695 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:09:55.181466  274695 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:09:55.184964  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:09:55.191210  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:09:55.197521  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:09:55.204060  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:09:55.210455  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:09:55.217147  274695 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
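
Each `openssl x509 -noout -in <cert> -checkend 86400` run above exits non-zero when the certificate expires within the next 86400 seconds (24 hours); a failing check is what would force certificate regeneration. A rough Go equivalent of one check, as an illustrative sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // Rough equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
    // exit 1 if the certificate expires within the next 24 hours.
    func main() {
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block in", os.Args[1])
            os.Exit(2)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(2)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("Certificate will expire")
            os.Exit(1)
        }
        fmt.Println("Certificate will not expire")
    }
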
	I0916 11:09:55.224151  274695 kubeadm.go:392] StartCluster: {Name:no-preload-349453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-349453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:09:55.224234  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:09:55.224285  274695 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:09:55.259697  274695 cri.go:89] found id: "30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d"
	I0916 11:09:55.259720  274695 cri.go:89] found id: "b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a"
	I0916 11:09:55.259775  274695 cri.go:89] found id: "6fe6dedc217401e73f5795b6cd5cfdd5d65a4314df29b9b5be8775dc661cbffa"
	I0916 11:09:55.259796  274695 cri.go:89] found id: "49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04"
	I0916 11:09:55.259804  274695 cri.go:89] found id: "a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969"
	I0916 11:09:55.259808  274695 cri.go:89] found id: "5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69"
	I0916 11:09:55.259812  274695 cri.go:89] found id: "0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d"
	I0916 11:09:55.259816  274695 cri.go:89] found id: "5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817"
	I0916 11:09:55.259820  274695 cri.go:89] found id: ""
	I0916 11:09:55.259881  274695 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0916 11:09:55.273392  274695 cri.go:116] JSON = null
	W0916 11:09:55.273443  274695 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
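
The unpause warning above comes from reconciling two views of the runtime: `crictl ps` found 8 kube-system containers while `runc --root /run/containerd/runc/k8s.io list -f json` printed the literal `null`, i.e. no runc-visible containers. A sketch of parsing that runc output in Go (the state fields shown are a subset assumed from runc's JSON listing; the log above runs it under sudo):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // Subset of the state objects `runc list -f json` emits; the real
    // output carries more fields (pid, bundle, created, ...).
    type runcState struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    func pausedIDs(root string) ([]string, error) {
        out, err := exec.Command("runc", "--root", root, "list", "-f", "json").Output()
        if err != nil {
            return nil, err
        }
        // With no containers, runc prints the literal "null", which
        // unmarshals to a nil slice -- the "JSON = null" case logged above.
        var states []runcState
        if err := json.Unmarshal(out, &states); err != nil {
            return nil, err
        }
        var ids []string
        for _, s := range states {
            if s.Status == "paused" {
                ids = append(ids, s.ID)
            }
        }
        return ids, nil
    }

    func main() {
        ids, err := pausedIDs("/run/containerd/runc/k8s.io")
        fmt.Println(ids, err)
    }
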
	I0916 11:09:55.273502  274695 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:09:55.282466  274695 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 11:09:55.282486  274695 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 11:09:55.282539  274695 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 11:09:55.291007  274695 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:09:55.291787  274695 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-349453" does not appear in /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:09:55.292250  274695 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3687/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-349453" cluster setting kubeconfig missing "no-preload-349453" context setting]
	I0916 11:09:55.292937  274695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:55.294364  274695 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 11:09:55.303573  274695 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.94.2
	I0916 11:09:55.303619  274695 kubeadm.go:597] duration metric: took 21.126232ms to restartPrimaryControlPlane
	I0916 11:09:55.303631  274695 kubeadm.go:394] duration metric: took 79.507692ms to StartCluster
	I0916 11:09:55.303656  274695 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:55.303778  274695 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:09:55.304930  274695 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:09:55.305137  274695 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:09:55.305211  274695 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:09:55.305322  274695 addons.go:69] Setting storage-provisioner=true in profile "no-preload-349453"
	I0916 11:09:55.305336  274695 addons.go:69] Setting default-storageclass=true in profile "no-preload-349453"
	I0916 11:09:55.305342  274695 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:09:55.305350  274695 addons.go:69] Setting dashboard=true in profile "no-preload-349453"
	I0916 11:09:55.305372  274695 addons.go:234] Setting addon dashboard=true in "no-preload-349453"
	W0916 11:09:55.305382  274695 addons.go:243] addon dashboard should already be in state true
	I0916 11:09:55.305353  274695 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-349453"
	I0916 11:09:55.305401  274695 addons.go:69] Setting metrics-server=true in profile "no-preload-349453"
	I0916 11:09:55.305426  274695 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:55.305428  274695 addons.go:234] Setting addon metrics-server=true in "no-preload-349453"
	W0916 11:09:55.305438  274695 addons.go:243] addon metrics-server should already be in state true
	I0916 11:09:55.305485  274695 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:55.305354  274695 addons.go:234] Setting addon storage-provisioner=true in "no-preload-349453"
	W0916 11:09:55.305501  274695 addons.go:243] addon storage-provisioner should already be in state true
	I0916 11:09:55.305532  274695 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:55.305781  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:55.305926  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:55.305931  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:55.306010  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:55.307090  274695 out.go:177] * Verifying Kubernetes components...
	I0916 11:09:55.308706  274695 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:09:55.330513  274695 addons.go:234] Setting addon default-storageclass=true in "no-preload-349453"
	W0916 11:09:55.330534  274695 addons.go:243] addon default-storageclass should already be in state true
	I0916 11:09:55.330561  274695 host.go:66] Checking if "no-preload-349453" exists ...
	I0916 11:09:55.330918  274695 cli_runner.go:164] Run: docker container inspect no-preload-349453 --format={{.State.Status}}
	I0916 11:09:55.331334  274695 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0916 11:09:55.331338  274695 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0916 11:09:55.333189  274695 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:09:55.333205  274695 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 11:09:55.333269  274695 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 11:09:55.333352  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:55.334937  274695 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0916 11:09:56.983399  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0916 11:09:56.983465  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:09:56.983527  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:09:57.016275  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:09:57.016298  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:09:57.016303  254463 cri.go:89] found id: ""
	I0916 11:09:57.016312  254463 logs.go:276] 2 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd]
	I0916 11:09:57.016363  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:57.019731  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:57.022928  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:09:57.022987  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:09:57.066015  254463 cri.go:89] found id: ""
	I0916 11:09:57.066043  254463 logs.go:276] 0 containers: []
	W0916 11:09:57.066055  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:09:57.066062  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:09:57.066116  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:09:57.100119  254463 cri.go:89] found id: ""
	I0916 11:09:57.100143  254463 logs.go:276] 0 containers: []
	W0916 11:09:57.100154  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:09:57.100161  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:09:57.100218  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:09:57.142278  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:09:57.142305  254463 cri.go:89] found id: ""
	I0916 11:09:57.142314  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:09:57.142369  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:57.146012  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:09:57.146093  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:09:57.180703  254463 cri.go:89] found id: ""
	I0916 11:09:57.180730  254463 logs.go:276] 0 containers: []
	W0916 11:09:57.180741  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:09:57.180749  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:09:57.180804  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:09:57.213555  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:09:57.213576  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:57.213579  254463 cri.go:89] found id: ""
	I0916 11:09:57.213586  254463 logs.go:276] 2 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9]
	I0916 11:09:57.213630  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:57.216893  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:09:57.220067  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:09:57.220128  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:09:57.261058  254463 cri.go:89] found id: ""
	I0916 11:09:57.261086  254463 logs.go:276] 0 containers: []
	W0916 11:09:57.261098  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:09:57.261105  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:09:57.261163  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:09:57.296886  254463 cri.go:89] found id: ""
	I0916 11:09:57.296913  254463 logs.go:276] 0 containers: []
	W0916 11:09:57.296921  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:09:57.296936  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:09:57.296951  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:09:57.333205  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:09:57.333242  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:09:57.372259  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:09:57.372300  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:09:57.413680  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:09:57.413713  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:09:57.486222  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:09:57.486264  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:09:55.335030  274695 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:55.335047  274695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:09:55.335088  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:55.336314  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0916 11:09:55.336347  274695 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0916 11:09:55.336405  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:55.357420  274695 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:55.357447  274695 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:09:55.357506  274695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-349453
	I0916 11:09:55.366387  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:55.367347  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:55.369352  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:55.388562  274695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/no-preload-349453/id_rsa Username:docker}
	I0916 11:09:55.520679  274695 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:09:55.542923  274695 node_ready.go:35] waiting up to 6m0s for node "no-preload-349453" to be "Ready" ...
	I0916 11:09:55.621336  274695 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 11:09:55.621435  274695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0916 11:09:55.630720  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0916 11:09:55.630753  274695 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0916 11:09:55.631139  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:55.647847  274695 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 11:09:55.647928  274695 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 11:09:55.728814  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:55.734435  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0916 11:09:55.734467  274695 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0916 11:09:55.830027  274695 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:09:55.830070  274695 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 11:09:55.837018  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0916 11:09:55.837046  274695 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0916 11:09:55.852569  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:09:55.933470  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0916 11:09:55.933499  274695 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0916 11:09:56.036131  274695 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.036176  274695 retry.go:31] will retry after 327.547508ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.040318  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0916 11:09:56.040402  274695 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0916 11:09:56.044576  274695 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.044610  274695 retry.go:31] will retry after 125.943539ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.133467  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0916 11:09:56.133501  274695 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0916 11:09:56.171627  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:09:56.229693  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0916 11:09:56.229778  274695 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0916 11:09:56.324009  274695 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.324059  274695 retry.go:31] will retry after 179.364541ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:09:56.329914  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0916 11:09:56.329944  274695 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0916 11:09:56.364514  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:09:56.424109  274695 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:09:56.424146  274695 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0916 11:09:56.503542  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:09:56.523382  274695 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
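
The `retry.go:31] will retry after ...` lines above show the addon applies backing off while the apiserver is still refusing connections on localhost:8443; once it comes up, the retried `kubectl apply --force` calls succeed. A generic sketch of that retry-with-backoff pattern, not minikube's actual retry implementation:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs `kubectl apply` with jittered exponential
    // backoff until it succeeds or the deadline passes.
    func applyWithRetry(manifest string, deadline time.Duration) error {
        start := time.Now()
        backoff := 100 * time.Millisecond
        for {
            out, err := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
            }
            // Sleep the base backoff plus up to 100% jitter, then double it.
            time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff))))
            backoff *= 2
        }
    }

    func main() {
        fmt.Println(applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", time.Minute))
    }
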
	I0916 11:09:56.849591  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:59.349876  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:09:59.129375  274695 node_ready.go:49] node "no-preload-349453" has status "Ready":"True"
	I0916 11:09:59.129482  274695 node_ready.go:38] duration metric: took 3.586509916s for node "no-preload-349453" to be "Ready" ...
	I0916 11:09:59.129511  274695 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:09:59.146545  274695 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.236575  274695 pod_ready.go:93] pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.236620  274695 pod_ready.go:82] duration metric: took 90.034166ms for pod "coredns-7c65d6cfc9-9zbwk" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.236641  274695 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.244737  274695 pod_ready.go:93] pod "etcd-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.244763  274695 pod_ready.go:82] duration metric: took 8.113529ms for pod "etcd-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.244779  274695 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.326680  274695 pod_ready.go:93] pod "kube-apiserver-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.326711  274695 pod_ready.go:82] duration metric: took 81.923811ms for pod "kube-apiserver-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.326724  274695 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.331650  274695 pod_ready.go:93] pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.331673  274695 pod_ready.go:82] duration metric: took 4.941014ms for pod "kube-controller-manager-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.331686  274695 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n7m28" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.337818  274695 pod_ready.go:93] pod "kube-proxy-n7m28" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.337846  274695 pod_ready.go:82] duration metric: took 6.152494ms for pod "kube-proxy-n7m28" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.337858  274695 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.423673  274695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (3.251989478s)
	I0916 11:09:59.732619  274695 pod_ready.go:93] pod "kube-scheduler-no-preload-349453" in "kube-system" namespace has status "Ready":"True"
	I0916 11:09:59.732647  274695 pod_ready.go:82] duration metric: took 394.781316ms for pod "kube-scheduler-no-preload-349453" in "kube-system" namespace to be "Ready" ...
	I0916 11:09:59.732659  274695 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:01.340867  274695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.976318642s)
	I0916 11:10:01.340987  274695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.837413291s)
	I0916 11:10:01.341014  274695 addons.go:475] Verifying addon metrics-server=true in "no-preload-349453"
	I0916 11:10:01.537050  274695 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.013618079s)
	I0916 11:10:01.538736  274695 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-349453 addons enable metrics-server
	
	I0916 11:10:01.540676  274695 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0916 11:10:01.542213  274695 addons.go:510] duration metric: took 6.237009332s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
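
The node_ready/pod_ready lines in this stretch poll the API server until the node and each system-critical pod report the Ready condition. A minimal client-go sketch of the same wait, with illustrative names and timings:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls every 2s until the pod reports Ready=True or the
    // timeout expires -- the same shape as the pod_ready.go waits above.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep polling through transient apiserver errors
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitPodReady(cs, "kube-system", "etcd-no-preload-349453", 6*time.Minute))
    }
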
	I0916 11:10:01.741388  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:01.350460  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:03.848603  260870 pod_ready.go:103] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:04.348851  260870 pod_ready.go:93] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:10:04.348878  260870 pod_ready.go:82] duration metric: took 34.506013242s for pod "etcd-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:04.348893  260870 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:04.353032  260870 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:10:04.353051  260870 pod_ready.go:82] duration metric: took 4.150771ms for pod "kube-apiserver-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:04.353060  260870 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:07.550714  254463 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.06442499s)
	W0916 11:10:07.550762  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0916 11:10:07.550771  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:10:07.550784  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:10:07.596479  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:07.596522  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:07.640033  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:07.640079  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:07.665505  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:07.665549  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:04.238268  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:06.239302  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:08.243920  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:06.359545  260870 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:08.859689  260870 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:07.711821  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:07.711862  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:10.283999  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:12.114848  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:42500->192.168.76.2:8443: read: connection reset by peer
	I0916 11:10:12.114981  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:12.115056  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:12.152497  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:12.152533  254463 cri.go:89] found id: "f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:10:12.152540  254463 cri.go:89] found id: ""
	I0916 11:10:12.152548  254463 logs.go:276] 2 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd]
	I0916 11:10:12.152602  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:12.156067  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:12.159264  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:12.159327  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:12.190731  254463 cri.go:89] found id: ""
	I0916 11:10:12.190754  254463 logs.go:276] 0 containers: []
	W0916 11:10:12.190765  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:12.190772  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:12.190827  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:12.222220  254463 cri.go:89] found id: ""
	I0916 11:10:12.222242  254463 logs.go:276] 0 containers: []
	W0916 11:10:12.222250  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:12.222256  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:12.222298  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:12.255730  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:12.255822  254463 cri.go:89] found id: ""
	I0916 11:10:12.255829  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:12.255876  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:12.259472  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:12.259542  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:12.291555  254463 cri.go:89] found id: ""
	I0916 11:10:12.291579  254463 logs.go:276] 0 containers: []
	W0916 11:10:12.291589  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:12.291596  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:12.291651  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:12.324287  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:12.324321  254463 cri.go:89] found id: "b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:10:12.324328  254463 cri.go:89] found id: ""
	I0916 11:10:12.324337  254463 logs.go:276] 2 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9]
	I0916 11:10:12.324392  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:12.327731  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:12.330880  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:12.330944  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:12.375367  254463 cri.go:89] found id: ""
	I0916 11:10:12.375395  254463 logs.go:276] 0 containers: []
	W0916 11:10:12.375407  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:12.375415  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:12.375478  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:12.415075  254463 cri.go:89] found id: ""
	I0916 11:10:12.415095  254463 logs.go:276] 0 containers: []
	W0916 11:10:12.415103  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:12.415115  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:12.415126  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:12.458886  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:12.458930  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:12.496500  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:12.496530  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:12.567297  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:12.567333  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:12.624232  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
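
This "connection refused" has the same root cause as the healthz failures below: the bundled kubectl targets the kubeconfig's server (localhost:8443), and nothing is listening while the apiserver is down, so the describe-nodes step exits non-zero. The collector logs the failure and keeps gathering rather than aborting. A rough sketch of that tolerate-and-continue pattern, assuming a kubectl on PATH (not minikube's actual ssh_runner/logs.go):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// A refused connection surfaces as a non-zero exit; record it and
		// move on so the rest of the log collection still runs.
		fmt.Printf("failed describe nodes: %v\noutput:\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}
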
	I0916 11:10:12.624255  254463 logs.go:123] Gathering logs for kube-apiserver [f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd] ...
	I0916 11:10:12.624270  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f6591f6937fcba8bc801ac029590e516d80684fdff2982606336f8d24a81fddd"
	I0916 11:10:12.660261  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:12.660295  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:10.738756  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:13.238098  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:11.360052  260870 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:13.859124  260870 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:14.365001  260870 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:10:14.365026  260870 pod_ready.go:82] duration metric: took 10.011960541s for pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:14.365036  260870 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w2kp4" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:14.369505  260870 pod_ready.go:93] pod "kube-proxy-w2kp4" in "kube-system" namespace has status "Ready":"True"
	I0916 11:10:14.369528  260870 pod_ready.go:82] duration metric: took 4.48629ms for pod "kube-proxy-w2kp4" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:14.369536  260870 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
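
These pod_ready lines come from a poll loop: each pod is re-fetched on an interval, "Ready":"False" is logged while it waits, and the elapsed time is recorded once the Ready condition flips to True, within the 6m0s budget. A minimal client-go sketch of such a loop, assuming a kubeconfig is available; the namespace and pod name are taken from the log above, and this is illustrative, not minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	start := time.Now()
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-w2kp4", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			return podReady(pod), nil
		})
	if err != nil {
		fmt.Printf("pod never became Ready: %v\n", err)
		return
	}
	fmt.Printf("duration metric: took %s for pod to be Ready\n", time.Since(start))
}
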
	I0916 11:10:12.718187  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:12.718226  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:12.753095  254463 logs.go:123] Gathering logs for kube-controller-manager [b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9] ...
	I0916 11:10:12.753121  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b1f0f23e5e416a17a03ef608310e82a9f0da6e49d37cf267d431760bd9f7bcd9"
	I0916 11:10:12.786230  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:12.786255  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:12.828221  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:12.828253  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:15.348814  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:15.349283  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
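
The healthz check is a plain HTTPS GET against the apiserver endpoint; while the container is down, the TCP dial fails with "connection refused", the server is reported as stopped, and another round of the log gathering above is triggered. A standalone probe in the same spirit (InsecureSkipVerify stands in for minikube's CA handling and is an assumption of this sketch):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		// e.g. dial tcp 192.168.76.2:8443: connect: connection refused
		fmt.Printf("stopped: %v\n", err)
		return
	}
	defer resp.Body.Close()
	fmt.Printf("healthz: %s\n", resp.Status)
}
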
	I0916 11:10:15.349344  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:15.349400  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:15.384332  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:15.384353  254463 cri.go:89] found id: ""
	I0916 11:10:15.384362  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:15.384418  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:15.387695  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:15.387808  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:15.420398  254463 cri.go:89] found id: ""
	I0916 11:10:15.420425  254463 logs.go:276] 0 containers: []
	W0916 11:10:15.420438  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:15.420447  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:15.420496  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:15.454005  254463 cri.go:89] found id: ""
	I0916 11:10:15.454035  254463 logs.go:276] 0 containers: []
	W0916 11:10:15.454049  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:15.454057  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:15.454111  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:15.488040  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:15.488065  254463 cri.go:89] found id: ""
	I0916 11:10:15.488072  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:15.488121  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:15.491658  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:15.491730  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:15.526243  254463 cri.go:89] found id: ""
	I0916 11:10:15.526276  254463 logs.go:276] 0 containers: []
	W0916 11:10:15.526289  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:15.526297  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:15.526356  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:15.563058  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:15.563078  254463 cri.go:89] found id: ""
	I0916 11:10:15.563085  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:15.563129  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:15.566707  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:15.566775  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:15.600693  254463 cri.go:89] found id: ""
	I0916 11:10:15.600719  254463 logs.go:276] 0 containers: []
	W0916 11:10:15.600728  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:15.600734  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:15.600786  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:15.634854  254463 cri.go:89] found id: ""
	I0916 11:10:15.634878  254463 logs.go:276] 0 containers: []
	W0916 11:10:15.634886  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:15.634894  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:15.634912  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:15.656900  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:15.656944  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:15.716708  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:15.716734  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:15.716750  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:15.756043  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:15.756072  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:15.815128  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:15.815167  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:15.851703  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:15.851729  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:15.896779  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:15.896822  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:15.933761  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:15.933790  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:15.738612  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:18.238493  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:16.375521  260870 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:18.876191  260870 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:18.508158  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:18.508652  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:18.508704  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:18.508768  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:18.541635  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:18.541657  254463 cri.go:89] found id: ""
	I0916 11:10:18.541666  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:18.541721  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:18.545157  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:18.545220  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:18.577944  254463 cri.go:89] found id: ""
	I0916 11:10:18.577967  254463 logs.go:276] 0 containers: []
	W0916 11:10:18.577978  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:18.577985  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:18.578041  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:18.610307  254463 cri.go:89] found id: ""
	I0916 11:10:18.610334  254463 logs.go:276] 0 containers: []
	W0916 11:10:18.610345  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:18.610353  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:18.610410  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:18.643372  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:18.643398  254463 cri.go:89] found id: ""
	I0916 11:10:18.643409  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:18.643473  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:18.647339  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:18.647416  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:18.683669  254463 cri.go:89] found id: ""
	I0916 11:10:18.683696  254463 logs.go:276] 0 containers: []
	W0916 11:10:18.683708  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:18.683716  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:18.683813  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:18.717547  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:18.717569  254463 cri.go:89] found id: ""
	I0916 11:10:18.717578  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:18.717635  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:18.721314  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:18.721386  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:18.756024  254463 cri.go:89] found id: ""
	I0916 11:10:18.756055  254463 logs.go:276] 0 containers: []
	W0916 11:10:18.756065  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:18.756071  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:18.756120  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:18.789325  254463 cri.go:89] found id: ""
	I0916 11:10:18.789350  254463 logs.go:276] 0 containers: []
	W0916 11:10:18.789359  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:18.789370  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:18.789384  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:18.860240  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:18.860279  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:18.882796  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:18.882826  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:18.941553  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:18.941577  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:18.941593  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:18.979008  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:18.979039  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:19.039131  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:19.039170  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:19.075898  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:19.075929  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:19.119292  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:19.119332  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:21.657984  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:21.658407  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:21.658456  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:21.658511  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:21.692596  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:21.692621  254463 cri.go:89] found id: ""
	I0916 11:10:21.692630  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:21.692685  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:21.696206  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:21.696264  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:21.729888  254463 cri.go:89] found id: ""
	I0916 11:10:21.729910  254463 logs.go:276] 0 containers: []
	W0916 11:10:21.729918  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:21.729937  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:21.729981  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:21.763929  254463 cri.go:89] found id: ""
	I0916 11:10:21.763962  254463 logs.go:276] 0 containers: []
	W0916 11:10:21.763974  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:21.763981  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:21.764047  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:21.799235  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:21.799256  254463 cri.go:89] found id: ""
	I0916 11:10:21.799264  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:21.799318  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:21.802780  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:21.802855  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:21.839854  254463 cri.go:89] found id: ""
	I0916 11:10:21.839880  254463 logs.go:276] 0 containers: []
	W0916 11:10:21.839888  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:21.839894  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:21.839953  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:21.873977  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:21.874003  254463 cri.go:89] found id: ""
	I0916 11:10:21.874013  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:21.874068  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:21.878108  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:21.878178  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:21.911328  254463 cri.go:89] found id: ""
	I0916 11:10:21.911357  254463 logs.go:276] 0 containers: []
	W0916 11:10:21.911366  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:21.911372  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:21.911425  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:21.946393  254463 cri.go:89] found id: ""
	I0916 11:10:21.946423  254463 logs.go:276] 0 containers: []
	W0916 11:10:21.946435  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:21.946446  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:21.946461  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:21.990397  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:21.990439  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:22.027571  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:22.027598  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:22.101651  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:22.101686  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:22.122234  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:22.122269  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:22.180802  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:22.180833  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:22.180848  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:22.216487  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:22.216515  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:22.279504  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:22.279551  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:20.240044  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:22.738619  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:21.375180  260870 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:23.375730  260870 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:24.375698  260870 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:10:24.375722  260870 pod_ready.go:82] duration metric: took 10.006179243s for pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:10:24.375730  260870 pod_ready.go:39] duration metric: took 1m11.05010529s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
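
The "duration metric" figures above are simple wall-clock measurements taken around each wait. A tiny sketch of the pattern, using a hypothetical helper rather than minikube's code:

package main

import (
	"fmt"
	"time"
)

// timed runs fn and logs how long it took, mirroring the
// "duration metric: took ..." lines in the log above.
func timed(label string, fn func() error) error {
	start := time.Now()
	err := fn()
	fmt.Printf("duration metric: took %s for %s\n", time.Since(start), label)
	return err
}

func main() {
	_ = timed("extra waiting", func() error {
		time.Sleep(150 * time.Millisecond) // stand-in for a real readiness wait
		return nil
	})
}
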
	I0916 11:10:24.375761  260870 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:10:24.375792  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:24.375850  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:24.410054  260870 cri.go:89] found id: "5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:24.410074  260870 cri.go:89] found id: ""
	I0916 11:10:24.410084  260870 logs.go:276] 1 containers: [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd]
	I0916 11:10:24.410144  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.413762  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:24.413822  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:24.446581  260870 cri.go:89] found id: "34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:24.446609  260870 cri.go:89] found id: ""
	I0916 11:10:24.446619  260870 logs.go:276] 1 containers: [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927]
	I0916 11:10:24.446679  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.450048  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:24.450108  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:24.483854  260870 cri.go:89] found id: "47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:24.483876  260870 cri.go:89] found id: ""
	I0916 11:10:24.483883  260870 logs.go:276] 1 containers: [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c]
	I0916 11:10:24.483937  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.487518  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:24.487579  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:24.520237  260870 cri.go:89] found id: "6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:24.520257  260870 cri.go:89] found id: ""
	I0916 11:10:24.520265  260870 logs.go:276] 1 containers: [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58]
	I0916 11:10:24.520325  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.523786  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:24.523857  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:24.556906  260870 cri.go:89] found id: "a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:24.556931  260870 cri.go:89] found id: ""
	I0916 11:10:24.556938  260870 logs.go:276] 1 containers: [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e]
	I0916 11:10:24.556982  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.560497  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:24.560571  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:24.593490  260870 cri.go:89] found id: "8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:24.593510  260870 cri.go:89] found id: ""
	I0916 11:10:24.593517  260870 logs.go:276] 1 containers: [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c]
	I0916 11:10:24.593558  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.597013  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:24.597068  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:24.629128  260870 cri.go:89] found id: "00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:24.629149  260870 cri.go:89] found id: ""
	I0916 11:10:24.629155  260870 logs.go:276] 1 containers: [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285]
	I0916 11:10:24.629201  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.632565  260870 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:24.632588  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:24.653890  260870 logs.go:123] Gathering logs for etcd [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927] ...
	I0916 11:10:24.653925  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:24.689516  260870 logs.go:123] Gathering logs for coredns [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c] ...
	I0916 11:10:24.689544  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:24.723583  260870 logs.go:123] Gathering logs for kube-scheduler [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58] ...
	I0916 11:10:24.723610  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:24.761101  260870 logs.go:123] Gathering logs for container status ...
	I0916 11:10:24.761135  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:24.798289  260870 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:24.798316  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:24.858329  260870 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:24.858366  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:24.924002  260870 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:24.924042  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:10:25.040339  260870 logs.go:123] Gathering logs for kube-apiserver [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd] ...
	I0916 11:10:25.040371  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:25.092353  260870 logs.go:123] Gathering logs for kube-proxy [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e] ...
	I0916 11:10:25.092390  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:25.129881  260870 logs.go:123] Gathering logs for kube-controller-manager [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c] ...
	I0916 11:10:25.129913  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:25.176606  260870 logs.go:123] Gathering logs for kindnet [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285] ...
	I0916 11:10:25.176643  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:24.814913  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:24.815331  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:24.815406  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:24.815468  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:24.851174  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:24.851217  254463 cri.go:89] found id: ""
	I0916 11:10:24.851226  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:24.851290  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.855458  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:24.855530  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:24.894464  254463 cri.go:89] found id: ""
	I0916 11:10:24.894484  254463 logs.go:276] 0 containers: []
	W0916 11:10:24.894491  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:24.894498  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:24.894540  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:24.932639  254463 cri.go:89] found id: ""
	I0916 11:10:24.932678  254463 logs.go:276] 0 containers: []
	W0916 11:10:24.932686  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:24.932691  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:24.932736  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:24.969712  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:24.969795  254463 cri.go:89] found id: ""
	I0916 11:10:24.969807  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:24.969872  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:24.973484  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:24.973557  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:25.014852  254463 cri.go:89] found id: ""
	I0916 11:10:25.014926  254463 logs.go:276] 0 containers: []
	W0916 11:10:25.014938  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:25.014944  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:25.015001  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:25.051032  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:25.051057  254463 cri.go:89] found id: ""
	I0916 11:10:25.051067  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:25.051128  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:25.054719  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:25.054797  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:25.093048  254463 cri.go:89] found id: ""
	I0916 11:10:25.093074  254463 logs.go:276] 0 containers: []
	W0916 11:10:25.093084  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:25.093092  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:25.093144  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:25.131337  254463 cri.go:89] found id: ""
	I0916 11:10:25.131374  254463 logs.go:276] 0 containers: []
	W0916 11:10:25.131387  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:25.131405  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:25.131426  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:25.195758  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:25.195798  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:25.232113  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:25.232141  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:25.277260  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:25.277300  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:25.314477  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:25.314503  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:25.391725  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:25.391784  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:25.413044  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:25.413079  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:25.474224  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:25.474246  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:25.474258  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:25.238565  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:27.737733  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:27.712923  260870 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:10:27.724989  260870 api_server.go:72] duration metric: took 1m15.390531014s to wait for apiserver process to appear ...
	I0916 11:10:27.725015  260870 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:10:27.725048  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:27.725090  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:27.758530  260870 cri.go:89] found id: "5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:27.758558  260870 cri.go:89] found id: ""
	I0916 11:10:27.758567  260870 logs.go:276] 1 containers: [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd]
	I0916 11:10:27.758613  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.762091  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:27.762160  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:27.794955  260870 cri.go:89] found id: "34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:27.794975  260870 cri.go:89] found id: ""
	I0916 11:10:27.794982  260870 logs.go:276] 1 containers: [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927]
	I0916 11:10:27.795027  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.798651  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:27.798729  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:27.832743  260870 cri.go:89] found id: "47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:27.832764  260870 cri.go:89] found id: ""
	I0916 11:10:27.832772  260870 logs.go:276] 1 containers: [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c]
	I0916 11:10:27.832815  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.836354  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:27.836425  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:27.869614  260870 cri.go:89] found id: "6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:27.869635  260870 cri.go:89] found id: ""
	I0916 11:10:27.869644  260870 logs.go:276] 1 containers: [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58]
	I0916 11:10:27.869703  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.873305  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:27.873379  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:27.906796  260870 cri.go:89] found id: "a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:27.906818  260870 cri.go:89] found id: ""
	I0916 11:10:27.906827  260870 logs.go:276] 1 containers: [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e]
	I0916 11:10:27.906881  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.910467  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:27.910528  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:27.947119  260870 cri.go:89] found id: "8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:27.947147  260870 cri.go:89] found id: ""
	I0916 11:10:27.947156  260870 logs.go:276] 1 containers: [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c]
	I0916 11:10:27.947216  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.951709  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:27.951800  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:27.984740  260870 cri.go:89] found id: "00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:27.984762  260870 cri.go:89] found id: ""
	I0916 11:10:27.984771  260870 logs.go:276] 1 containers: [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285]
	I0916 11:10:27.984830  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:27.988397  260870 logs.go:123] Gathering logs for container status ...
	I0916 11:10:27.988425  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:28.025884  260870 logs.go:123] Gathering logs for kube-apiserver [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd] ...
	I0916 11:10:28.025924  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:28.077609  260870 logs.go:123] Gathering logs for coredns [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c] ...
	I0916 11:10:28.077647  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:28.116119  260870 logs.go:123] Gathering logs for kube-scheduler [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58] ...
	I0916 11:10:28.116146  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:28.154443  260870 logs.go:123] Gathering logs for kube-proxy [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e] ...
	I0916 11:10:28.154480  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:28.192048  260870 logs.go:123] Gathering logs for kindnet [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285] ...
	I0916 11:10:28.192076  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:28.230393  260870 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:28.230435  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:28.293330  260870 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:28.293363  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:28.355035  260870 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:28.355073  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:28.376404  260870 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:28.376441  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:10:28.485749  260870 logs.go:123] Gathering logs for etcd [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927] ...
	I0916 11:10:28.485786  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:28.526060  260870 logs.go:123] Gathering logs for kube-controller-manager [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c] ...
	I0916 11:10:28.526099  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:28.013215  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:28.013660  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:28.013720  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:28.013775  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:28.052332  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:28.052358  254463 cri.go:89] found id: ""
	I0916 11:10:28.052366  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:28.052414  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:28.056409  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:28.056477  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:28.091702  254463 cri.go:89] found id: ""
	I0916 11:10:28.091731  254463 logs.go:276] 0 containers: []
	W0916 11:10:28.091784  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:28.091792  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:28.091851  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:28.126028  254463 cri.go:89] found id: ""
	I0916 11:10:28.126052  254463 logs.go:276] 0 containers: []
	W0916 11:10:28.126063  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:28.126076  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:28.126133  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:28.163202  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:28.163249  254463 cri.go:89] found id: ""
	I0916 11:10:28.163257  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:28.163299  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:28.166659  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:28.166722  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:28.201886  254463 cri.go:89] found id: ""
	I0916 11:10:28.201910  254463 logs.go:276] 0 containers: []
	W0916 11:10:28.201919  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:28.201926  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:28.201984  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:28.246518  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:28.246616  254463 cri.go:89] found id: ""
	I0916 11:10:28.246637  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:28.246722  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:28.252289  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:28.252395  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:28.286426  254463 cri.go:89] found id: ""
	I0916 11:10:28.286449  254463 logs.go:276] 0 containers: []
	W0916 11:10:28.286457  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:28.286463  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:28.286519  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:28.321297  254463 cri.go:89] found id: ""
	I0916 11:10:28.321321  254463 logs.go:276] 0 containers: []
	W0916 11:10:28.321328  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:28.321336  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:28.321348  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:28.403374  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:28.403422  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:28.426647  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:28.426684  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:28.496928  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:28.496947  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:28.496957  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:28.538666  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:28.538694  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:28.607309  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:28.607350  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:28.641335  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:28.641365  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:28.687488  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:28.687527  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:31.224849  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:31.225350  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:31.225417  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:31.225483  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:31.262633  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:31.262659  254463 cri.go:89] found id: ""
	I0916 11:10:31.262668  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:31.262726  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.266801  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:31.266884  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:31.302134  254463 cri.go:89] found id: ""
	I0916 11:10:31.302165  254463 logs.go:276] 0 containers: []
	W0916 11:10:31.302176  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:31.302183  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:31.302239  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:31.338759  254463 cri.go:89] found id: ""
	I0916 11:10:31.338781  254463 logs.go:276] 0 containers: []
	W0916 11:10:31.338789  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:31.338796  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:31.338874  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:31.375371  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:31.375400  254463 cri.go:89] found id: ""
	I0916 11:10:31.375410  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:31.375462  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.379039  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:31.379109  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:31.414260  254463 cri.go:89] found id: ""
	I0916 11:10:31.414282  254463 logs.go:276] 0 containers: []
	W0916 11:10:31.414290  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:31.414295  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:31.414353  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:31.450723  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:31.450747  254463 cri.go:89] found id: ""
	I0916 11:10:31.450760  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:31.450816  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.454785  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:31.454864  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:31.497353  254463 cri.go:89] found id: ""
	I0916 11:10:31.497385  254463 logs.go:276] 0 containers: []
	W0916 11:10:31.497398  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:31.497409  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:31.497458  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:31.532978  254463 cri.go:89] found id: ""
	I0916 11:10:31.533013  254463 logs.go:276] 0 containers: []
	W0916 11:10:31.533022  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:31.533031  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:31.533042  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:31.613145  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:31.613191  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:31.634722  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:31.634750  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:31.702216  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:31.702243  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:31.702257  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:31.744782  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:31.744814  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:31.811622  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:31.811663  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:31.849645  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:31.849684  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:31.895810  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:31.895846  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:29.738050  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:31.738832  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:31.079119  260870 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:10:31.085468  260870 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:10:31.086440  260870 api_server.go:141] control plane version: v1.20.0
	I0916 11:10:31.086462  260870 api_server.go:131] duration metric: took 3.361442023s to wait for apiserver health ...
	I0916 11:10:31.086470  260870 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:10:31.086489  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:31.086546  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:31.119570  260870 cri.go:89] found id: "5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:31.119594  260870 cri.go:89] found id: ""
	I0916 11:10:31.119604  260870 logs.go:276] 1 containers: [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd]
	I0916 11:10:31.119659  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.123250  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:31.123324  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:31.156789  260870 cri.go:89] found id: "34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:31.156812  260870 cri.go:89] found id: ""
	I0916 11:10:31.156821  260870 logs.go:276] 1 containers: [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927]
	I0916 11:10:31.156877  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.160589  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:31.160666  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:31.193841  260870 cri.go:89] found id: "47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:31.193868  260870 cri.go:89] found id: ""
	I0916 11:10:31.193877  260870 logs.go:276] 1 containers: [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c]
	I0916 11:10:31.193919  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.197415  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:31.197484  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:31.230161  260870 cri.go:89] found id: "6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:31.230184  260870 cri.go:89] found id: ""
	I0916 11:10:31.230193  260870 logs.go:276] 1 containers: [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58]
	I0916 11:10:31.230253  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.233951  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:31.234023  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:31.272769  260870 cri.go:89] found id: "a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:31.272795  260870 cri.go:89] found id: ""
	I0916 11:10:31.272804  260870 logs.go:276] 1 containers: [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e]
	I0916 11:10:31.272867  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.276486  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:31.276554  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:31.312467  260870 cri.go:89] found id: "8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:31.312494  260870 cri.go:89] found id: ""
	I0916 11:10:31.312502  260870 logs.go:276] 1 containers: [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c]
	I0916 11:10:31.312560  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.316419  260870 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:31.316486  260870 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:31.353043  260870 cri.go:89] found id: "00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:31.353069  260870 cri.go:89] found id: ""
	I0916 11:10:31.353078  260870 logs.go:276] 1 containers: [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285]
	I0916 11:10:31.353140  260870 ssh_runner.go:195] Run: which crictl
	I0916 11:10:31.356964  260870 logs.go:123] Gathering logs for coredns [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c] ...
	I0916 11:10:31.356998  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c"
	I0916 11:10:31.393983  260870 logs.go:123] Gathering logs for kube-scheduler [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58] ...
	I0916 11:10:31.394010  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58"
	I0916 11:10:31.433018  260870 logs.go:123] Gathering logs for kube-proxy [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e] ...
	I0916 11:10:31.433050  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e"
	I0916 11:10:31.474201  260870 logs.go:123] Gathering logs for kube-controller-manager [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c] ...
	I0916 11:10:31.474228  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c"
	I0916 11:10:31.526211  260870 logs.go:123] Gathering logs for kindnet [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285] ...
	I0916 11:10:31.526302  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285"
	I0916 11:10:31.564909  260870 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:31.564938  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:31.624407  260870 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:31.624443  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:10:31.729709  260870 logs.go:123] Gathering logs for etcd [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927] ...
	I0916 11:10:31.729740  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927"
	I0916 11:10:31.767848  260870 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:31.767879  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:31.825821  260870 logs.go:123] Gathering logs for container status ...
	I0916 11:10:31.825856  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:31.866717  260870 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:31.866752  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:31.888660  260870 logs.go:123] Gathering logs for kube-apiserver [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd] ...
	I0916 11:10:31.888704  260870 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd"
	I0916 11:10:34.446916  260870 system_pods.go:59] 8 kube-system pods found
	I0916 11:10:34.446949  260870 system_pods.go:61] "coredns-74ff55c5b-78djj" [c118a29b-0828-40a2-9653-f2d3268eb8cd] Running
	I0916 11:10:34.446957  260870 system_pods.go:61] "etcd-old-k8s-version-371039" [2ba7f794-26f1-44cb-a895-77d6e4f40f11] Running
	I0916 11:10:34.446962  260870 system_pods.go:61] "kindnet-txszz" [55ac8e8a-b323-4c4a-a7d5-3c069e89deb8] Running
	I0916 11:10:34.446967  260870 system_pods.go:61] "kube-apiserver-old-k8s-version-371039" [4964def7-7f4b-46ff-b6d0-7122a46ed405] Running
	I0916 11:10:34.446972  260870 system_pods.go:61] "kube-controller-manager-old-k8s-version-371039" [8ab8368c-496d-417a-998c-8996a091c17d] Running
	I0916 11:10:34.446977  260870 system_pods.go:61] "kube-proxy-w2kp4" [fe617d0b-b789-47b3-b18f-0f9602e3873d] Running
	I0916 11:10:34.446982  260870 system_pods.go:61] "kube-scheduler-old-k8s-version-371039" [d00cbb62-128c-4108-a3ce-c3c38c3ec762] Running
	I0916 11:10:34.446987  260870 system_pods.go:61] "storage-provisioner" [fdaf9d37-19ec-4a4e-840e-b44e7158d798] Running
	I0916 11:10:34.446996  260870 system_pods.go:74] duration metric: took 3.360519154s to wait for pod list to return data ...
	I0916 11:10:34.447006  260870 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:10:34.449463  260870 default_sa.go:45] found service account: "default"
	I0916 11:10:34.449496  260870 default_sa.go:55] duration metric: took 2.482731ms for default service account to be created ...
	I0916 11:10:34.449506  260870 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:10:34.454401  260870 system_pods.go:86] 8 kube-system pods found
	I0916 11:10:34.454432  260870 system_pods.go:89] "coredns-74ff55c5b-78djj" [c118a29b-0828-40a2-9653-f2d3268eb8cd] Running
	I0916 11:10:34.454439  260870 system_pods.go:89] "etcd-old-k8s-version-371039" [2ba7f794-26f1-44cb-a895-77d6e4f40f11] Running
	I0916 11:10:34.454445  260870 system_pods.go:89] "kindnet-txszz" [55ac8e8a-b323-4c4a-a7d5-3c069e89deb8] Running
	I0916 11:10:34.454450  260870 system_pods.go:89] "kube-apiserver-old-k8s-version-371039" [4964def7-7f4b-46ff-b6d0-7122a46ed405] Running
	I0916 11:10:34.454456  260870 system_pods.go:89] "kube-controller-manager-old-k8s-version-371039" [8ab8368c-496d-417a-998c-8996a091c17d] Running
	I0916 11:10:34.454462  260870 system_pods.go:89] "kube-proxy-w2kp4" [fe617d0b-b789-47b3-b18f-0f9602e3873d] Running
	I0916 11:10:34.454468  260870 system_pods.go:89] "kube-scheduler-old-k8s-version-371039" [d00cbb62-128c-4108-a3ce-c3c38c3ec762] Running
	I0916 11:10:34.454472  260870 system_pods.go:89] "storage-provisioner" [fdaf9d37-19ec-4a4e-840e-b44e7158d798] Running
	I0916 11:10:34.454481  260870 system_pods.go:126] duration metric: took 4.967785ms to wait for k8s-apps to be running ...
	I0916 11:10:34.454492  260870 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:10:34.454539  260870 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:10:34.467176  260870 system_svc.go:56] duration metric: took 12.679137ms WaitForService to wait for kubelet
	I0916 11:10:34.467202  260870 kubeadm.go:582] duration metric: took 1m22.132748603s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:10:34.467229  260870 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:10:34.470211  260870 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:10:34.470253  260870 node_conditions.go:123] node cpu capacity is 8
	I0916 11:10:34.470270  260870 node_conditions.go:105] duration metric: took 3.035491ms to run NodePressure ...
	I0916 11:10:34.470283  260870 start.go:241] waiting for startup goroutines ...
	I0916 11:10:34.470302  260870 start.go:246] waiting for cluster config update ...
	I0916 11:10:34.470319  260870 start.go:255] writing updated cluster config ...
	I0916 11:10:34.470680  260870 ssh_runner.go:195] Run: rm -f paused
	I0916 11:10:34.479027  260870 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-371039" cluster and "default" namespace by default
	E0916 11:10:34.480271  260870 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
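	The `exec format error` on the line above is the kernel's ENOEXEC: the `/usr/local/bin/kubectl` binary on the test host cannot be executed as a native binary, which in practice almost always means an architecture mismatch or a truncated download. Note the cluster itself came up fine ("Done!" above); only the final `kubectl info` convenience call failed. A quick triage on the host, as a hedged sketch (the download URL follows upstream's dl.k8s.io pattern, and the version shown is an assumption for illustration, not taken from this run):

	    # Check what kind of file the failing binary actually is.
	    file /usr/local/bin/kubectl
	    # Compare with the host architecture (x86_64 on this runner).
	    uname -m
	    # If they disagree, re-fetch kubectl for the right platform; the
	    # dl.k8s.io URL pattern is upstream's, the version here is assumed.
	    curl -LO "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	    sudo install -m 0755 kubectl /usr/local/bin/kubectl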
	I0916 11:10:34.440916  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:34.441544  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:34.441610  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:34.441672  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:34.478509  254463 cri.go:89] found id: "5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:34.478532  254463 cri.go:89] found id: ""
	I0916 11:10:34.478541  254463 logs.go:276] 1 containers: [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7]
	I0916 11:10:34.478603  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:34.482354  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:10:34.482417  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:10:34.521403  254463 cri.go:89] found id: ""
	I0916 11:10:34.521431  254463 logs.go:276] 0 containers: []
	W0916 11:10:34.521444  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:10:34.521456  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:10:34.521518  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:10:34.558530  254463 cri.go:89] found id: ""
	I0916 11:10:34.558560  254463 logs.go:276] 0 containers: []
	W0916 11:10:34.558575  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:10:34.558583  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:10:34.558637  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:10:34.598969  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:34.598989  254463 cri.go:89] found id: ""
	I0916 11:10:34.598995  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:10:34.599042  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:34.602583  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:10:34.602647  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:10:34.637484  254463 cri.go:89] found id: ""
	I0916 11:10:34.637512  254463 logs.go:276] 0 containers: []
	W0916 11:10:34.637523  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:10:34.637529  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:10:34.637587  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:10:34.672356  254463 cri.go:89] found id: "cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:34.672384  254463 cri.go:89] found id: ""
	I0916 11:10:34.672393  254463 logs.go:276] 1 containers: [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff]
	I0916 11:10:34.672436  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:10:34.675855  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:10:34.675915  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:10:34.709004  254463 cri.go:89] found id: ""
	I0916 11:10:34.709035  254463 logs.go:276] 0 containers: []
	W0916 11:10:34.709049  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:10:34.709058  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:10:34.709128  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:10:34.745401  254463 cri.go:89] found id: ""
	I0916 11:10:34.745422  254463 logs.go:276] 0 containers: []
	W0916 11:10:34.745430  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:10:34.745438  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:10:34.745449  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:10:34.771335  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:10:34.771386  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:10:34.839246  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:10:34.839267  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:10:34.839281  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:10:34.886565  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:10:34.886604  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:10:34.970001  254463 logs.go:123] Gathering logs for kube-controller-manager [cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff] ...
	I0916 11:10:34.970039  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cf43038e86d9ed58bcbc71646fb21edc50940739cd63cb52a8f8e4de087c70ff"
	I0916 11:10:35.007095  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:10:35.007170  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:10:35.056752  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:10:35.056791  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:10:35.096344  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:10:35.096376  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:10:37.678082  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:10:37.678498  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:10:37.678557  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:10:37.678615  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:10:34.237795  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:36.238446  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:10:38.239378  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
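	The roughly three-second cadence above is minikube's apiserver health-wait: probe `https://192.168.76.2:8443/healthz`, and on `connection refused` sweep the node for evidence with crictl and journalctl before probing again. Condensed into a standalone sketch (the command set is lifted from the `Run:` lines above; this is an illustrative reconstruction, not the actual logs.go implementation):

	    #!/bin/bash
	    # One diagnostic pass, mirroring the gather cycle in the log above.
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet storage-provisioner; do
	      ids=$(sudo crictl ps -a --quiet --name="$name")
	      if [ -z "$ids" ]; then
	        echo "No container was found matching \"$name\""
	        continue
	      fi
	      for id in $ids; do
	        # Tail the last 400 lines of each matching container.
	        sudo /usr/bin/crictl logs --tail 400 "$id"
	      done
	    done
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u containerd -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400

	In this run only kube-apiserver, kube-scheduler, and kube-controller-manager have containers, which is why etcd, coredns, kube-proxy, kindnet, and storage-provisioner all report empty lists on every pass.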
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	47a31e8c5ac3a       bfe3a36ebd252       About a minute ago   Running             coredns                   0                   fdced5247c6ff       coredns-74ff55c5b-78djj
	00416422d2a43       12968670680f4       About a minute ago   Running             kindnet-cni               0                   6d12bd53c3747       kindnet-txszz
	bdf8504c18d6d       6e38f40d628db       About a minute ago   Running             storage-provisioner       0                   ee17991909f8c       storage-provisioner
	a442530bd3eed       10cc881966cfd       About a minute ago   Running             kube-proxy                0                   a4449e8cd9394       kube-proxy-w2kp4
	8e878c306812f       b9fa1895dcaa6       About a minute ago   Running             kube-controller-manager   0                   5a0a25910c3e4       kube-controller-manager-old-k8s-version-371039
	34eff18910230       0369cf4303ffd       About a minute ago   Running             etcd                      0                   cdb2422929db2       etcd-old-k8s-version-371039
	6b3b4e782188a       3138b6e3d4712       About a minute ago   Running             kube-scheduler            0                   92988ff644b2d       kube-scheduler-old-k8s-version-371039
	5e66ac9a14fe5       ca9843d3b5454       About a minute ago   Running             kube-apiserver            0                   e0135df35da04       kube-apiserver-old-k8s-version-371039
	
	
	==> containerd <==
	Sep 16 11:09:13 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:13.901056543Z" level=info msg="RunPodSandbox for name:\"storage-provisioner\" uid:\"fdaf9d37-19ec-4a4e-840e-b44e7158d798\" namespace:\"kube-system\" returns sandbox id \"ee17991909f8cf296e2235c0255b87d5be5775f909f9e5f300e9aa92d85bdd2b\""
	Sep 16 11:09:13 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:13.903511210Z" level=info msg="CreateContainer within sandbox \"ee17991909f8cf296e2235c0255b87d5be5775f909f9e5f300e9aa92d85bdd2b\" for container name:\"storage-provisioner\""
	Sep 16 11:09:13 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:13.916582954Z" level=info msg="CreateContainer within sandbox \"ee17991909f8cf296e2235c0255b87d5be5775f909f9e5f300e9aa92d85bdd2b\" for name:\"storage-provisioner\" returns container id \"bdf8504c18d6de710f0dc4d8c6f20cc1f4aa180f8b245b99545016551fa68aa3\""
	Sep 16 11:09:13 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:13.917059700Z" level=info msg="StartContainer for \"bdf8504c18d6de710f0dc4d8c6f20cc1f4aa180f8b245b99545016551fa68aa3\""
	Sep 16 11:09:13 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:13.964149291Z" level=info msg="StartContainer for \"bdf8504c18d6de710f0dc4d8c6f20cc1f4aa180f8b245b99545016551fa68aa3\" returns successfully"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.845118475Z" level=info msg="ImageCreate event name:\"docker.io/kindest/kindnetd:v20240813-c6f155d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.845810292Z" level=info msg="stop pulling image docker.io/kindest/kindnetd:v20240813-c6f155d6: active requests=0, bytes read=36804223"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.848404015Z" level=info msg="ImageCreate event name:\"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.850977569Z" level=info msg="ImageCreate event name:\"docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.851442555Z" level=info msg="Pulled image \"docker.io/kindest/kindnetd:v20240813-c6f155d6\" with image id \"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f\", repo tag \"docker.io/kindest/kindnetd:v20240813-c6f155d6\", repo digest \"docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166\", size \"36793393\" in 2.825942222s"
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.851519488Z" level=info msg="PullImage \"docker.io/kindest/kindnetd:v20240813-c6f155d6\" returns image reference \"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f\""
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.853492988Z" level=info msg="CreateContainer within sandbox \"6d12bd53c3747a9cedf8034bc1c60eb2f6de1b1f45b50a747c26d2d773a72512\" for container name:\"kindnet-cni\""
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.865468393Z" level=info msg="CreateContainer within sandbox \"6d12bd53c3747a9cedf8034bc1c60eb2f6de1b1f45b50a747c26d2d773a72512\" for name:\"kindnet-cni\" returns container id \"00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285\""
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.866095221Z" level=info msg="StartContainer for \"00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285\""
	Sep 16 11:09:15 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:15.938701932Z" level=info msg="StartContainer for \"00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285\" returns successfully"
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.897556537Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-78djj\" uid:\"c118a29b-0828-40a2-9653-f2d3268eb8cd\" namespace:\"kube-system\""
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.932620392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.932681735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.932692030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.932785803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.982343321Z" level=info msg="RunPodSandbox for name:\"coredns-74ff55c5b-78djj\" uid:\"c118a29b-0828-40a2-9653-f2d3268eb8cd\" namespace:\"kube-system\" returns sandbox id \"fdced5247c6ffaaed8e609287c84bd12fb6a78c58d70bf4755db09f4f4712d8a\""
	Sep 16 11:09:26 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:26.988709862Z" level=info msg="CreateContainer within sandbox \"fdced5247c6ffaaed8e609287c84bd12fb6a78c58d70bf4755db09f4f4712d8a\" for container name:\"coredns\""
	Sep 16 11:09:27 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:27.003135375Z" level=info msg="CreateContainer within sandbox \"fdced5247c6ffaaed8e609287c84bd12fb6a78c58d70bf4755db09f4f4712d8a\" for name:\"coredns\" returns container id \"47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c\""
	Sep 16 11:09:27 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:27.003815790Z" level=info msg="StartContainer for \"47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c\""
	Sep 16 11:09:27 old-k8s-version-371039 containerd[947]: time="2024-09-16T11:09:27.047962732Z" level=info msg="StartContainer for \"47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c\" returns successfully"
	
	
	==> coredns [47a31e8c5ac3adecd570d46f1ef928aa2f24348944589a8c12365453854c591c] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:49206 - 19492 "HINFO IN 2568215532487827892.8058846988098566839. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014231723s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-371039
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-371039
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=old-k8s-version-371039
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_08_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:08:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-371039
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:10:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:09:31 +0000   Mon, 16 Sep 2024 11:09:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-371039
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 9635bab378394b3cbc8d38b8b7ea27c5
	  System UUID:                5a808ec9-2d43-4212-9e81-7580afba2fbc
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-74ff55c5b-78djj                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     88s
	  kube-system                 etcd-old-k8s-version-371039                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         98s
	  kube-system                 kindnet-txszz                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      88s
	  kube-system                 kube-apiserver-old-k8s-version-371039             250m (3%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-controller-manager-old-k8s-version-371039    200m (2%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 kube-proxy-w2kp4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-old-k8s-version-371039             100m (1%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 metrics-server-9975d5f86-4f2jl                    100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  Starting                 114s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s (x5 over 114s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x4 over 114s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x3 over 114s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  114s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 99s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s                  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s                  kubelet     Node old-k8s-version-371039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s                  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  98s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                89s                  kubelet     Node old-k8s-version-371039 status is now: NodeReady
	  Normal  Starting                 87s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +1.003295] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000012] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003959] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +2.011810] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000005] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +4.063628] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000008] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000030] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000007] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003992] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +8.187268] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000063] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000005] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003939] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
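	The repeated `martian source` entries above are the kernel's source-address validation logging: packets claiming 10.96.0.1 (the in-cluster `kubernetes` service IP) arrive on the Docker bridge `br-2cc59d4eff80`, where reverse-path filtering considers that source impossible. This is a common cosmetic symptom of running clusters inside Docker networks and does not by itself indicate the test failure. Whether it is logged at all is a sysctl, shown here as a sketch assuming root on the CI host:

	    # 1 = log martian packets, 0 = stay quiet; this controls logging
	    # only, not how the packets are handled.
	    sysctl net.ipv4.conf.all.log_martians
	    sudo sysctl -w net.ipv4.conf.all.log_martians=0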
	
	
	==> etcd [34eff189102300711c29178081cd84ee3bb91bbadd42e71611f1e6cad730b927] <==
	2024-09-16 11:08:49.167631 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-09-16 11:08:49.167680 I | embed: listening for peers on 192.168.103.2:2380
	2024-09-16 11:08:49.167826 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/09/16 11:08:50 INFO: f23060b075c4c089 is starting a new election at term 1
	raft2024/09/16 11:08:50 INFO: f23060b075c4c089 became candidate at term 2
	raft2024/09/16 11:08:50 INFO: f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2
	raft2024/09/16 11:08:50 INFO: f23060b075c4c089 became leader at term 2
	raft2024/09/16 11:08:50 INFO: raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2
	2024-09-16 11:08:50.056510 I | etcdserver: published {Name:old-k8s-version-371039 ClientURLs:[https://192.168.103.2:2379]} to cluster 3336683c081d149d
	2024-09-16 11:08:50.056532 I | embed: ready to serve client requests
	2024-09-16 11:08:50.057110 I | embed: ready to serve client requests
	2024-09-16 11:08:50.058044 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-16 11:08:50.058490 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-16 11:08:50.068100 I | embed: serving client requests on 192.168.103.2:2379
	2024-09-16 11:08:50.070326 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-16 11:08:50.070887 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-16 11:09:11.117153 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:09:20.292389 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:09:30.292398 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:09:40.292229 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:09:50.292279 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:10:00.292399 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:10:10.292409 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:10:20.292351 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:10:30.292340 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 11:10:40 up 53 min,  0 users,  load average: 2.54, 3.28, 2.17
	Linux old-k8s-version-371039 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [00416422d2a43475f6b30da1c845ba6ab8d639e58af4363f7c236f57ffbe1285] <==
	I0916 11:09:16.122984       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:09:16.123005       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:09:16.123030       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:09:16.440841       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:09:16.440859       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:09:16.440866       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:09:16.741421       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:09:16.741449       1 metrics.go:61] Registering metrics
	I0916 11:09:16.741493       1 controller.go:374] Syncing nftables rules
	I0916 11:09:26.443817       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:09:26.443889       1 main.go:299] handling current node
	I0916 11:09:36.443817       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:09:36.443873       1 main.go:299] handling current node
	I0916 11:09:46.444993       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:09:46.445026       1 main.go:299] handling current node
	I0916 11:09:56.448730       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:09:56.448775       1 main.go:299] handling current node
	I0916 11:10:06.442481       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:10:06.442528       1 main.go:299] handling current node
	I0916 11:10:16.440913       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:10:16.440948       1 main.go:299] handling current node
	I0916 11:10:26.440992       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:10:26.441030       1 main.go:299] handling current node
	I0916 11:10:36.443855       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:10:36.443895       1 main.go:299] handling current node
	
	
	==> kube-apiserver [5e66ac9a14fe5a4058d0e0a13c533aa90cf43e7e2829bee2c2b36cb49e2bdefd] <==
	I0916 11:08:53.520192       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0916 11:08:53.521141       1 apf_controller.go:253] Running API Priority and Fairness config worker
	I0916 11:08:53.520210       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0916 11:08:54.353949       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0916 11:08:54.353987       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0916 11:08:54.361543       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0916 11:08:54.365717       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:08:54.365739       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0916 11:08:54.756312       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:08:54.792245       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0916 11:08:54.860469       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0916 11:08:54.861548       1 controller.go:606] quota admission added evaluator for: endpoints
	I0916 11:08:54.865453       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:08:55.892699       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0916 11:08:56.495384       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0916 11:08:56.665190       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0916 11:09:01.882424       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:09:12.124421       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0916 11:09:12.248848       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0916 11:09:28.780739       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:09:28.780781       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:09:28.780806       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:10:06.948682       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:10:06.948905       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:10:06.948926       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [8e878c306812f7218314ae530ad95c7f7a54c927d987295008ebfa1401dcc21c] <==
	I0916 11:09:12.046684       1 shared_informer.go:247] Caches are synced for crt configmap 
	I0916 11:09:12.046840       1 shared_informer.go:247] Caches are synced for GC 
	I0916 11:09:12.048076       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0916 11:09:12.119917       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0916 11:09:12.130067       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-txszz"
	I0916 11:09:12.131917       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w2kp4"
	I0916 11:09:12.246156       1 shared_informer.go:247] Caches are synced for deployment 
	I0916 11:09:12.246176       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0916 11:09:12.246193       1 shared_informer.go:247] Caches are synced for disruption 
	I0916 11:09:12.246220       1 disruption.go:339] Sending events to api server.
	I0916 11:09:12.248247       1 shared_informer.go:247] Caches are synced for resource quota 
	I0916 11:09:12.250904       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0916 11:09:12.254472       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-lgf42"
	I0916 11:09:12.261635       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-78djj"
	I0916 11:09:12.425820       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0916 11:09:12.726025       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0916 11:09:12.819908       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0916 11:09:12.819938       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0916 11:09:13.096725       1 request.go:655] Throttling request took 1.049972591s, request: GET:https://192.168.103.2:8443/apis/autoscaling/v2beta1?timeout=32s
	I0916 11:09:13.339089       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0916 11:09:13.344204       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-lgf42"
	I0916 11:09:13.897597       1 shared_informer.go:240] Waiting for caches to sync for resource quota
	I0916 11:09:13.897639       1 shared_informer.go:247] Caches are synced for resource quota 
	I0916 11:10:38.906562       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0916 11:10:39.919163       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-4f2jl"
	
	
	==> kube-proxy [a442530bd3eed8c0c19667f9fb758696fcd53da538f2ae250a155372b1f2574e] <==
	I0916 11:09:13.322536       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0916 11:09:13.322732       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0916 11:09:13.345840       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 11:09:13.345951       1 server_others.go:185] Using iptables Proxier.
	I0916 11:09:13.346284       1 server.go:650] Version: v1.20.0
	I0916 11:09:13.347687       1 config.go:315] Starting service config controller
	I0916 11:09:13.349932       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 11:09:13.347841       1 config.go:224] Starting endpoint slice config controller
	I0916 11:09:13.420415       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 11:09:13.420676       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0916 11:09:13.450370       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [6b3b4e782188a028982f995826169c7b0e2d5db28caae22d39c553f66b1dbf58] <==
	W0916 11:08:53.425390       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:08:53.425498       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:08:53.425546       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:08:53.425566       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:08:53.445666       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:08:53.445756       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:08:53.445770       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0916 11:08:53.445859       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0916 11:08:53.447314       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:08:53.447706       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:53.447999       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:08:53.448116       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:08:53.448269       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:08:53.448478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:53.448860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:08:53.448864       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:08:53.449019       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:08:53.449164       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:08:53.450105       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:08:53.450247       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:08:54.410511       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:08:54.433702       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:08:54.472291       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:08:54.592362       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0916 11:08:56.246004       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.829232    2075 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79": failed to find network info for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.829320    2075 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-lgf42_kube-system(30c8c5e2-3068-4ddf-bcfa-a514dee78dea)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79": failed to find network info for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.829339    2075 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-lgf42_kube-system(30c8c5e2-3068-4ddf-bcfa-a514dee78dea)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79": failed to find network info for sandbox "3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.829403    2075 pod_workers.go:191] Error syncing pod 30c8c5e2-3068-4ddf-bcfa-a514dee78dea ("coredns-74ff55c5b-lgf42_kube-system(30c8c5e2-3068-4ddf-bcfa-a514dee78dea)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-lgf42_kube-system(30c8c5e2-3068-4ddf-bcfa-a514dee78dea)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-lgf42_kube-system(30c8c5e2-3068-4ddf-bcfa-a514dee78dea)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79\": failed to find network info for sandbox \"3a6e2da6a3e56c5a6f73b13065dcd336f8732b250981a6b8f6de9228323aab79\""
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.836368    2075 remote_runtime.go:116] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5": failed to find network info for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.836458    2075 kuberuntime_sandbox.go:70] CreatePodSandbox for pod "coredns-74ff55c5b-78djj_kube-system(c118a29b-0828-40a2-9653-f2d3268eb8cd)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5": failed to find network info for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.836480    2075 kuberuntime_manager.go:755] createPodSandbox for pod "coredns-74ff55c5b-78djj_kube-system(c118a29b-0828-40a2-9653-f2d3268eb8cd)" failed: rpc error: code = Unknown desc = failed to setup network for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5": failed to find network info for sandbox "6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5"
	Sep 16 11:09:12 old-k8s-version-371039 kubelet[2075]: E0916 11:09:12.836537    2075 pod_workers.go:191] Error syncing pod c118a29b-0828-40a2-9653-f2d3268eb8cd ("coredns-74ff55c5b-78djj_kube-system(c118a29b-0828-40a2-9653-f2d3268eb8cd)"), skipping: failed to "CreatePodSandbox" for "coredns-74ff55c5b-78djj_kube-system(c118a29b-0828-40a2-9653-f2d3268eb8cd)" with CreatePodSandboxError: "CreatePodSandbox for pod \"coredns-74ff55c5b-78djj_kube-system(c118a29b-0828-40a2-9653-f2d3268eb8cd)\" failed: rpc error: code = Unknown desc = failed to setup network for sandbox \"6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5\": failed to find network info for sandbox \"6262a29949d3b2dd4f285aa4b3f78d4dda3258584a03aab14b23a9e8529c8fe5\""
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.521958    2075 topology_manager.go:187] [topologymanager] Topology Admit Handler
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.527538    2075 reconciler.go:196] operationExecutor.UnmountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-config-volume") pod "30c8c5e2-3068-4ddf-bcfa-a514dee78dea" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea")
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.527585    2075 reconciler.go:196] operationExecutor.UnmountVolume started for volume "coredns-token-vcrsr" (UniqueName: "kubernetes.io/secret/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-coredns-token-vcrsr") pod "30c8c5e2-3068-4ddf-bcfa-a514dee78dea" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea")
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: W0916 11:09:13.527857    2075 empty_dir.go:520] Warning: Failed to clear quota on /var/lib/kubelet/pods/30c8c5e2-3068-4ddf-bcfa-a514dee78dea/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.528062    2075 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-config-volume" (OuterVolumeSpecName: "config-volume") pod "30c8c5e2-3068-4ddf-bcfa-a514dee78dea" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.530229    2075 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-coredns-token-vcrsr" (OuterVolumeSpecName: "coredns-token-vcrsr") pod "30c8c5e2-3068-4ddf-bcfa-a514dee78dea" (UID: "30c8c5e2-3068-4ddf-bcfa-a514dee78dea"). InnerVolumeSpecName "coredns-token-vcrsr". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.627931    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/fdaf9d37-19ec-4a4e-840e-b44e7158d798-tmp") pod "storage-provisioner" (UID: "fdaf9d37-19ec-4a4e-840e-b44e7158d798")
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.627974    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-4gk79" (UniqueName: "kubernetes.io/secret/fdaf9d37-19ec-4a4e-840e-b44e7158d798-storage-provisioner-token-4gk79") pod "storage-provisioner" (UID: "fdaf9d37-19ec-4a4e-840e-b44e7158d798")
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.627999    2075 reconciler.go:319] Volume detached for volume "config-volume" (UniqueName: "kubernetes.io/configmap/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-config-volume") on node "old-k8s-version-371039" DevicePath ""
	Sep 16 11:09:13 old-k8s-version-371039 kubelet[2075]: I0916 11:09:13.628011    2075 reconciler.go:319] Volume detached for volume "coredns-token-vcrsr" (UniqueName: "kubernetes.io/secret/30c8c5e2-3068-4ddf-bcfa-a514dee78dea-coredns-token-vcrsr") on node "old-k8s-version-371039" DevicePath ""
	Sep 16 11:10:39 old-k8s-version-371039 kubelet[2075]: I0916 11:10:39.923031    2075 topology_manager.go:187] [topologymanager] Topology Admit Handler
	Sep 16 11:10:40 old-k8s-version-371039 kubelet[2075]: I0916 11:10:40.119063    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-dir" (UniqueName: "kubernetes.io/empty-dir/480e2907-201f-461f-aa3d-d24598e679d1-tmp-dir") pod "metrics-server-9975d5f86-4f2jl" (UID: "480e2907-201f-461f-aa3d-d24598e679d1")
	Sep 16 11:10:40 old-k8s-version-371039 kubelet[2075]: I0916 11:10:40.119110    2075 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "metrics-server-token-cs7nf" (UniqueName: "kubernetes.io/secret/480e2907-201f-461f-aa3d-d24598e679d1-metrics-server-token-cs7nf") pod "metrics-server-9975d5f86-4f2jl" (UID: "480e2907-201f-461f-aa3d-d24598e679d1")
	Sep 16 11:10:40 old-k8s-version-371039 kubelet[2075]: E0916 11:10:40.371104    2075 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host
	Sep 16 11:10:40 old-k8s-version-371039 kubelet[2075]: E0916 11:10:40.371180    2075 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host
	Sep 16 11:10:40 old-k8s-version-371039 kubelet[2075]: E0916 11:10:40.371358    2075 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-cs7nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host
	Sep 16 11:10:40 old-k8s-version-371039 kubelet[2075]: E0916 11:10:40.371414    2075 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
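The fake.domain pull failures above are expected: the cluster config later in this log sets CustomAddonRegistries:map[MetricsServer:fake.domain], deliberately pointing metrics-server at an unresolvable registry to exercise the error path. A minimal Go sketch (not part of the test suite) reproducing just the DNS half of the failure:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// fake.domain should never resolve; the error mirrors the
		// "lookup fake.domain ... no such host" kubelet messages above.
		if addrs, err := net.LookupHost("fake.domain"); err != nil {
			fmt.Println("lookup failed as expected:", err)
		} else {
			fmt.Println("unexpectedly resolved:", addrs)
		}
	}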
	
	
	==> storage-provisioner [bdf8504c18d6de710f0dc4d8c6f20cc1f4aa180f8b245b99545016551fa68aa3] <==
	I0916 11:09:13.972762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:09:13.980679       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:09:13.980724       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:09:13.987659       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:09:13.987719       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3df43ad2-abd4-4d32-b26b-91fa0eea8673", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-371039_ba54aacd-67bb-46f0-b4ba-d01754da79ef became leader
	I0916 11:09:13.987846       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-371039_ba54aacd-67bb-46f0-b4ba-d01754da79ef!
	I0916 11:09:14.088020       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-371039_ba54aacd-67bb-46f0-b4ba-d01754da79ef!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-371039 -n old-k8s-version-371039
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-371039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context old-k8s-version-371039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (534.892µs)
helpers_test.go:263: kubectl --context old-k8s-version-371039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.57s)
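The "exec format error" from fork/exec means the kernel refused to execute the kubectl binary at /usr/local/bin/kubectl, which typically indicates a binary built for a different architecture (or a corrupt download) on the test host rather than any cluster-side problem. A hypothetical Go check of the binary's ELF architecture:

	package main

	import (
		"debug/elf"
		"fmt"
		"log"
	)

	func main() {
		// Path taken from the failing helper command above.
		f, err := elf.Open("/usr/local/bin/kubectl")
		if err != nil {
			log.Fatal(err) // a non-ELF or truncated file fails here
		}
		defer f.Close()
		// An amd64 host needs EM_X86_64; anything else would explain
		// the "exec format error".
		fmt.Println("ELF machine:", f.Machine)
	}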

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (377.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-371039 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-371039 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m13.253742359s)

-- stdout --
	* [old-k8s-version-371039] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-371039" primary control-plane node in "old-k8s-version-371039" cluster
	* Pulling base image v0.0.45-1726358845-19644 ...
	* Restarting existing docker container for "old-k8s-version-371039" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-371039 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	
	

-- /stdout --
** stderr ** 
	I0916 11:10:46.801572  283294 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:10:46.801692  283294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:10:46.801702  283294 out.go:358] Setting ErrFile to fd 2...
	I0916 11:10:46.801708  283294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:10:46.801888  283294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:10:46.802460  283294 out.go:352] Setting JSON to false
	I0916 11:10:46.803874  283294 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3191,"bootTime":1726481856,"procs":330,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:10:46.803976  283294 start.go:139] virtualization: kvm guest
	I0916 11:10:46.806689  283294 out.go:177] * [old-k8s-version-371039] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:10:46.808342  283294 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:10:46.808396  283294 notify.go:220] Checking for updates...
	I0916 11:10:46.811239  283294 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:10:46.813076  283294 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:10:46.814450  283294 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:10:46.815948  283294 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:10:46.817320  283294 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:10:46.819163  283294 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:10:46.821005  283294 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0916 11:10:46.822201  283294 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:10:46.846131  283294 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:10:46.846248  283294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:10:46.894486  283294 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-16 11:10:46.884498092 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:10:46.894605  283294 docker.go:318] overlay module found
	I0916 11:10:46.896620  283294 out.go:177] * Using the docker driver based on existing profile
	I0916 11:10:46.897887  283294 start.go:297] selected driver: docker
	I0916 11:10:46.897899  283294 start.go:901] validating driver "docker" against &{Name:old-k8s-version-371039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-371039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:10:46.897991  283294 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:10:46.898781  283294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:10:46.950195  283294 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-16 11:10:46.93711681 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:10:46.950659  283294 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:10:46.950697  283294 cni.go:84] Creating CNI manager for ""
	I0916 11:10:46.950756  283294 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:10:46.950814  283294 start.go:340] cluster config:
	{Name:old-k8s-version-371039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-371039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:10:46.953261  283294 out.go:177] * Starting "old-k8s-version-371039" primary control-plane node in "old-k8s-version-371039" cluster
	I0916 11:10:46.954720  283294 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:10:46.956480  283294 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:10:46.957919  283294 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0916 11:10:46.957985  283294 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0916 11:10:46.958009  283294 cache.go:56] Caching tarball of preloaded images
	I0916 11:10:46.958022  283294 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:10:46.958115  283294 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 11:10:46.958130  283294 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0916 11:10:46.958256  283294 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/config.json ...
	W0916 11:10:46.983116  283294 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:10:46.983139  283294 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:10:46.983232  283294 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:10:46.983256  283294 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:10:46.983261  283294 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:10:46.983271  283294 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:10:46.983282  283294 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:10:47.049982  283294 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:10:47.050027  283294 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:10:47.050070  283294 start.go:360] acquireMachinesLock for old-k8s-version-371039: {Name:mkee7b58040c5212d75aee187b093a1684178371 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:10:47.050132  283294 start.go:364] duration metric: took 41.328µs to acquireMachinesLock for "old-k8s-version-371039"
	I0916 11:10:47.050148  283294 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:10:47.050155  283294 fix.go:54] fixHost starting: 
	I0916 11:10:47.050363  283294 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:10:47.068850  283294 fix.go:112] recreateIfNeeded on old-k8s-version-371039: state=Stopped err=<nil>
	W0916 11:10:47.068879  283294 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:10:47.071276  283294 out.go:177] * Restarting existing docker container for "old-k8s-version-371039" ...
	I0916 11:10:47.072644  283294 cli_runner.go:164] Run: docker start old-k8s-version-371039
	I0916 11:10:47.334480  283294 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:10:47.352274  283294 kic.go:430] container "old-k8s-version-371039" state is running.
	I0916 11:10:47.352677  283294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-371039
	I0916 11:10:47.371207  283294 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/config.json ...
	I0916 11:10:47.371494  283294 machine.go:93] provisionDockerMachine start ...
	I0916 11:10:47.371562  283294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:10:47.392274  283294 main.go:141] libmachine: Using SSH client type: native
	I0916 11:10:47.392522  283294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I0916 11:10:47.392538  283294 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:10:47.393240  283294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37408->127.0.0.1:33073: read: connection reset by peer
	I0916 11:10:50.527108  283294 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-371039
	
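The dial error above is benign: the container had just been restarted and sshd was not yet accepting connections, so libmachine retries and succeeds roughly three seconds later. A rough illustration of that retry pattern (a sketch, not minikube's actual code; 127.0.0.1:33073 is the forwarded SSH port shown in the log):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForPort dials until the forwarded SSH port accepts a TCP
	// connection, mirroring the reset-then-success sequence above.
	func waitForPort(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not reachable within %v", addr, timeout)
	}

	func main() {
		fmt.Println(waitForPort("127.0.0.1:33073", 30*time.Second))
	}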
	I0916 11:10:50.527136  283294 ubuntu.go:169] provisioning hostname "old-k8s-version-371039"
	I0916 11:10:50.527186  283294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:10:50.544925  283294 main.go:141] libmachine: Using SSH client type: native
	I0916 11:10:50.545178  283294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I0916 11:10:50.545201  283294 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-371039 && echo "old-k8s-version-371039" | sudo tee /etc/hostname
	I0916 11:10:50.687680  283294 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-371039
	
	I0916 11:10:50.687824  283294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:10:50.707380  283294 main.go:141] libmachine: Using SSH client type: native
	I0916 11:10:50.707558  283294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I0916 11:10:50.707576  283294 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-371039' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-371039/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-371039' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:10:50.847649  283294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:10:50.847681  283294 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:10:50.847785  283294 ubuntu.go:177] setting up certificates
	I0916 11:10:50.847805  283294 provision.go:84] configureAuth start
	I0916 11:10:50.847867  283294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-371039
	I0916 11:10:50.869205  283294 provision.go:143] copyHostCerts
	I0916 11:10:50.869275  283294 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:10:50.869285  283294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:10:50.869371  283294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:10:50.869483  283294 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:10:50.869494  283294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:10:50.869537  283294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:10:50.869652  283294 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:10:50.869662  283294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:10:50.869701  283294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:10:50.869778  283294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-371039 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-371039]
	I0916 11:10:50.921838  283294 provision.go:177] copyRemoteCerts
	I0916 11:10:50.921901  283294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:10:50.921933  283294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:10:50.939104  283294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/old-k8s-version-371039/id_rsa Username:docker}
	I0916 11:10:51.040084  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0916 11:10:51.064229  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:10:51.088564  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:10:51.113065  283294 provision.go:87] duration metric: took 265.242287ms to configureAuth
	I0916 11:10:51.113097  283294 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:10:51.113307  283294 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:10:51.113320  283294 machine.go:96] duration metric: took 3.741809846s to provisionDockerMachine
	I0916 11:10:51.113329  283294 start.go:293] postStartSetup for "old-k8s-version-371039" (driver="docker")
	I0916 11:10:51.113343  283294 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:10:51.113409  283294 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:10:51.113453  283294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:10:51.131549  283294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/old-k8s-version-371039/id_rsa Username:docker}
	I0916 11:10:51.230173  283294 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:10:51.233271  283294 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:10:51.233302  283294 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:10:51.233310  283294 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:10:51.233316  283294 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:10:51.233325  283294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:10:51.233368  283294 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:10:51.233476  283294 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:10:51.233583  283294 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:10:51.243000  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:10:51.266467  283294 start.go:296] duration metric: took 153.1189ms for postStartSetup
	I0916 11:10:51.266566  283294 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:10:51.266621  283294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:10:51.284139  283294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/old-k8s-version-371039/id_rsa Username:docker}
	I0916 11:10:51.380637  283294 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:10:51.384930  283294 fix.go:56] duration metric: took 4.334769065s for fixHost
	I0916 11:10:51.384953  283294 start.go:83] releasing machines lock for "old-k8s-version-371039", held for 4.334811733s
	I0916 11:10:51.385018  283294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-371039
	I0916 11:10:51.401939  283294 ssh_runner.go:195] Run: cat /version.json
	I0916 11:10:51.402004  283294 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:10:51.402053  283294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:10:51.402006  283294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:10:51.421774  283294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/old-k8s-version-371039/id_rsa Username:docker}
	I0916 11:10:51.422155  283294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/old-k8s-version-371039/id_rsa Username:docker}
	I0916 11:10:51.589162  283294 ssh_runner.go:195] Run: systemctl --version
	I0916 11:10:51.593438  283294 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:10:51.597589  283294 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:10:51.615651  283294 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:10:51.615766  283294 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:10:51.624101  283294 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 11:10:51.624128  283294 start.go:495] detecting cgroup driver to use...
	I0916 11:10:51.624161  283294 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:10:51.624199  283294 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:10:51.636842  283294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:10:51.647590  283294 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:10:51.647654  283294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:10:51.659711  283294 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:10:51.670608  283294 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:10:51.745050  283294 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:10:51.819789  283294 docker.go:233] disabling docker service ...
	I0916 11:10:51.819852  283294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:10:51.831899  283294 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:10:51.842601  283294 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:10:51.920945  283294 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:10:51.994325  283294 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:10:52.004888  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:10:52.020104  283294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0916 11:10:52.029219  283294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:10:52.038479  283294 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:10:52.038544  283294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:10:52.047413  283294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:10:52.056583  283294 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:10:52.065827  283294 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:10:52.075884  283294 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:10:52.085133  283294 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
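
Net effect of this block: /etc/crictl.yaml now points crictl at containerd's socket, and the sed series has rewritten /etc/containerd/config.toml to select cgroupfs, the runc v2 shim, the pause:3.2 sandbox image, and /etc/cni/net.d as the CNI conf dir. A quick spot-check of both files (expected values per the log; the exact TOML layout varies by containerd version):

    # crictl should now target containerd's socket:
    cat /etc/crictl.yaml   # runtime-endpoint: unix:///run/containerd/containerd.sock
    # And the keys the sed passes rewrote:
    sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir|runc\.v2' /etc/containerd/config.toml
    # Expected fragment, roughly:
    #   sandbox_image = "registry.k8s.io/pause:3.2"
    #   SystemdCgroup = false
    #   runtime_type = "io.containerd.runc.v2"
    #   conf_dir = "/etc/cni/net.d"
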
	I0916 11:10:52.094098  283294 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:10:52.101669  283294 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:10:52.109369  283294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:10:52.180802  283294 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 11:10:52.277812  283294 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:10:52.277876  283294 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:10:52.281434  283294 start.go:563] Will wait 60s for crictl version
	I0916 11:10:52.281478  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:10:52.284493  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:10:52.318033  283294 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
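
minikube polls for the socket with stat rather than sleeping blindly. A hand-rolled equivalent of that 60-second wait, ending with the same crictl probe:

    # Wait up to 60s for containerd's socket, then query the runtime:
    sock=/run/containerd/containerd.sock
    for _ in $(seq 1 60); do
      [ -S "$sock" ] && break
      sleep 1
    done
    sudo crictl version   # should report RuntimeName: containerd
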
	I0916 11:10:52.318092  283294 ssh_runner.go:195] Run: containerd --version
	I0916 11:10:52.339936  283294 ssh_runner.go:195] Run: containerd --version
	I0916 11:10:52.364389  283294 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I0916 11:10:52.365653  283294 cli_runner.go:164] Run: docker network inspect old-k8s-version-371039 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:10:52.383476  283294 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:10:52.387032  283294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
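
The one-liner above is an idempotent /etc/hosts upsert: filter out any stale host.minikube.internal line, append the fresh mapping, and copy the result back through sudo (a plain > redirect would fail, since the unprivileged shell rather than sudo opens the target file). Spelled out with the same gateway IP:

    # Idempotent upsert of the host.minikube.internal mapping:
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.103.1\thost.minikube.internal\n'
    } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
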
	I0916 11:10:52.397371  283294 kubeadm.go:883] updating cluster {Name:old-k8s-version-371039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-371039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:10:52.397480  283294 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0916 11:10:52.397526  283294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:10:52.429815  283294 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:10:52.429873  283294 ssh_runner.go:195] Run: which lz4
	I0916 11:10:52.433237  283294 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0916 11:10:52.436491  283294 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0916 11:10:52.436521  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (472503869 bytes)
	I0916 11:10:53.323375  283294 containerd.go:563] duration metric: took 890.171717ms to copy over tarball
	I0916 11:10:53.323448  283294 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0916 11:10:55.854093  283294 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.53061068s)
	I0916 11:10:55.854127  283294 containerd.go:570] duration metric: took 2.530720429s to extract the tarball
	I0916 11:10:55.854135  283294 ssh_runner.go:146] rm: /preloaded.tar.lz4
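
With no preloaded images found, the ~472 MB preload tarball is copied in and unpacked directly into /var, restoring containerd's content store and snapshots in one pass. The equivalent manual sequence (same flags and paths as the log; assumes lz4 is present in the guest):

    # Unpack the preload into /var, preserving file capabilities:
    sudo tar --xattrs --xattrs-include security.capability \
      -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    sudo systemctl restart containerd   # pick up the restored content store
    sudo crictl images                  # preloaded images should now be listed
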
	I0916 11:10:55.924295  283294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:10:55.996488  283294 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 11:10:56.097002  283294 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:10:56.130556  283294 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0916 11:10:56.130588  283294 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0916 11:10:56.130654  283294 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:10:56.130692  283294 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:10:56.130708  283294 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I0916 11:10:56.130677  283294 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:10:56.130730  283294 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:10:56.130743  283294 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:10:56.130706  283294 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:10:56.130744  283294 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I0916 11:10:56.132126  283294 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:10:56.132183  283294 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:10:56.132199  283294 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:10:56.132192  283294 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:10:56.132195  283294 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:10:56.132234  283294 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0916 11:10:56.132249  283294 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0916 11:10:56.132194  283294 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:10:56.328642  283294 containerd.go:267] Checking existence of image with name "registry.k8s.io/coredns:1.7.0" and sha "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"
	I0916 11:10:56.328699  283294 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/coredns:1.7.0
	I0916 11:10:56.350494  283294 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0916 11:10:56.350538  283294 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0916 11:10:56.350597  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:10:56.354102  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:10:56.366965  283294 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-apiserver:v1.20.0" and sha "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	I0916 11:10:56.367020  283294 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:10:56.376072  283294 containerd.go:267] Checking existence of image with name "registry.k8s.io/pause:3.2" and sha "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	I0916 11:10:56.376136  283294 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/pause:3.2
	I0916 11:10:56.381955  283294 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-proxy:v1.20.0" and sha "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
	I0916 11:10:56.382027  283294 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:10:56.387585  283294 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-scheduler:v1.20.0" and sha "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899"
	I0916 11:10:56.387713  283294 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:10:56.388995  283294 containerd.go:267] Checking existence of image with name "registry.k8s.io/etcd:3.4.13-0" and sha "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934"
	I0916 11:10:56.389061  283294 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/etcd:3.4.13-0
	I0916 11:10:56.389361  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:10:56.390392  283294 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0916 11:10:56.390431  283294 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:10:56.390470  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:10:56.392981  283294 containerd.go:267] Checking existence of image with name "registry.k8s.io/kube-controller-manager:v1.20.0" and sha "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080"
	I0916 11:10:56.393033  283294 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:10:56.424103  283294 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0916 11:10:56.424150  283294 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0916 11:10:56.424196  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:10:56.425653  283294 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0916 11:10:56.425734  283294 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:10:56.425797  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:10:56.432761  283294 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0916 11:10:56.432818  283294 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:10:56.432858  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:10:56.433296  283294 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0916 11:10:56.433336  283294 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0916 11:10:56.433388  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:10:56.450582  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0916 11:10:56.450641  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:10:56.450654  283294 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0916 11:10:56.450693  283294 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:10:56.450705  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:10:56.450734  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:10:56.450779  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:10:56.450802  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:10:56.450848  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:10:56.627535  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:10:56.627563  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:10:56.627667  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:10:56.629432  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:10:56.629495  283294 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0916 11:10:56.629549  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:10:56.629605  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:10:56.751304  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0916 11:10:56.751387  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0916 11:10:56.751450  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0916 11:10:56.751516  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:10:56.751585  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0916 11:10:56.824214  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0916 11:10:56.941789  283294 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0916 11:10:56.941863  283294 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0916 11:10:56.941893  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0916 11:10:56.941942  283294 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0916 11:10:56.941994  283294 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0916 11:10:56.942408  283294 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0916 11:10:56.976881  283294 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0916 11:10:57.242347  283294 containerd.go:267] Checking existence of image with name "gcr.io/k8s-minikube/storage-provisioner:v5" and sha "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0916 11:10:57.242404  283294 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images ls name==gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:10:57.269331  283294 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0916 11:10:57.269375  283294 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:10:57.269412  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:10:57.272980  283294 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:10:57.362557  283294 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0916 11:10:57.362646  283294 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:10:57.366025  283294 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0916 11:10:57.366048  283294 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:10:57.366096  283294 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0916 11:10:57.736816  283294 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0916 11:10:57.736875  283294 cache_images.go:92] duration metric: took 1.60627335s to LoadCachedImages
	W0916 11:10:57.736949  283294 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19651-3687/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
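
The LoadCachedImages flow above lists each image by name with ctr, compares its digest against the expected one, removes mismatches with crictl, and re-imports from the on-disk cache. Only storage-provisioner had a cache file on this host, so the coredns load fails the stat and the warning above is emitted; the remaining images are pulled during bringup instead. The per-image step, sketched in bash for the one image that succeeded:

    img=gcr.io/k8s-minikube/storage-provisioner:v5
    # Inspect the stored copy (name and digest) in the k8s.io namespace:
    sudo ctr -n=k8s.io images ls "name==$img"
    # On digest mismatch, drop it and import the cached tarball instead:
    sudo crictl rmi "$img"
    sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
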
	I0916 11:10:57.736967  283294 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.20.0 containerd true true} ...
	I0916 11:10:57.737092  283294 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-371039 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-371039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:10:57.737161  283294 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:10:57.770634  283294 cni.go:84] Creating CNI manager for ""
	I0916 11:10:57.770655  283294 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:10:57.770663  283294 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:10:57.770680  283294 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-371039 NodeName:old-k8s-version-371039 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0916 11:10:57.770819  283294 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-371039"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
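
The rendered config above targets the v1beta2 kubeadm API that v1.20 expects; it is written to kubeadm.yaml.new and compared against any existing copy before deciding whether a soft restart suffices (see the diff at 11:10:58.406479 below). Two ways to eyeball it by hand, assuming the kubeadm binary minikube staged:

    KUBEADM=/var/lib/minikube/binaries/v1.20.0/kubeadm
    # Compare against upstream defaults for the same config API versions:
    sudo "$KUBEADM" config print init-defaults
    # The diff minikube itself runs to choose between restart and re-init:
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
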
	I0916 11:10:57.770876  283294 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0916 11:10:57.779454  283294 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:10:57.779531  283294 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:10:57.787991  283294 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (443 bytes)
	I0916 11:10:57.804198  283294 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:10:57.820462  283294 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2128 bytes)
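
Three files were just pushed from memory: the kubelet drop-in whose ExecStart is logged at kubeadm.go:946 above, the kubelet unit itself, and kubeadm.yaml.new. Installing such a drop-in by hand follows the same shape (flags abridged here; the real ExecStart also carries --bootstrap-kubeconfig, --config, --kubeconfig, and --network-plugin=cni):

    # Abridged reproduction of the 10-kubeadm.conf drop-in:
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --container-runtime=remote \
      --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
      --hostname-override=old-k8s-version-371039 --node-ip=192.168.103.2
    EOF
    sudo systemctl daemon-reload && sudo systemctl start kubelet
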
	I0916 11:10:57.836886  283294 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:10:57.840249  283294 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:10:57.849907  283294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:10:57.924394  283294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:10:57.937566  283294 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039 for IP: 192.168.103.2
	I0916 11:10:57.937595  283294 certs.go:194] generating shared ca certs ...
	I0916 11:10:57.937625  283294 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:10:57.937760  283294 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:10:57.937798  283294 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:10:57.937807  283294 certs.go:256] generating profile certs ...
	I0916 11:10:57.937876  283294 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.key
	I0916 11:10:57.937930  283294 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key.2be0dd44
	I0916 11:10:57.937965  283294 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.key
	I0916 11:10:57.938057  283294 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:10:57.938084  283294 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:10:57.938101  283294 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:10:57.938123  283294 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:10:57.938146  283294 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:10:57.938182  283294 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:10:57.938218  283294 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:10:57.938805  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:10:57.963911  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:10:57.988302  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:10:58.023689  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:10:58.047203  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0916 11:10:58.071008  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:10:58.095347  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:10:58.118304  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:10:58.140804  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:10:58.163377  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:10:58.185734  283294 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:10:58.207519  283294 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:10:58.223810  283294 ssh_runner.go:195] Run: openssl version
	I0916 11:10:58.228751  283294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:10:58.237982  283294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:10:58.241143  283294 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:10:58.241193  283294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:10:58.247443  283294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:10:58.255233  283294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:10:58.264163  283294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:10:58.267398  283294 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:10:58.267445  283294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:10:58.273975  283294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:10:58.282357  283294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:10:58.291027  283294 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:10:58.294339  283294 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:10:58.294387  283294 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:10:58.300760  283294 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
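
Each PEM is installed twice: under its own name in /usr/share/ca-certificates, and as an OpenSSL subject-hash symlink (b5213941.0, 3ec20f2e.0, 51391683.0 here) so that TLS stacks scanning /etc/ssl/certs by hash can find it. The hash-link step in isolation:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")   # b5213941 for this CA
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
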
	I0916 11:10:58.309197  283294 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:10:58.312711  283294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:10:58.318939  283294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:10:58.325930  283294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:10:58.332063  283294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:10:58.338506  283294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:10:58.344759  283294 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
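
Each -checkend 86400 probe exits non-zero if the certificate expires within 24 hours, which is what would force regeneration on this restart path. The same checks, batched:

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/$c.crt" || echo "$c expires within 24h"
    done
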
	I0916 11:10:58.351096  283294 kubeadm.go:392] StartCluster: {Name:old-k8s-version-371039 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-371039 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:10:58.351188  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:10:58.351248  283294 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:10:58.385295  283294 cri.go:89] found id: ""
	I0916 11:10:58.385355  283294 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:10:58.393895  283294 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 11:10:58.393915  283294 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 11:10:58.393972  283294 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 11:10:58.402959  283294 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:10:58.403776  283294 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-371039" does not appear in /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:10:58.404256  283294 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3687/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-371039" cluster setting kubeconfig missing "old-k8s-version-371039" context setting]
	I0916 11:10:58.404996  283294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:10:58.406479  283294 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 11:10:58.415962  283294 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.103.2
	I0916 11:10:58.416001  283294 kubeadm.go:597] duration metric: took 22.081392ms to restartPrimaryControlPlane
	I0916 11:10:58.416013  283294 kubeadm.go:394] duration metric: took 64.924442ms to StartCluster
	I0916 11:10:58.416044  283294 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:10:58.416121  283294 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:10:58.417473  283294 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:10:58.417691  283294 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:10:58.417754  283294 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:10:58.417882  283294 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-371039"
	I0916 11:10:58.417893  283294 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-371039"
	I0916 11:10:58.417906  283294 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-371039"
	I0916 11:10:58.417904  283294 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:10:58.417913  283294 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-371039"
	W0916 11:10:58.417915  283294 addons.go:243] addon storage-provisioner should already be in state true
	I0916 11:10:58.417918  283294 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-371039"
	I0916 11:10:58.417938  283294 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-371039"
	I0916 11:10:58.417939  283294 addons.go:69] Setting dashboard=true in profile "old-k8s-version-371039"
	I0916 11:10:58.417946  283294 host.go:66] Checking if "old-k8s-version-371039" exists ...
	W0916 11:10:58.417947  283294 addons.go:243] addon metrics-server should already be in state true
	I0916 11:10:58.417969  283294 addons.go:234] Setting addon dashboard=true in "old-k8s-version-371039"
	I0916 11:10:58.417980  283294 host.go:66] Checking if "old-k8s-version-371039" exists ...
	W0916 11:10:58.417983  283294 addons.go:243] addon dashboard should already be in state true
	I0916 11:10:58.418015  283294 host.go:66] Checking if "old-k8s-version-371039" exists ...
	I0916 11:10:58.418249  283294 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:10:58.418429  283294 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:10:58.418462  283294 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:10:58.418463  283294 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:10:58.420119  283294 out.go:177] * Verifying Kubernetes components...
	I0916 11:10:58.421904  283294 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:10:58.443463  283294 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-371039"
	W0916 11:10:58.443486  283294 addons.go:243] addon default-storageclass should already be in state true
	I0916 11:10:58.443507  283294 host.go:66] Checking if "old-k8s-version-371039" exists ...
	I0916 11:10:58.444136  283294 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0916 11:10:58.444141  283294 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0916 11:10:58.444170  283294 cli_runner.go:164] Run: docker container inspect old-k8s-version-371039 --format={{.State.Status}}
	I0916 11:10:58.446080  283294 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:10:58.446092  283294 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 11:10:58.446108  283294 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 11:10:58.446197  283294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:10:58.447517  283294 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0916 11:10:58.447585  283294 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:10:58.447602  283294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:10:58.447642  283294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:10:58.448852  283294 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0916 11:10:58.448876  283294 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0916 11:10:58.448947  283294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:10:58.466613  283294 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:10:58.466637  283294 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:10:58.466682  283294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-371039
	I0916 11:10:58.475118  283294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/old-k8s-version-371039/id_rsa Username:docker}
	I0916 11:10:58.477279  283294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/old-k8s-version-371039/id_rsa Username:docker}
	I0916 11:10:58.481212  283294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/old-k8s-version-371039/id_rsa Username:docker}
	I0916 11:10:58.491664  283294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/old-k8s-version-371039/id_rsa Username:docker}
	I0916 11:10:58.520874  283294 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:10:58.532648  283294 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-371039" to be "Ready" ...
	I0916 11:10:58.592114  283294 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0916 11:10:58.592142  283294 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0916 11:10:58.593737  283294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 11:10:58.593757  283294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0916 11:10:58.595182  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:10:58.598616  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:10:58.612689  283294 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0916 11:10:58.612720  283294 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0916 11:10:58.613888  283294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 11:10:58.613914  283294 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 11:10:58.631478  283294 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0916 11:10:58.631502  283294 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0916 11:10:58.634744  283294 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:10:58.634764  283294 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 11:10:58.652711  283294 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0916 11:10:58.652737  283294 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0916 11:10:58.655658  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:10:58.735202  283294 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0916 11:10:58.735231  283294 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0916 11:10:58.748922  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:10:58.748952  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:58.748960  283294 retry.go:31] will retry after 230.064426ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:58.748967  283294 retry.go:31] will retry after 328.716215ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:58.758838  283294 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0916 11:10:58.758867  283294 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0916 11:10:58.825843  283294 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0916 11:10:58.825870  283294 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0916 11:10:58.827307  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:58.827335  283294 retry.go:31] will retry after 271.797306ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:58.843115  283294 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0916 11:10:58.843141  283294 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0916 11:10:58.860231  283294 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:10:58.860255  283294 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0916 11:10:58.877823  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:10:58.934013  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:58.934058  283294 retry.go:31] will retry after 139.906393ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:58.980218  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:10:59.037522  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:59.037556  283294 retry.go:31] will retry after 418.045226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
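
Every "connection refused" in this stretch is expected: the apiserver behind localhost:8443 is still coming up after the restart, so each addon apply fails fast and retry.go reschedules it with a short randomized delay instead of blocking. The same pattern as a self-contained loop (manifest path taken from the log; retry count and backoff values are illustrative):

    # Retry kubectl apply with a crude capped backoff until the apiserver answers:
    n=0
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
          /var/lib/minikube/binaries/v1.20.0/kubectl apply --force \
          -f /etc/kubernetes/addons/storage-provisioner.yaml; do
      n=$((n + 1)); [ "$n" -ge 20 ] && { echo 'giving up' >&2; break; }
      sleep $(( n < 5 ? n : 5 ))
    done
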
	I0916 11:10:59.074770  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:10:59.078168  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:10:59.099271  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 11:10:59.149973  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:59.150010  283294 retry.go:31] will retry after 371.96823ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:10:59.152033  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:59.152065  283294 retry.go:31] will retry after 340.539604ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:10:59.167349  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:59.167405  283294 retry.go:31] will retry after 355.063171ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
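The "will retry after ..." delays above (139ms, 418ms, 372ms, growing toward seconds) indicate a jittered, growing backoff between attempts. A sketch of that pattern follows; it assumes nothing about retry.go's real implementation and only reproduces the shape of the intervals seen in the log:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs fn until it succeeds or attempts run out, sleeping a jittered,
	// exponentially growing delay between failures.
	func retry(maxAttempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < maxAttempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base * time.Duration(1<<i)
			delay += time.Duration(rand.Int63n(int64(delay))) // jitter, hence the irregular intervals
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		n := 0
		_ = retry(5, 150*time.Millisecond, func() error {
			if n++; n < 4 {
				return fmt.Errorf("connection to the server localhost:8443 was refused")
			}
			return nil
		})
	}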
	I0916 11:10:59.456785  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:10:59.493248  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:10:59.515062  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:59.515101  283294 retry.go:31] will retry after 285.861358ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:59.522144  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:10:59.522577  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 11:10:59.558527  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:59.558573  283294 retry.go:31] will retry after 393.178013ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:10:59.625481  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:59.625521  283294 retry.go:31] will retry after 734.446875ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:10:59.627583  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:59.627608  283294 retry.go:31] will retry after 404.007264ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:59.801945  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:10:59.859626  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:59.859668  283294 retry.go:31] will retry after 772.157376ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:10:59.952892  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:11:00.011329  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:00.011361  283294 retry.go:31] will retry after 652.403805ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:00.032531  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 11:11:00.090973  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:00.091019  283294 retry.go:31] will retry after 567.417431ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:00.360177  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:11:00.417838  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:00.417866  283294 retry.go:31] will retry after 479.3001ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:00.533438  283294 node_ready.go:53] error getting node "old-k8s-version-371039": Get "https://192.168.103.2:8443/api/v1/nodes/old-k8s-version-371039": dial tcp 192.168.103.2:8443: connect: connection refused
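The node_ready lines poll the node object directly, here still failing because 192.168.103.2:8443 refuses connections. A hedged client-go sketch of such a poll; the node name and kubeconfig path are taken from the log, and the loop is illustrative rather than minikube's node_ready.go:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-371039", metav1.GetOptions{})
			if err != nil {
				fmt.Println(`error getting node "old-k8s-version-371039":`, err)
			} else {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println(`node "old-k8s-version-371039" has status "Ready":"True"`)
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
	}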
	I0916 11:11:00.632647  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:11:00.659026  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:11:00.664378  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:11:00.695534  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:00.695578  283294 retry.go:31] will retry after 691.022815ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:11:00.738146  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:00.738186  283294 retry.go:31] will retry after 1.160812345s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0916 11:11:00.740078  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:00.740109  283294 retry.go:31] will retry after 1.005464742s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:00.897371  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:11:00.957858  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:00.957891  283294 retry.go:31] will retry after 1.60532633s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:01.386910  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:11:01.443957  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:01.443991  283294 retry.go:31] will retry after 2.175401876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:01.745817  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:11:01.804552  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:01.804593  283294 retry.go:31] will retry after 972.392068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:01.899930  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 11:11:01.958331  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:01.958370  283294 retry.go:31] will retry after 1.123776125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:02.533879  283294 node_ready.go:53] error getting node "old-k8s-version-371039": Get "https://192.168.103.2:8443/api/v1/nodes/old-k8s-version-371039": dial tcp 192.168.103.2:8443: connect: connection refused
	I0916 11:11:02.564064  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:11:02.624456  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:02.624493  283294 retry.go:31] will retry after 1.229922139s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:02.777764  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0916 11:11:02.834884  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:02.834913  283294 retry.go:31] will retry after 3.387749584s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:03.082794  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0916 11:11:03.141153  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:03.141189  283294 retry.go:31] will retry after 2.589314354s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:03.620261  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0916 11:11:03.680105  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:03.680137  283294 retry.go:31] will retry after 2.611963287s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:03.855445  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0916 11:11:03.932197  283294 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:03.932243  283294 retry.go:31] will retry after 2.463945304s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0916 11:11:05.731079  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:11:06.223170  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:11:06.292561  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:11:06.396968  283294 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:11:08.735716  283294 node_ready.go:49] node "old-k8s-version-371039" has status "Ready":"True"
	I0916 11:11:08.735835  283294 node_ready.go:38] duration metric: took 10.203153103s for node "old-k8s-version-371039" to be "Ready" ...
	I0916 11:11:08.735861  283294 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:11:08.849815  283294 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-78djj" in "kube-system" namespace to be "Ready" ...
	I0916 11:11:09.238820  283294 pod_ready.go:93] pod "coredns-74ff55c5b-78djj" in "kube-system" namespace has status "Ready":"True"
	I0916 11:11:09.238844  283294 pod_ready.go:82] duration metric: took 388.998532ms for pod "coredns-74ff55c5b-78djj" in "kube-system" namespace to be "Ready" ...
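Each pod_ready line reports the pod's Ready condition. The check below is the stock corev1 condition lookup that such a report implies, not minikube's own wrapper:

	package readiness

	import corev1 "k8s.io/api/core/v1"

	// isPodReady mirrors the log's `has status "Ready":"True"` wording: it
	// returns true exactly when the PodReady condition is ConditionTrue.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}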
	I0916 11:11:09.238856  283294 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:11:09.244000  283294 pod_ready.go:93] pod "etcd-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:11:09.244026  283294 pod_ready.go:82] duration metric: took 5.162612ms for pod "etcd-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:11:09.244045  283294 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:11:09.923091  283294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.191954476s)
	I0916 11:11:09.923140  283294 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-371039"
	I0916 11:11:09.923195  283294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (3.699998425s)
	I0916 11:11:09.923525  283294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.630930265s)
	I0916 11:11:10.241619  283294 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.84459854s)
	I0916 11:11:10.243934  283294 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-371039 addons enable metrics-server
	
	I0916 11:11:10.245855  283294 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0916 11:11:10.247395  283294 addons.go:510] duration metric: took 11.829647418s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0916 11:11:11.249923  283294 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:13.750094  283294 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:15.751352  283294 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:11:15.751377  283294 pod_ready.go:82] duration metric: took 6.507324048s for pod "kube-apiserver-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:11:15.751388  283294 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:11:17.757111  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:20.258429  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:22.757918  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:24.758241  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:27.257831  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:29.759232  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:32.258180  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:34.258663  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:36.258970  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:38.757671  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:40.761143  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:43.257133  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:45.258494  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:47.757335  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:49.757395  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:52.257690  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:54.758372  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:57.257474  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:59.756980  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:01.757290  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:03.757341  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:05.757854  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:08.258030  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:10.757056  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:12.758511  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:15.258014  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:17.757779  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:19.758822  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:21.765067  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:24.258200  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:26.259384  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:26.758238  283294 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:26.758261  283294 pod_ready.go:82] duration metric: took 1m11.006865105s for pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:26.758271  283294 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-w2kp4" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:26.763092  283294 pod_ready.go:93] pod "kube-proxy-w2kp4" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:26.763116  283294 pod_ready.go:82] duration metric: took 4.838602ms for pod "kube-proxy-w2kp4" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:26.763128  283294 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:28.769836  283294 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:31.269772  283294 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:33.273175  283294 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:35.769915  283294 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:37.770648  283294 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:37.770673  283294 pod_ready.go:82] duration metric: took 11.007535908s for pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:37.770686  283294 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:39.777546  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:42.276706  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:44.277484  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:46.777255  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:48.785616  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:51.277044  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:53.776361  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:55.777283  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:58.276870  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:00.277679  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:02.776501  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:05.277166  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:07.775965  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:09.776234  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:11.777149  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:14.276783  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:16.776281  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:18.777270  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:21.277449  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:23.777345  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:26.275636  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:28.276756  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:30.777776  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:33.276185  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:35.276972  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:37.777152  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:39.777460  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:42.276445  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:44.277029  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:46.277361  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:48.777520  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:50.778155  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:53.277620  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:55.776792  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:57.777222  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:59.777303  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:02.276719  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:04.277992  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:06.776489  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:08.776532  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:11.277052  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:13.776954  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:15.776993  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:18.276441  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:20.277436  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:22.777551  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:25.276752  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:27.279238  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:29.776909  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:31.777240  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:34.277473  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:36.777421  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:39.276233  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:41.276681  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:43.277369  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:45.777560  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:48.277461  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:50.776400  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:52.777010  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:54.777497  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:57.277793  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:59.776570  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:02.277344  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:04.777197  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:06.777649  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:09.275890  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:11.277366  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:13.778008  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:16.276669  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:18.277253  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:20.777953  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:23.277719  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:25.776832  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:28.276972  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:30.277002  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:32.776951  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:34.777040  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:37.276482  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:39.277204  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:41.277274  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:43.776591  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:45.776744  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:47.777915  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:50.276765  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:52.776358  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:54.777156  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:57.276967  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:59.277179  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:01.277243  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:03.776353  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:06.277177  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:08.776034  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:10.777020  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:13.276155  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:15.276719  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:17.776593  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:20.277077  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:22.776926  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:24.777009  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:27.277860  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:29.776485  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:31.777071  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:34.277127  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:36.777191  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:37.777631  283294 pod_ready.go:82] duration metric: took 4m0.006929261s for pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace to be "Ready" ...
	E0916 11:16:37.777659  283294 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0916 11:16:37.777669  283294 pod_ready.go:39] duration metric: took 5m29.041786645s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
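
The four-minute stretch above is a standard deadline-bounded readiness poll: pod_ready.go re-reads the pod roughly every 2s (the interval is inferred from the timestamps above) and gives up when the context deadline expires, which is what produces the "context deadline exceeded" line for metrics-server. As a rough illustration only — not minikube's actual implementation — a minimal client-go sketch of that pattern, reusing the pod name and the /var/lib/minikube/kubeconfig path that appear in this log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path as it appears in the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Deadline-bounded poll, mirroring the 4m0s WaitExtra timeout seen above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := client.CoreV1().Pods("kube-system").
			Get(ctx, "metrics-server-9975d5f86-4f2jl", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			// Matches the deadline-exceeded error logged above.
			fmt.Println("WaitExtra: waitPodCondition:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}
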
	I0916 11:16:37.777689  283294 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:16:37.777725  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:16:37.777777  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:16:37.813044  283294 cri.go:89] found id: "b8b7a2952008364403b8b3a7f0ec5cca63e9cd244427d95edd2b992a7dff815d"
	I0916 11:16:37.813063  283294 cri.go:89] found id: ""
	I0916 11:16:37.813072  283294 logs.go:276] 1 containers: [b8b7a2952008364403b8b3a7f0ec5cca63e9cd244427d95edd2b992a7dff815d]
	I0916 11:16:37.813135  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:37.816744  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:16:37.816813  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:16:37.850919  283294 cri.go:89] found id: "6f10dd2ab1448676f85f2e14b088b767bcca2ef3807aaa62f0370cbf81e594dc"
	I0916 11:16:37.850942  283294 cri.go:89] found id: ""
	I0916 11:16:37.850950  283294 logs.go:276] 1 containers: [6f10dd2ab1448676f85f2e14b088b767bcca2ef3807aaa62f0370cbf81e594dc]
	I0916 11:16:37.850994  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:37.854457  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:16:37.854527  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:16:37.888938  283294 cri.go:89] found id: "7e01e437eafa1ace1f5ee4f4bfd2759ea78df9316412a17e9a0e8ae750c20523"
	I0916 11:16:37.888964  283294 cri.go:89] found id: ""
	I0916 11:16:37.888974  283294 logs.go:276] 1 containers: [7e01e437eafa1ace1f5ee4f4bfd2759ea78df9316412a17e9a0e8ae750c20523]
	I0916 11:16:37.889027  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:37.892481  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:16:37.892565  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:16:37.929995  283294 cri.go:89] found id: "d4e91e6acd99eff1a1f03c740be536efde73dc3441d0ce21da107f92e07c7aa8"
	I0916 11:16:37.930019  283294 cri.go:89] found id: ""
	I0916 11:16:37.930027  283294 logs.go:276] 1 containers: [d4e91e6acd99eff1a1f03c740be536efde73dc3441d0ce21da107f92e07c7aa8]
	I0916 11:16:37.930073  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:37.935700  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:16:37.935855  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:16:37.976752  283294 cri.go:89] found id: "1145c23e87dee884c7c908196bd5c7de96e94947ae439d9c145f0c4dba1630fc"
	I0916 11:16:37.976791  283294 cri.go:89] found id: ""
	I0916 11:16:37.976801  283294 logs.go:276] 1 containers: [1145c23e87dee884c7c908196bd5c7de96e94947ae439d9c145f0c4dba1630fc]
	I0916 11:16:37.976851  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:37.980760  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:16:37.980824  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:16:38.018626  283294 cri.go:89] found id: "8dedfda17aef64d94e616f000f9e226b29f8f0ee8b642400d83681bf9b9f8330"
	I0916 11:16:38.018652  283294 cri.go:89] found id: ""
	I0916 11:16:38.018663  283294 logs.go:276] 1 containers: [8dedfda17aef64d94e616f000f9e226b29f8f0ee8b642400d83681bf9b9f8330]
	I0916 11:16:38.018722  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:38.022766  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:16:38.022840  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:16:38.056878  283294 cri.go:89] found id: "e812c7a8976381cd9cfc69cfc3ec18d19286650ecfda38b6e08f3545c8c27c65"
	I0916 11:16:38.056897  283294 cri.go:89] found id: ""
	I0916 11:16:38.056904  283294 logs.go:276] 1 containers: [e812c7a8976381cd9cfc69cfc3ec18d19286650ecfda38b6e08f3545c8c27c65]
	I0916 11:16:38.056953  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:38.060382  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:16:38.060442  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:16:38.095340  283294 cri.go:89] found id: "4b7e57072db41f8a563f7872a8c96af29447848dc1694db9b8392a1b688b1de4"
	I0916 11:16:38.095365  283294 cri.go:89] found id: "b6e65b347883fafb9798059fa58b334346a8b29c61fe40422f2cf04c355fcd71"
	I0916 11:16:38.095372  283294 cri.go:89] found id: ""
	I0916 11:16:38.095380  283294 logs.go:276] 2 containers: [4b7e57072db41f8a563f7872a8c96af29447848dc1694db9b8392a1b688b1de4 b6e65b347883fafb9798059fa58b334346a8b29c61fe40422f2cf04c355fcd71]
	I0916 11:16:38.095447  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:38.099232  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:38.102484  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:16:38.102551  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:16:38.136765  283294 cri.go:89] found id: "b8894a1f49c45a030260a48d3dd12ddc81b2ba27c652688d77576697c81124e8"
	I0916 11:16:38.136790  283294 cri.go:89] found id: ""
	I0916 11:16:38.136799  283294 logs.go:276] 1 containers: [b8894a1f49c45a030260a48d3dd12ddc81b2ba27c652688d77576697c81124e8]
	I0916 11:16:38.136858  283294 ssh_runner.go:195] Run: which crictl
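
The block above is the log-collection preamble: for each control-plane component, minikube lists matching CRI containers with "crictl ps -a --quiet --name=<component>" and records the returned IDs before tailing their logs. A stand-alone sketch of that enumeration — run locally for brevity rather than through minikube's ssh_runner, and using only the flags shown in the logged commands:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Component names exactly as queried in the log above.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"storage-provisioner", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Same flags as the logged command: all states, IDs only, filtered by name.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
	}
}
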
	I0916 11:16:38.140437  283294 logs.go:123] Gathering logs for kube-apiserver [b8b7a2952008364403b8b3a7f0ec5cca63e9cd244427d95edd2b992a7dff815d] ...
	I0916 11:16:38.140461  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8b7a2952008364403b8b3a7f0ec5cca63e9cd244427d95edd2b992a7dff815d"
	I0916 11:16:38.198162  283294 logs.go:123] Gathering logs for kube-proxy [1145c23e87dee884c7c908196bd5c7de96e94947ae439d9c145f0c4dba1630fc] ...
	I0916 11:16:38.198196  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1145c23e87dee884c7c908196bd5c7de96e94947ae439d9c145f0c4dba1630fc"
	I0916 11:16:38.232396  283294 logs.go:123] Gathering logs for kubernetes-dashboard [b8894a1f49c45a030260a48d3dd12ddc81b2ba27c652688d77576697c81124e8] ...
	I0916 11:16:38.232431  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8894a1f49c45a030260a48d3dd12ddc81b2ba27c652688d77576697c81124e8"
	I0916 11:16:38.269217  283294 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:16:38.269273  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:16:38.372168  283294 logs.go:123] Gathering logs for coredns [7e01e437eafa1ace1f5ee4f4bfd2759ea78df9316412a17e9a0e8ae750c20523] ...
	I0916 11:16:38.372197  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e01e437eafa1ace1f5ee4f4bfd2759ea78df9316412a17e9a0e8ae750c20523"
	I0916 11:16:38.405501  283294 logs.go:123] Gathering logs for kube-controller-manager [8dedfda17aef64d94e616f000f9e226b29f8f0ee8b642400d83681bf9b9f8330] ...
	I0916 11:16:38.405534  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dedfda17aef64d94e616f000f9e226b29f8f0ee8b642400d83681bf9b9f8330"
	I0916 11:16:38.460683  283294 logs.go:123] Gathering logs for containerd ...
	I0916 11:16:38.460721  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:16:38.521937  283294 logs.go:123] Gathering logs for dmesg ...
	I0916 11:16:38.521975  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:16:38.545968  283294 logs.go:123] Gathering logs for storage-provisioner [4b7e57072db41f8a563f7872a8c96af29447848dc1694db9b8392a1b688b1de4] ...
	I0916 11:16:38.546005  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b7e57072db41f8a563f7872a8c96af29447848dc1694db9b8392a1b688b1de4"
	I0916 11:16:38.580893  283294 logs.go:123] Gathering logs for storage-provisioner [b6e65b347883fafb9798059fa58b334346a8b29c61fe40422f2cf04c355fcd71] ...
	I0916 11:16:38.580918  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6e65b347883fafb9798059fa58b334346a8b29c61fe40422f2cf04c355fcd71"
	I0916 11:16:38.614428  283294 logs.go:123] Gathering logs for kindnet [e812c7a8976381cd9cfc69cfc3ec18d19286650ecfda38b6e08f3545c8c27c65] ...
	I0916 11:16:38.614453  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e812c7a8976381cd9cfc69cfc3ec18d19286650ecfda38b6e08f3545c8c27c65"
	I0916 11:16:38.654390  283294 logs.go:123] Gathering logs for container status ...
	I0916 11:16:38.654427  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:16:38.692272  283294 logs.go:123] Gathering logs for kubelet ...
	I0916 11:16:38.692302  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0916 11:16:38.731349  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:08 old-k8s-version-371039 kubelet[1070]: E0916 11:11:08.526119    1070 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-371039" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-371039' and this object
	W0916 11:16:38.731520  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:08 old-k8s-version-371039 kubelet[1070]: E0916 11:11:08.526479    1070 reflector.go:138] object-"kube-system"/"kindnet-token-xjzl9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xjzl9" is forbidden: User "system:node:old-k8s-version-371039" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-371039' and this object
	W0916 11:16:38.731678  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:08 old-k8s-version-371039 kubelet[1070]: E0916 11:11:08.526594    1070 reflector.go:138] object-"kube-system"/"coredns-token-vcrsr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-vcrsr" is forbidden: User "system:node:old-k8s-version-371039" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-371039' and this object
	W0916 11:16:38.736629  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:10 old-k8s-version-371039 kubelet[1070]: E0916 11:11:10.329499    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	W0916 11:16:38.736777  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:10 old-k8s-version-371039 kubelet[1070]: E0916 11:11:10.534036    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.738562  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:29 old-k8s-version-371039 kubelet[1070]: E0916 11:11:29.636809    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.738818  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:30 old-k8s-version-371039 kubelet[1070]: E0916 11:11:30.640739    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.740585  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:33 old-k8s-version-371039 kubelet[1070]: E0916 11:11:33.064066    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	W0916 11:16:38.741255  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:40 old-k8s-version-371039 kubelet[1070]: E0916 11:11:40.667028    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.741482  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:43 old-k8s-version-371039 kubelet[1070]: E0916 11:11:43.355910    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.741956  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:49 old-k8s-version-371039 kubelet[1070]: E0916 11:11:49.245550    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.743944  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:58 old-k8s-version-371039 kubelet[1070]: E0916 11:11:58.392988    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	W0916 11:16:38.744367  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:03 old-k8s-version-371039 kubelet[1070]: E0916 11:12:03.723963    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.744606  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:09 old-k8s-version-371039 kubelet[1070]: E0916 11:12:09.246379    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.744750  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:11 old-k8s-version-371039 kubelet[1070]: E0916 11:12:11.355765    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.744986  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:20 old-k8s-version-371039 kubelet[1070]: E0916 11:12:20.355410    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.745118  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:25 old-k8s-version-371039 kubelet[1070]: E0916 11:12:25.356081    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.745366  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:32 old-k8s-version-371039 kubelet[1070]: E0916 11:12:32.355402    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.745517  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:38 old-k8s-version-371039 kubelet[1070]: E0916 11:12:38.355913    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.745945  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:44 old-k8s-version-371039 kubelet[1070]: E0916 11:12:44.821826    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.746182  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:49 old-k8s-version-371039 kubelet[1070]: E0916 11:12:49.245692    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.748175  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:50 old-k8s-version-371039 kubelet[1070]: E0916 11:12:50.378557    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	W0916 11:16:38.748435  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:01 old-k8s-version-371039 kubelet[1070]: E0916 11:13:01.355538    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.748572  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:05 old-k8s-version-371039 kubelet[1070]: E0916 11:13:05.355797    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.748825  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:16 old-k8s-version-371039 kubelet[1070]: E0916 11:13:16.355393    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.748981  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:20 old-k8s-version-371039 kubelet[1070]: E0916 11:13:20.355904    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.749251  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:27 old-k8s-version-371039 kubelet[1070]: E0916 11:13:27.355256    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.749389  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:32 old-k8s-version-371039 kubelet[1070]: E0916 11:13:32.355649    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.749628  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:41 old-k8s-version-371039 kubelet[1070]: E0916 11:13:41.355315    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.749762  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:46 old-k8s-version-371039 kubelet[1070]: E0916 11:13:46.355690    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.750052  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:55 old-k8s-version-371039 kubelet[1070]: E0916 11:13:55.355528    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.750249  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:59 old-k8s-version-371039 kubelet[1070]: E0916 11:13:59.355804    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.750706  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:08 old-k8s-version-371039 kubelet[1070]: E0916 11:14:08.004821    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.750944  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:09 old-k8s-version-371039 kubelet[1070]: E0916 11:14:09.245710    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.752864  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:14 old-k8s-version-371039 kubelet[1070]: E0916 11:14:14.388385    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	W0916 11:16:38.753258  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:21 old-k8s-version-371039 kubelet[1070]: E0916 11:14:21.355284    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.753424  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:25 old-k8s-version-371039 kubelet[1070]: E0916 11:14:25.355879    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.753662  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:34 old-k8s-version-371039 kubelet[1070]: E0916 11:14:34.355467    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.753794  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:40 old-k8s-version-371039 kubelet[1070]: E0916 11:14:40.355902    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.754077  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:49 old-k8s-version-371039 kubelet[1070]: E0916 11:14:49.355426    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.754266  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:54 old-k8s-version-371039 kubelet[1070]: E0916 11:14:54.355668    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.754609  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:00 old-k8s-version-371039 kubelet[1070]: E0916 11:15:00.355224    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.754744  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:08 old-k8s-version-371039 kubelet[1070]: E0916 11:15:08.355679    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.754979  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:12 old-k8s-version-371039 kubelet[1070]: E0916 11:15:12.355397    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.755114  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:22 old-k8s-version-371039 kubelet[1070]: E0916 11:15:22.355653    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.755355  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:25 old-k8s-version-371039 kubelet[1070]: E0916 11:15:25.355528    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.755489  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:34 old-k8s-version-371039 kubelet[1070]: E0916 11:15:34.355954    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.755723  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:38 old-k8s-version-371039 kubelet[1070]: E0916 11:15:38.355218    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.755957  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:48 old-k8s-version-371039 kubelet[1070]: E0916 11:15:48.355603    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.756205  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:49 old-k8s-version-371039 kubelet[1070]: E0916 11:15:49.355370    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.756342  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:59 old-k8s-version-371039 kubelet[1070]: E0916 11:15:59.355864    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.756582  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:03 old-k8s-version-371039 kubelet[1070]: E0916 11:16:03.355623    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.756737  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:12 old-k8s-version-371039 kubelet[1070]: E0916 11:16:12.355692    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.756988  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:14 old-k8s-version-371039 kubelet[1070]: E0916 11:16:14.355205    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.757133  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:26 old-k8s-version-371039 kubelet[1070]: E0916 11:16:26.355705    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.757384  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:29 old-k8s-version-371039 kubelet[1070]: E0916 11:16:29.355289    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	I0916 11:16:38.757401  283294 logs.go:123] Gathering logs for etcd [6f10dd2ab1448676f85f2e14b088b767bcca2ef3807aaa62f0370cbf81e594dc] ...
	I0916 11:16:38.757423  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f10dd2ab1448676f85f2e14b088b767bcca2ef3807aaa62f0370cbf81e594dc"
	I0916 11:16:38.801272  283294 logs.go:123] Gathering logs for kube-scheduler [d4e91e6acd99eff1a1f03c740be536efde73dc3441d0ce21da107f92e07c7aa8] ...
	I0916 11:16:38.801305  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4e91e6acd99eff1a1f03c740be536efde73dc3441d0ce21da107f92e07c7aa8"
	I0916 11:16:38.840644  283294 out.go:358] Setting ErrFile to fd 2...
	I0916 11:16:38.840680  283294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0916 11:16:38.840747  283294 out.go:270] X Problems detected in kubelet:
	W0916 11:16:38.840761  283294 out.go:270]   Sep 16 11:16:03 old-k8s-version-371039 kubelet[1070]: E0916 11:16:03.355623    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.840774  283294 out.go:270]   Sep 16 11:16:12 old-k8s-version-371039 kubelet[1070]: E0916 11:16:12.355692    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.840782  283294 out.go:270]   Sep 16 11:16:14 old-k8s-version-371039 kubelet[1070]: E0916 11:16:14.355205    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.840787  283294 out.go:270]   Sep 16 11:16:26 old-k8s-version-371039 kubelet[1070]: E0916 11:16:26.355705    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.840794  283294 out.go:270]   Sep 16 11:16:29 old-k8s-version-371039 kubelet[1070]: E0916 11:16:29.355289    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	I0916 11:16:38.840800  283294 out.go:358] Setting ErrFile to fd 2...
	I0916 11:16:38.840807  283294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:16:48.841972  283294 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:16:48.854415  283294 api_server.go:72] duration metric: took 5m50.436689522s to wait for apiserver process to appear ...
	I0916 11:16:48.854453  283294 api_server.go:88] waiting for apiserver healthz status ...
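
Here the wait switches from the process check (pgrep) to the apiserver's /healthz endpoint. The probe itself falls outside this excerpt, so purely as a hedged sketch, a deadline-bounded HTTPS health check might look like the following — the endpoint address is a placeholder, not taken from this log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Placeholder endpoint: the real address comes from the cluster config,
	// which is not shown in this excerpt.
	url := "https://192.168.103.2:8443/healthz"

	// The apiserver serves a self-signed certificate, so the probe skips verification.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // expect "ok"
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
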
	I0916 11:16:48.854503  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:16:48.854557  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:16:48.887775  283294 cri.go:89] found id: "b8b7a2952008364403b8b3a7f0ec5cca63e9cd244427d95edd2b992a7dff815d"
	I0916 11:16:48.887798  283294 cri.go:89] found id: ""
	I0916 11:16:48.887806  283294 logs.go:276] 1 containers: [b8b7a2952008364403b8b3a7f0ec5cca63e9cd244427d95edd2b992a7dff815d]
	I0916 11:16:48.887872  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:48.891291  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:16:48.891348  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:16:48.925752  283294 cri.go:89] found id: "6f10dd2ab1448676f85f2e14b088b767bcca2ef3807aaa62f0370cbf81e594dc"
	I0916 11:16:48.925775  283294 cri.go:89] found id: ""
	I0916 11:16:48.925785  283294 logs.go:276] 1 containers: [6f10dd2ab1448676f85f2e14b088b767bcca2ef3807aaa62f0370cbf81e594dc]
	I0916 11:16:48.925852  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:48.929766  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:16:48.929854  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:16:48.966519  283294 cri.go:89] found id: "7e01e437eafa1ace1f5ee4f4bfd2759ea78df9316412a17e9a0e8ae750c20523"
	I0916 11:16:48.966547  283294 cri.go:89] found id: ""
	I0916 11:16:48.966556  283294 logs.go:276] 1 containers: [7e01e437eafa1ace1f5ee4f4bfd2759ea78df9316412a17e9a0e8ae750c20523]
	I0916 11:16:48.966602  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:48.970103  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:16:48.970164  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:16:49.005884  283294 cri.go:89] found id: "d4e91e6acd99eff1a1f03c740be536efde73dc3441d0ce21da107f92e07c7aa8"
	I0916 11:16:49.005905  283294 cri.go:89] found id: ""
	I0916 11:16:49.005913  283294 logs.go:276] 1 containers: [d4e91e6acd99eff1a1f03c740be536efde73dc3441d0ce21da107f92e07c7aa8]
	I0916 11:16:49.005967  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:49.009476  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:16:49.009548  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:16:49.048147  283294 cri.go:89] found id: "1145c23e87dee884c7c908196bd5c7de96e94947ae439d9c145f0c4dba1630fc"
	I0916 11:16:49.048170  283294 cri.go:89] found id: ""
	I0916 11:16:49.048179  283294 logs.go:276] 1 containers: [1145c23e87dee884c7c908196bd5c7de96e94947ae439d9c145f0c4dba1630fc]
	I0916 11:16:49.048223  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:49.051960  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:16:49.052035  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:16:49.087883  283294 cri.go:89] found id: "8dedfda17aef64d94e616f000f9e226b29f8f0ee8b642400d83681bf9b9f8330"
	I0916 11:16:49.087905  283294 cri.go:89] found id: ""
	I0916 11:16:49.087915  283294 logs.go:276] 1 containers: [8dedfda17aef64d94e616f000f9e226b29f8f0ee8b642400d83681bf9b9f8330]
	I0916 11:16:49.087966  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:49.092680  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:16:49.092752  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:16:49.133282  283294 cri.go:89] found id: "e812c7a8976381cd9cfc69cfc3ec18d19286650ecfda38b6e08f3545c8c27c65"
	I0916 11:16:49.133301  283294 cri.go:89] found id: ""
	I0916 11:16:49.133307  283294 logs.go:276] 1 containers: [e812c7a8976381cd9cfc69cfc3ec18d19286650ecfda38b6e08f3545c8c27c65]
	I0916 11:16:49.133352  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:49.137155  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:16:49.137221  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:16:49.175662  283294 cri.go:89] found id: "4b7e57072db41f8a563f7872a8c96af29447848dc1694db9b8392a1b688b1de4"
	I0916 11:16:49.175685  283294 cri.go:89] found id: "b6e65b347883fafb9798059fa58b334346a8b29c61fe40422f2cf04c355fcd71"
	I0916 11:16:49.175688  283294 cri.go:89] found id: ""
	I0916 11:16:49.175696  283294 logs.go:276] 2 containers: [4b7e57072db41f8a563f7872a8c96af29447848dc1694db9b8392a1b688b1de4 b6e65b347883fafb9798059fa58b334346a8b29c61fe40422f2cf04c355fcd71]
	I0916 11:16:49.175804  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:49.179547  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:49.183406  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:16:49.183467  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:16:49.217951  283294 cri.go:89] found id: "b8894a1f49c45a030260a48d3dd12ddc81b2ba27c652688d77576697c81124e8"
	I0916 11:16:49.217971  283294 cri.go:89] found id: ""
	I0916 11:16:49.217978  283294 logs.go:276] 1 containers: [b8894a1f49c45a030260a48d3dd12ddc81b2ba27c652688d77576697c81124e8]
	I0916 11:16:49.218025  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:49.221652  283294 logs.go:123] Gathering logs for coredns [7e01e437eafa1ace1f5ee4f4bfd2759ea78df9316412a17e9a0e8ae750c20523] ...
	I0916 11:16:49.221682  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e01e437eafa1ace1f5ee4f4bfd2759ea78df9316412a17e9a0e8ae750c20523"
	I0916 11:16:49.256810  283294 logs.go:123] Gathering logs for kindnet [e812c7a8976381cd9cfc69cfc3ec18d19286650ecfda38b6e08f3545c8c27c65] ...
	I0916 11:16:49.256839  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e812c7a8976381cd9cfc69cfc3ec18d19286650ecfda38b6e08f3545c8c27c65"
	I0916 11:16:49.298643  283294 logs.go:123] Gathering logs for kubernetes-dashboard [b8894a1f49c45a030260a48d3dd12ddc81b2ba27c652688d77576697c81124e8] ...
	I0916 11:16:49.298677  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8894a1f49c45a030260a48d3dd12ddc81b2ba27c652688d77576697c81124e8"
	I0916 11:16:49.337515  283294 logs.go:123] Gathering logs for dmesg ...
	I0916 11:16:49.337541  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:16:49.362573  283294 logs.go:123] Gathering logs for etcd [6f10dd2ab1448676f85f2e14b088b767bcca2ef3807aaa62f0370cbf81e594dc] ...
	I0916 11:16:49.362605  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f10dd2ab1448676f85f2e14b088b767bcca2ef3807aaa62f0370cbf81e594dc"
	I0916 11:16:49.415419  283294 logs.go:123] Gathering logs for storage-provisioner [b6e65b347883fafb9798059fa58b334346a8b29c61fe40422f2cf04c355fcd71] ...
	I0916 11:16:49.415465  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6e65b347883fafb9798059fa58b334346a8b29c61fe40422f2cf04c355fcd71"
	I0916 11:16:49.453909  283294 logs.go:123] Gathering logs for containerd ...
	I0916 11:16:49.453940  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:16:49.526611  283294 logs.go:123] Gathering logs for container status ...
	I0916 11:16:49.526652  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:16:49.569526  283294 logs.go:123] Gathering logs for kube-apiserver [b8b7a2952008364403b8b3a7f0ec5cca63e9cd244427d95edd2b992a7dff815d] ...
	I0916 11:16:49.569559  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8b7a2952008364403b8b3a7f0ec5cca63e9cd244427d95edd2b992a7dff815d"
	I0916 11:16:49.629670  283294 logs.go:123] Gathering logs for kube-scheduler [d4e91e6acd99eff1a1f03c740be536efde73dc3441d0ce21da107f92e07c7aa8] ...
	I0916 11:16:49.629707  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4e91e6acd99eff1a1f03c740be536efde73dc3441d0ce21da107f92e07c7aa8"
	I0916 11:16:49.668253  283294 logs.go:123] Gathering logs for kubelet ...
	I0916 11:16:49.668288  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0916 11:16:49.704632  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:08 old-k8s-version-371039 kubelet[1070]: E0916 11:11:08.526119    1070 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-371039" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-371039' and this object
	W0916 11:16:49.704824  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:08 old-k8s-version-371039 kubelet[1070]: E0916 11:11:08.526479    1070 reflector.go:138] object-"kube-system"/"kindnet-token-xjzl9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xjzl9" is forbidden: User "system:node:old-k8s-version-371039" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-371039' and this object
	W0916 11:16:49.704981  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:08 old-k8s-version-371039 kubelet[1070]: E0916 11:11:08.526594    1070 reflector.go:138] object-"kube-system"/"coredns-token-vcrsr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-vcrsr" is forbidden: User "system:node:old-k8s-version-371039" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-371039' and this object
	W0916 11:16:49.709915  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:10 old-k8s-version-371039 kubelet[1070]: E0916 11:11:10.329499    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	W0916 11:16:49.710072  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:10 old-k8s-version-371039 kubelet[1070]: E0916 11:11:10.534036    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.712496  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:29 old-k8s-version-371039 kubelet[1070]: E0916 11:11:29.636809    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.712894  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:30 old-k8s-version-371039 kubelet[1070]: E0916 11:11:30.640739    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.714887  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:33 old-k8s-version-371039 kubelet[1070]: E0916 11:11:33.064066    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	W0916 11:16:49.715709  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:40 old-k8s-version-371039 kubelet[1070]: E0916 11:11:40.667028    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.715976  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:43 old-k8s-version-371039 kubelet[1070]: E0916 11:11:43.355910    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.716468  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:49 old-k8s-version-371039 kubelet[1070]: E0916 11:11:49.245550    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.719090  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:58 old-k8s-version-371039 kubelet[1070]: E0916 11:11:58.392988    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	W0916 11:16:49.719537  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:03 old-k8s-version-371039 kubelet[1070]: E0916 11:12:03.723963    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.719810  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:09 old-k8s-version-371039 kubelet[1070]: E0916 11:12:09.246379    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.719944  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:11 old-k8s-version-371039 kubelet[1070]: E0916 11:12:11.355765    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.720183  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:20 old-k8s-version-371039 kubelet[1070]: E0916 11:12:20.355410    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.720318  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:25 old-k8s-version-371039 kubelet[1070]: E0916 11:12:25.356081    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.720556  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:32 old-k8s-version-371039 kubelet[1070]: E0916 11:12:32.355402    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.720689  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:38 old-k8s-version-371039 kubelet[1070]: E0916 11:12:38.355913    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.721127  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:44 old-k8s-version-371039 kubelet[1070]: E0916 11:12:44.821826    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.721486  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:49 old-k8s-version-371039 kubelet[1070]: E0916 11:12:49.245692    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.723353  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:50 old-k8s-version-371039 kubelet[1070]: E0916 11:12:50.378557    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	W0916 11:16:49.723596  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:01 old-k8s-version-371039 kubelet[1070]: E0916 11:13:01.355538    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.723730  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:05 old-k8s-version-371039 kubelet[1070]: E0916 11:13:05.355797    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.723988  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:16 old-k8s-version-371039 kubelet[1070]: E0916 11:13:16.355393    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.724139  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:20 old-k8s-version-371039 kubelet[1070]: E0916 11:13:20.355904    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.724476  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:27 old-k8s-version-371039 kubelet[1070]: E0916 11:13:27.355256    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.724715  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:32 old-k8s-version-371039 kubelet[1070]: E0916 11:13:32.355649    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.725114  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:41 old-k8s-version-371039 kubelet[1070]: E0916 11:13:41.355315    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.725332  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:46 old-k8s-version-371039 kubelet[1070]: E0916 11:13:46.355690    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.725653  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:55 old-k8s-version-371039 kubelet[1070]: E0916 11:13:55.355528    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.725815  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:59 old-k8s-version-371039 kubelet[1070]: E0916 11:13:59.355804    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.726242  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:08 old-k8s-version-371039 kubelet[1070]: E0916 11:14:08.004821    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.726482  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:09 old-k8s-version-371039 kubelet[1070]: E0916 11:14:09.245710    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.728362  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:14 old-k8s-version-371039 kubelet[1070]: E0916 11:14:14.388385    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	W0916 11:16:49.728608  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:21 old-k8s-version-371039 kubelet[1070]: E0916 11:14:21.355284    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.728743  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:25 old-k8s-version-371039 kubelet[1070]: E0916 11:14:25.355879    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.728994  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:34 old-k8s-version-371039 kubelet[1070]: E0916 11:14:34.355467    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.729196  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:40 old-k8s-version-371039 kubelet[1070]: E0916 11:14:40.355902    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.729565  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:49 old-k8s-version-371039 kubelet[1070]: E0916 11:14:49.355426    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.729731  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:54 old-k8s-version-371039 kubelet[1070]: E0916 11:14:54.355668    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.729999  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:00 old-k8s-version-371039 kubelet[1070]: E0916 11:15:00.355224    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.730144  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:08 old-k8s-version-371039 kubelet[1070]: E0916 11:15:08.355679    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.730420  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:12 old-k8s-version-371039 kubelet[1070]: E0916 11:15:12.355397    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.730571  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:22 old-k8s-version-371039 kubelet[1070]: E0916 11:15:22.355653    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.730823  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:25 old-k8s-version-371039 kubelet[1070]: E0916 11:15:25.355528    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.730969  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:34 old-k8s-version-371039 kubelet[1070]: E0916 11:15:34.355954    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.731246  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:38 old-k8s-version-371039 kubelet[1070]: E0916 11:15:38.355218    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.731411  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:48 old-k8s-version-371039 kubelet[1070]: E0916 11:15:48.355603    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.731689  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:49 old-k8s-version-371039 kubelet[1070]: E0916 11:15:49.355370    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.731924  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:59 old-k8s-version-371039 kubelet[1070]: E0916 11:15:59.355864    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.732262  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:03 old-k8s-version-371039 kubelet[1070]: E0916 11:16:03.355623    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.732475  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:12 old-k8s-version-371039 kubelet[1070]: E0916 11:16:12.355692    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.732826  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:14 old-k8s-version-371039 kubelet[1070]: E0916 11:16:14.355205    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.732995  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:26 old-k8s-version-371039 kubelet[1070]: E0916 11:16:26.355705    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.733308  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:29 old-k8s-version-371039 kubelet[1070]: E0916 11:16:29.355289    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.733524  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:39 old-k8s-version-371039 kubelet[1070]: E0916 11:16:39.355981    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.733800  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:40 old-k8s-version-371039 kubelet[1070]: E0916 11:16:40.355189    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	I0916 11:16:49.733819  283294 logs.go:123] Gathering logs for kube-controller-manager [8dedfda17aef64d94e616f000f9e226b29f8f0ee8b642400d83681bf9b9f8330] ...
	I0916 11:16:49.733838  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dedfda17aef64d94e616f000f9e226b29f8f0ee8b642400d83681bf9b9f8330"
	I0916 11:16:49.795448  283294 logs.go:123] Gathering logs for storage-provisioner [4b7e57072db41f8a563f7872a8c96af29447848dc1694db9b8392a1b688b1de4] ...
	I0916 11:16:49.795488  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b7e57072db41f8a563f7872a8c96af29447848dc1694db9b8392a1b688b1de4"
	I0916 11:16:49.838368  283294 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:16:49.838402  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:16:49.954855  283294 logs.go:123] Gathering logs for kube-proxy [1145c23e87dee884c7c908196bd5c7de96e94947ae439d9c145f0c4dba1630fc] ...
	I0916 11:16:49.954891  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1145c23e87dee884c7c908196bd5c7de96e94947ae439d9c145f0c4dba1630fc"
	I0916 11:16:49.994029  283294 out.go:358] Setting ErrFile to fd 2...
	I0916 11:16:49.994058  283294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0916 11:16:49.994134  283294 out.go:270] X Problems detected in kubelet:
	W0916 11:16:49.994150  283294 out.go:270]   Sep 16 11:16:14 old-k8s-version-371039 kubelet[1070]: E0916 11:16:14.355205    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.994160  283294 out.go:270]   Sep 16 11:16:26 old-k8s-version-371039 kubelet[1070]: E0916 11:16:26.355705    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.994173  283294 out.go:270]   Sep 16 11:16:29 old-k8s-version-371039 kubelet[1070]: E0916 11:16:29.355289    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:49.994337  283294 out.go:270]   Sep 16 11:16:39 old-k8s-version-371039 kubelet[1070]: E0916 11:16:39.355981    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:49.994352  283294 out.go:270]   Sep 16 11:16:40 old-k8s-version-371039 kubelet[1070]: E0916 11:16:40.355189    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	I0916 11:16:49.994375  283294 out.go:358] Setting ErrFile to fd 2...
	I0916 11:16:49.994385  283294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:16:59.995885  283294 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:17:00.001438  283294 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:17:00.004070  283294 out.go:201] 
	W0916 11:17:00.005556  283294 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0916 11:17:00.005591  283294 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0916 11:17:00.005611  283294 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0916 11:17:00.005618  283294 out.go:270] * 
	W0916 11:17:00.006550  283294 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 11:17:00.008068  283294 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-371039 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
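For local triage, the failing invocation can be replayed by hand. A minimal sketch, assuming a checkout with the out/minikube-linux-amd64 binary already built and a Docker daemon available; the profile name and every flag are copied from the failed args above, and the cleanup step follows the suggestion printed in the log:

	# Clear stale state first, per the log's own suggestion.
	out/minikube-linux-amd64 delete --all --purge
	# Replay the exact start command the test ran; exit status 102 corresponds
	# to the K8S_UNHEALTHY_CONTROL_PLANE error above.
	out/minikube-linux-amd64 start -p old-k8s-version-371039 \
	  --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false \
	  --driver=docker --container-runtime=containerd \
	  --kubernetes-version=v1.20.0
	echo "exit status: $?"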
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-371039
helpers_test.go:235: (dbg) docker inspect old-k8s-version-371039:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23",
	        "Created": "2024-09-16T11:08:26.808717426Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283619,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:10:47.182625379Z",
	            "FinishedAt": "2024-09-16T11:10:46.3068422Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/hosts",
	        "LogPath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23-json.log",
	        "Name": "/old-k8s-version-371039",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-371039:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-371039",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-371039",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-371039/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-371039",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-371039",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-371039",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edb89f5d0f1b14778bc6503c7122826ccde192142507f982d72042ac23f8d31f",
	            "SandboxKey": "/var/run/docker/netns/edb89f5d0f1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-371039": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "617bc0338b3b0f6ed38b0b21b091e38e1d6c95398d3e053128c978435134833f",
	                    "EndpointID": "5143d6e6c759ce273e967be48af66c83d163fb8c953ab200f9e3b0c27528cf34",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-371039",
	                        "9e01fb8ba8f9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
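Most of the inspect dump above is boilerplate; the fields the post-mortem actually relies on (container state, restart count, and the profile's static IP) can be extracted directly with a Go template. A minimal sketch, assuming the container from this run still exists on the host:

	# For the dump captured above this prints: running 0 192.168.103.2
	docker inspect -f '{{.State.Status}} {{.RestartCount}} {{(index .NetworkSettings.Networks "old-k8s-version-371039").IPAddress}}' old-k8s-version-371039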
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-371039 -n old-k8s-version-371039
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-371039 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-371039 logs -n 25: (3.153431488s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p default-k8s-diff-port-006978       | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:13 UTC | 16 Sep 24 11:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:13 UTC |                     |
	|         | default-k8s-diff-port-006978                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | no-preload-349453 image list                           | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	| delete  | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	| start   | -p newest-cni-802652 --memory=2200 --alsologtostderr   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-802652             | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-802652                                   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-802652                  | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-802652 --memory=2200 --alsologtostderr   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:15 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-802652 image list                           | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-802652                                   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-802652                                   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-802652                                   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	| delete  | -p newest-cni-802652                                   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	| start   | -p auto-771611 --memory=3072                           | auto-771611                  | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:16 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | -p auto-771611 pgrep -a                                | auto-771611                  | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| image   | embed-certs-679624 image list                          | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	| delete  | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	| start   | -p kindnet-771611                                      | kindnet-771611               | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
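The Audit table above is minikube's own record of recent commands, rendered into the minikube logs output. A small sketch of reading that record directly; the path ($MINIKUBE_HOME/logs/audit.json, line-delimited JSON) and the field names below are assumptions about the on-disk layout, not a documented contract, so check them against your minikube version:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// auditRow mirrors the columns of the rendered table above; the JSON
	// tags are assumed, not taken from minikube's source.
	type auditRow struct {
		Command   string `json:"command"`
		Args      string `json:"args"`
		Profile   string `json:"profile"`
		User      string `json:"user"`
		Version   string `json:"version"`
		StartTime string `json:"startTime"`
		EndTime   string `json:"endTime"`
	}

	func main() {
		f, err := os.Open(os.ExpandEnv("$HOME/.minikube/logs/audit.json"))
		if err != nil {
			fmt.Println(err)
			return
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			var r auditRow
			if json.Unmarshal(sc.Bytes(), &r) == nil && r.Command != "" {
				fmt.Printf("%-8s %-30s %s\n", r.Command, r.Profile, r.StartTime)
			}
		}
	}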
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:16:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:16:56.933321  332275 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:16:56.933419  332275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:16:56.933425  332275 out.go:358] Setting ErrFile to fd 2...
	I0916 11:16:56.933429  332275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:16:56.933652  332275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:16:56.934241  332275 out.go:352] Setting JSON to false
	I0916 11:16:56.935579  332275 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3561,"bootTime":1726481856,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:16:56.935690  332275 start.go:139] virtualization: kvm guest
	I0916 11:16:56.937954  332275 out.go:177] * [kindnet-771611] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:16:56.939413  332275 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:16:56.939464  332275 notify.go:220] Checking for updates...
	I0916 11:16:56.942255  332275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:16:56.943874  332275 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:16:56.945387  332275 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:16:56.946730  332275 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:16:56.948073  332275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:16:56.949882  332275 config.go:182] Loaded profile config "auto-771611": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:16:56.949998  332275 config.go:182] Loaded profile config "default-k8s-diff-port-006978": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:16:56.950102  332275 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:16:56.950214  332275 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:16:56.974378  332275 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:16:56.974499  332275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:16:57.023146  332275 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:84 SystemTime:2024-09-16 11:16:57.012761712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:16:57.023252  332275 docker.go:318] overlay module found
	I0916 11:16:57.024846  332275 out.go:177] * Using the docker driver based on user configuration
	I0916 11:16:57.026028  332275 start.go:297] selected driver: docker
	I0916 11:16:57.026046  332275 start.go:901] validating driver "docker" against <nil>
	I0916 11:16:57.026060  332275 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:16:57.026962  332275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:16:57.077056  332275 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:84 SystemTime:2024-09-16 11:16:57.067870315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:16:57.077199  332275 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:16:57.077430  332275 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:16:57.079092  332275 out.go:177] * Using Docker driver with root privileges
	I0916 11:16:57.080508  332275 cni.go:84] Creating CNI manager for "kindnet"
	I0916 11:16:57.080531  332275 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:16:57.080611  332275 start.go:340] cluster config:
	{Name:kindnet-771611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:16:57.082117  332275 out.go:177] * Starting "kindnet-771611" primary control-plane node in "kindnet-771611" cluster
	I0916 11:16:57.083542  332275 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:16:57.084962  332275 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:16:57.086118  332275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:16:57.086161  332275 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 11:16:57.086171  332275 cache.go:56] Caching tarball of preloaded images
	I0916 11:16:57.086260  332275 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 11:16:57.086257  332275 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:16:57.086274  332275 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 11:16:57.086401  332275 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/config.json ...
	I0916 11:16:57.086426  332275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/config.json: {Name:mk85d1c52f772c780df10ed18ec6ee82497f4665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:16:57.107611  332275 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:16:57.107628  332275 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:16:57.107698  332275 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:16:57.107713  332275 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:16:57.107717  332275 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:16:57.107724  332275 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:16:57.107730  332275 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:16:57.166277  332275 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:16:57.166320  332275 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:16:57.166353  332275 start.go:360] acquireMachinesLock for kindnet-771611: {Name:mk5409d440397cb7d3d0472cf5d14b2bfbc751d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:16:57.166453  332275 start.go:364] duration metric: took 80.88µs to acquireMachinesLock for "kindnet-771611"
	I0916 11:16:57.166477  332275 start.go:93] Provisioning new machine with config: &{Name:kindnet-771611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:16:57.166565  332275 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:16:56.129710  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:58.629215  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:59.995885  283294 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:17:00.001438  283294 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:17:00.004070  283294 out.go:201] 
	W0916 11:17:00.005556  283294 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0916 11:17:00.005591  283294 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0916 11:17:00.005611  283294 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0916 11:17:00.005618  283294 out.go:270] * 
	W0916 11:17:00.006550  283294 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 11:17:00.008068  283294 out.go:201] 
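This is the actual failure point of the test: the apiserver's /healthz answers 200 on 192.168.103.2:8443, but the control plane never reports the requested v1.20.0, so after the 6m0s wait the start exits with K8S_UNHEALTHY_CONTROL_PLANE. A minimal sketch of that probe pair; minikube's api_server.go (visible in the lines above) does this with retries against the cluster CA, and the InsecureSkipVerify below is only to keep the sketch self-contained:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		c := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		}}
		// /healthz is what returned "200: ok" above; /version is where the
		// expected v1.20.0 would have to show up.
		for _, path := range []string{"/healthz", "/version"} {
			resp, err := c.Get("https://192.168.103.2:8443" + path)
			if err != nil {
				fmt.Println(path, "error:", err)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
		}
	}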
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	069b3bead1bc3       523cad1a4df73       6 seconds ago       Exited              dashboard-metrics-scraper   6                   7270699f18202       dashboard-metrics-scraper-8d5bb5db8-7m6td
	4b7e57072db41       6e38f40d628db       5 minutes ago       Running             storage-provisioner         1                   93b8c8cd7b4b6       storage-provisioner
	b8894a1f49c45       07655ddf2eebe       5 minutes ago       Running             kubernetes-dashboard        0                   bd5ff588c3d01       kubernetes-dashboard-cd95d586-9sr9v
	e812c7a897638       12968670680f4       5 minutes ago       Running             kindnet-cni                 0                   f0a3b63ad532f       kindnet-txszz
	7e01e437eafa1       bfe3a36ebd252       5 minutes ago       Running             coredns                     0                   88f52d4838bfd       coredns-74ff55c5b-78djj
	b6e65b347883f       6e38f40d628db       5 minutes ago       Exited              storage-provisioner         0                   93b8c8cd7b4b6       storage-provisioner
	1145c23e87dee       10cc881966cfd       5 minutes ago       Running             kube-proxy                  0                   d0c6dbe1595c1       kube-proxy-w2kp4
	8dedfda17aef6       b9fa1895dcaa6       5 minutes ago       Running             kube-controller-manager     0                   b225af49e9834       kube-controller-manager-old-k8s-version-371039
	d4e91e6acd99e       3138b6e3d4712       5 minutes ago       Running             kube-scheduler              0                   24563282539af       kube-scheduler-old-k8s-version-371039
	b8b7a29520083       ca9843d3b5454       5 minutes ago       Running             kube-apiserver              0                   5e272dc6257d1       kube-apiserver-old-k8s-version-371039
	6f10dd2ab1448       0369cf4303ffd       5 minutes ago       Running             etcd                        0                   830e4d974c2e1       etcd-old-k8s-version-371039
	
	
	==> containerd <==
	Sep 16 11:12:50 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:12:50.376445634Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" host=fake.domain
	Sep 16 11:12:50 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:12:50.378007334Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 16 11:12:50 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:12:50.378013767Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 16 11:14:07 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:07.357731669Z" level=info msg="CreateContainer within sandbox \"7270699f18202a68b3cbfaeec615880970ca87013f8548b8f1c66ace9d6d464d\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Sep 16 11:14:07 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:07.370708306Z" level=info msg="CreateContainer within sandbox \"7270699f18202a68b3cbfaeec615880970ca87013f8548b8f1c66ace9d6d464d\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571\""
	Sep 16 11:14:07 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:07.371345476Z" level=info msg="StartContainer for \"35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571\""
	Sep 16 11:14:07 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:07.435930874Z" level=info msg="StartContainer for \"35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571\" returns successfully"
	Sep 16 11:14:07 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:07.470709851Z" level=info msg="shim disconnected" id=35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571 namespace=k8s.io
	Sep 16 11:14:07 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:07.470789162Z" level=warning msg="cleaning up after shim disconnected" id=35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571 namespace=k8s.io
	Sep 16 11:14:07 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:07.470805125Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 11:14:08 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:08.005210445Z" level=info msg="RemoveContainer for \"501fec326410d790b90fb4a561dbbab9cb5e9dcc6ee52f2f601a8885550b154f\""
	Sep 16 11:14:08 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:08.010435003Z" level=info msg="RemoveContainer for \"501fec326410d790b90fb4a561dbbab9cb5e9dcc6ee52f2f601a8885550b154f\" returns successfully"
	Sep 16 11:14:14 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:14.355942813Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:14:14 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:14.386362237Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" host=fake.domain
	Sep 16 11:14:14 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:14.387842439Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 16 11:14:14 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:14.387881509Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 16 11:16:54 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:54.357509929Z" level=info msg="CreateContainer within sandbox \"7270699f18202a68b3cbfaeec615880970ca87013f8548b8f1c66ace9d6d464d\" for container name:\"dashboard-metrics-scraper\" attempt:6"
	Sep 16 11:16:54 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:54.368387979Z" level=info msg="CreateContainer within sandbox \"7270699f18202a68b3cbfaeec615880970ca87013f8548b8f1c66ace9d6d464d\" for name:\"dashboard-metrics-scraper\" attempt:6 returns container id \"069b3bead1bc30b9d5eb89b2d9db4d250c2257e24b86e1b134c8dfb0382854f9\""
	Sep 16 11:16:54 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:54.368930848Z" level=info msg="StartContainer for \"069b3bead1bc30b9d5eb89b2d9db4d250c2257e24b86e1b134c8dfb0382854f9\""
	Sep 16 11:16:54 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:54.432707800Z" level=info msg="StartContainer for \"069b3bead1bc30b9d5eb89b2d9db4d250c2257e24b86e1b134c8dfb0382854f9\" returns successfully"
	Sep 16 11:16:54 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:54.468458251Z" level=info msg="shim disconnected" id=069b3bead1bc30b9d5eb89b2d9db4d250c2257e24b86e1b134c8dfb0382854f9 namespace=k8s.io
	Sep 16 11:16:54 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:54.468521639Z" level=warning msg="cleaning up after shim disconnected" id=069b3bead1bc30b9d5eb89b2d9db4d250c2257e24b86e1b134c8dfb0382854f9 namespace=k8s.io
	Sep 16 11:16:54 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:54.468534100Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 11:16:55 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:55.338587831Z" level=info msg="RemoveContainer for \"35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571\""
	Sep 16 11:16:55 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:55.342908393Z" level=info msg="RemoveContainer for \"35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571\" returns successfully"
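Every PullImage failure in the containerd log above reduces to one unresolvable name: the metrics-server addon image was registered against fake.domain (the same --registries=MetricsServer=fake.domain flag appears in the Audit table), so the lookup is expected to fail and the image can never arrive. A one-file reproduction of just that lookup:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Same failure mode as containerd's "dial tcp: lookup fake.domain
		// ... no such host" lines above.
		if addrs, err := net.LookupHost("fake.domain"); err != nil {
			fmt.Println("lookup failed as expected:", err)
		} else {
			fmt.Println("unexpectedly resolved:", addrs)
		}
	}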
	
	
	==> coredns [7e01e437eafa1ace1f5ee4f4bfd2759ea78df9316412a17e9a0e8ae750c20523] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:49206 - 19492 "HINFO IN 2568215532487827892.8058846988098566839. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014231723s
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:35595 - 5399 "HINFO IN 2418305322430051184.4287842096554552965. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01004514s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-371039
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-371039
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=old-k8s-version-371039
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_08_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:08:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-371039
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:16:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:16:39 +0000   Mon, 16 Sep 2024 11:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:16:39 +0000   Mon, 16 Sep 2024 11:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:16:39 +0000   Mon, 16 Sep 2024 11:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:16:39 +0000   Mon, 16 Sep 2024 11:09:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-371039
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ae9c883339d4b4e909ea43ab97b9195
	  System UUID:                5a808ec9-2d43-4212-9e81-7580afba2fbc
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-74ff55c5b-78djj                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m49s
	  kube-system                 etcd-old-k8s-version-371039                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m59s
	  kube-system                 kindnet-txszz                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m49s
	  kube-system                 kube-apiserver-old-k8s-version-371039             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-controller-manager-old-k8s-version-371039    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-proxy-w2kp4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-scheduler-old-k8s-version-371039             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 metrics-server-9975d5f86-4f2jl                    100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         6m22s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-7m6td         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-9sr9v               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 8m15s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m15s (x5 over 8m15s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m15s (x4 over 8m15s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m15s (x3 over 8m15s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m15s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 8m                     kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m                     kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m                     kubelet     Node old-k8s-version-371039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m                     kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m59s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m50s                  kubelet     Node old-k8s-version-371039 status is now: NodeReady
	  Normal  Starting                 7m48s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m58s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m58s (x8 over 5m58s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m58s (x7 over 5m58s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m58s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m51s                  kube-proxy  Starting kube-proxy.
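The Allocated resources block above is plain arithmetic over the pod table: CPU requests of 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (apiserver) + 200m (controller-manager) + 100m (scheduler) + 100m (metrics-server) come to 950m, and 950m of the node's 8000m allocatable is 11%. The same sum, spelled out:

	package main

	import "fmt"

	func main() {
		// CPU requests in millicores, in pod-table order (zero-request
		// pods omitted).
		requests := []int{100, 100, 100, 250, 200, 100, 100}
		total := 0
		for _, r := range requests {
			total += r
		}
		allocatable := 8 * 1000 // 8 CPUs
		fmt.Printf("%dm of %dm = %d%%\n", total, allocatable, total*100/allocatable)
	}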
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +1.024015] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000007] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000005] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000001] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +2.015813] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000006] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +4.063624] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000004] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +8.191266] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000006] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	
	
	==> etcd [6f10dd2ab1448676f85f2e14b088b767bcca2ef3807aaa62f0370cbf81e594dc] <==
	2024-09-16 11:12:52.151904 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:13:02.151987 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:13:12.151941 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:13:22.152319 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:13:32.151944 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:13:42.151914 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:13:52.151984 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:14:02.151938 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:14:12.151969 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:14:22.151968 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:14:32.151948 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:14:42.152139 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:14:52.151958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:15:02.152175 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:15:12.152124 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:15:22.152313 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:15:32.152521 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:15:42.152307 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:15:52.152157 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:16:02.151916 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:16:12.151936 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:16:22.151998 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:16:32.151834 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:16:42.152195 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:16:52.152029 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 11:17:01 up 59 min,  0 users,  load average: 1.67, 2.42, 2.18
	Linux old-k8s-version-371039 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [e812c7a8976381cd9cfc69cfc3ec18d19286650ecfda38b6e08f3545c8c27c65] <==
	I0916 11:14:53.849511       1 main.go:299] handling current node
	I0916 11:15:03.847878       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:15:03.847927       1 main.go:299] handling current node
	I0916 11:15:13.841080       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:15:13.841123       1 main.go:299] handling current node
	I0916 11:15:23.840912       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:15:23.840971       1 main.go:299] handling current node
	I0916 11:15:33.847873       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:15:33.847922       1 main.go:299] handling current node
	I0916 11:15:43.848168       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:15:43.848200       1 main.go:299] handling current node
	I0916 11:15:53.849561       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:15:53.849604       1 main.go:299] handling current node
	I0916 11:16:03.849195       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:16:03.849234       1 main.go:299] handling current node
	I0916 11:16:13.841357       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:16:13.841391       1 main.go:299] handling current node
	I0916 11:16:23.847851       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:16:23.847885       1 main.go:299] handling current node
	I0916 11:16:33.841045       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:16:33.841083       1 main.go:299] handling current node
	I0916 11:16:43.840919       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:16:43.840966       1 main.go:299] handling current node
	I0916 11:16:53.841545       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:16:53.841604       1 main.go:299] handling current node
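The kindnet section is the healthy baseline in this dump: one reconcile pass every ten seconds that lists node IPs and, with a single node, just logs that it is handling the current node. A skeleton of that loop shape (the real daemon is kindnetd with a Kubernetes node lister; the ticker and hard-coded IP below are stand-ins):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		tick := time.NewTicker(10 * time.Second)
		defer tick.Stop()
		for i := 0; i < 3; i++ { // bounded here; the daemon loops forever
			<-tick.C
			nodes := map[string]struct{}{"192.168.103.2": {}} // stand-in node lister
			for ip := range nodes {
				fmt.Println("Handling node with IP:", ip)
			}
		}
	}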
	
	
	==> kube-apiserver [b8b7a2952008364403b8b3a7f0ec5cca63e9cd244427d95edd2b992a7dff815d] <==
	I0916 11:13:15.275823       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:13:15.275834       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:13:52.795838       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:13:52.795882       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:13:52.795890       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0916 11:14:11.149775       1 handler_proxy.go:102] no RequestInfo found in the context
	E0916 11:14:11.149847       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0916 11:14:11.149855       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:14:25.124806       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:14:25.124851       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:14:25.124871       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:15:09.927590       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:15:09.927653       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:15:09.927664       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:15:41.597147       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:15:41.597188       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:15:41.597195       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0916 11:16:09.524614       1 handler_proxy.go:102] no RequestInfo found in the context
	E0916 11:16:09.524684       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0916 11:16:09.524692       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:16:26.215211       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:16:26.215257       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:16:26.215266       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [8dedfda17aef64d94e616f000f9e226b29f8f0ee8b642400d83681bf9b9f8330] <==
	E0916 11:12:57.526990       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:13:03.801581       1 request.go:655] Throttling request took 1.048545814s, request: GET:https://192.168.103.2:8443/apis/policy/v1beta1?timeout=32s
	W0916 11:13:04.652762       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:13:28.028551       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:13:36.303211       1 request.go:655] Throttling request took 1.048431296s, request: GET:https://192.168.103.2:8443/apis/node.k8s.io/v1?timeout=32s
	W0916 11:13:37.154412       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:13:58.530051       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:14:08.804527       1 request.go:655] Throttling request took 1.048482129s, request: GET:https://192.168.103.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W0916 11:14:09.655557       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:14:29.032133       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:14:41.305844       1 request.go:655] Throttling request took 1.048724819s, request: GET:https://192.168.103.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0916 11:14:42.157112       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:14:59.534104       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:15:13.807004       1 request.go:655] Throttling request took 1.048695794s, request: GET:https://192.168.103.2:8443/apis/apiregistration.k8s.io/v1?timeout=32s
	W0916 11:15:14.658651       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:15:30.036352       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:15:46.308676       1 request.go:655] Throttling request took 1.048425709s, request: GET:https://192.168.103.2:8443/apis/apps/v1?timeout=32s
	W0916 11:15:47.159901       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:16:00.538130       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:16:18.810155       1 request.go:655] Throttling request took 1.048608181s, request: GET:https://192.168.103.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0916 11:16:19.661202       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:16:31.039982       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:16:51.311292       1 request.go:655] Throttling request took 1.048464492s, request: GET:https://192.168.103.2:8443/apis/events.k8s.io/v1beta1?timeout=32s
	W0916 11:16:52.162356       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:17:01.541679       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [1145c23e87dee884c7c908196bd5c7de96e94947ae439d9c145f0c4dba1630fc] <==
	I0916 11:09:13.322536       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0916 11:09:13.322732       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0916 11:09:13.345840       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 11:09:13.345951       1 server_others.go:185] Using iptables Proxier.
	I0916 11:09:13.346284       1 server.go:650] Version: v1.20.0
	I0916 11:09:13.347687       1 config.go:315] Starting service config controller
	I0916 11:09:13.349932       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 11:09:13.347841       1 config.go:224] Starting endpoint slice config controller
	I0916 11:09:13.420415       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 11:09:13.420676       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0916 11:09:13.450370       1 shared_informer.go:247] Caches are synced for service config 
	I0916 11:11:10.475965       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0916 11:11:10.476023       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0916 11:11:10.493087       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 11:11:10.493201       1 server_others.go:185] Using iptables Proxier.
	I0916 11:11:10.493552       1 server.go:650] Version: v1.20.0
	I0916 11:11:10.494094       1 config.go:224] Starting endpoint slice config controller
	I0916 11:11:10.494117       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 11:11:10.494542       1 config.go:315] Starting service config controller
	I0916 11:11:10.494837       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 11:11:10.594963       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0916 11:11:10.595513       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [d4e91e6acd99eff1a1f03c740be536efde73dc3441d0ce21da107f92e07c7aa8] <==
	E0916 11:08:53.447999       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:08:53.448116       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:08:53.448269       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:08:53.448478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:53.448860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:08:53.448864       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:08:53.449019       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:08:53.449164       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:08:53.450105       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:08:53.450247       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:08:54.410511       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:08:54.433702       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:08:54.472291       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:08:54.592362       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0916 11:08:56.246004       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0916 11:11:05.236372       1 serving.go:331] Generated self-signed cert in-memory
	W0916 11:11:08.422847       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:11:08.422945       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:11:08.423021       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:11:08.423069       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:11:08.539159       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:11:08.539206       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:11:08.539648       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0916 11:11:08.539706       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0916 11:11:08.639920       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 16 11:15:34 old-k8s-version-371039 kubelet[1070]: E0916 11:15:34.355954    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:15:38 old-k8s-version-371039 kubelet[1070]: I0916 11:15:38.354928    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571
	Sep 16 11:15:38 old-k8s-version-371039 kubelet[1070]: E0916 11:15:38.355218    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	Sep 16 11:15:48 old-k8s-version-371039 kubelet[1070]: E0916 11:15:48.355603    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:15:49 old-k8s-version-371039 kubelet[1070]: I0916 11:15:49.355008    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571
	Sep 16 11:15:49 old-k8s-version-371039 kubelet[1070]: E0916 11:15:49.355370    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	Sep 16 11:15:59 old-k8s-version-371039 kubelet[1070]: E0916 11:15:59.355864    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:16:03 old-k8s-version-371039 kubelet[1070]: I0916 11:16:03.355196    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571
	Sep 16 11:16:03 old-k8s-version-371039 kubelet[1070]: E0916 11:16:03.355623    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	Sep 16 11:16:12 old-k8s-version-371039 kubelet[1070]: E0916 11:16:12.355692    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:16:14 old-k8s-version-371039 kubelet[1070]: I0916 11:16:14.354920    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571
	Sep 16 11:16:14 old-k8s-version-371039 kubelet[1070]: E0916 11:16:14.355205    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	Sep 16 11:16:26 old-k8s-version-371039 kubelet[1070]: E0916 11:16:26.355705    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:16:29 old-k8s-version-371039 kubelet[1070]: I0916 11:16:29.355001    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571
	Sep 16 11:16:29 old-k8s-version-371039 kubelet[1070]: E0916 11:16:29.355289    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	Sep 16 11:16:39 old-k8s-version-371039 kubelet[1070]: E0916 11:16:39.355981    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:16:40 old-k8s-version-371039 kubelet[1070]: I0916 11:16:40.354912    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571
	Sep 16 11:16:40 old-k8s-version-371039 kubelet[1070]: E0916 11:16:40.355189    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	Sep 16 11:16:53 old-k8s-version-371039 kubelet[1070]: E0916 11:16:53.355923    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:16:54 old-k8s-version-371039 kubelet[1070]: I0916 11:16:54.354979    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571
	Sep 16 11:16:55 old-k8s-version-371039 kubelet[1070]: I0916 11:16:55.337389    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571
	Sep 16 11:16:55 old-k8s-version-371039 kubelet[1070]: I0916 11:16:55.337750    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 069b3bead1bc30b9d5eb89b2d9db4d250c2257e24b86e1b134c8dfb0382854f9
	Sep 16 11:16:55 old-k8s-version-371039 kubelet[1070]: E0916 11:16:55.338092    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	Sep 16 11:16:59 old-k8s-version-371039 kubelet[1070]: I0916 11:16:59.245269    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 069b3bead1bc30b9d5eb89b2d9db4d250c2257e24b86e1b134c8dfb0382854f9
	Sep 16 11:16:59 old-k8s-version-371039 kubelet[1070]: E0916 11:16:59.245633    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	
	
	==> kubernetes-dashboard [b8894a1f49c45a030260a48d3dd12ddc81b2ba27c652688d77576697c81124e8] <==
	2024/09/16 11:11:33 Using namespace: kubernetes-dashboard
	2024/09/16 11:11:33 Using in-cluster config to connect to apiserver
	2024/09/16 11:11:33 Using secret token for csrf signing
	2024/09/16 11:11:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 11:11:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 11:11:33 Successful initial request to the apiserver, version: v1.20.0
	2024/09/16 11:11:33 Generating JWE encryption key
	2024/09/16 11:11:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 11:11:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 11:11:33 Initializing JWE encryption key from synchronized object
	2024/09/16 11:11:33 Creating in-cluster Sidecar client
	2024/09/16 11:11:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:11:33 Serving insecurely on HTTP port: 9090
	2024/09/16 11:12:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:12:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:13:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:13:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:14:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:14:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:15:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:15:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:16:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:16:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:11:33 Starting overwatch
	
	
	==> storage-provisioner [4b7e57072db41f8a563f7872a8c96af29447848dc1694db9b8392a1b688b1de4] <==
	I0916 11:11:40.848372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:11:40.891731       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:11:40.894031       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:11:58.320853       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:11:58.321666       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-371039_cd48ed3a-7bb7-4816-ad2b-a773b03c9c79!
	I0916 11:11:58.321659       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3df43ad2-abd4-4d32-b26b-91fa0eea8673", APIVersion:"v1", ResourceVersion:"800", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-371039_cd48ed3a-7bb7-4816-ad2b-a773b03c9c79 became leader
	I0916 11:11:58.423431       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-371039_cd48ed3a-7bb7-4816-ad2b-a773b03c9c79!
	
	
	==> storage-provisioner [b6e65b347883fafb9798059fa58b334346a8b29c61fe40422f2cf04c355fcd71] <==
	I0916 11:09:13.972762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:09:13.980679       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:09:13.980724       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:09:13.987659       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:09:13.987719       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3df43ad2-abd4-4d32-b26b-91fa0eea8673", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-371039_ba54aacd-67bb-46f0-b4ba-d01754da79ef became leader
	I0916 11:09:13.987846       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-371039_ba54aacd-67bb-46f0-b4ba-d01754da79ef!
	I0916 11:09:14.088020       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-371039_ba54aacd-67bb-46f0-b4ba-d01754da79ef!
	I0916 11:11:10.425227       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 11:11:40.437499       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
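The CrashLoopBackOff durations in the kubelet section above follow kubelet's restart back-off policy: the delay starts at 10s, doubles after each failed restart, and is capped at 5m, which is why the dashboard-metrics-scraper pod moves from "back-off 2m40s" to "back-off 5m0s". A minimal Go sketch of that progression, assuming the default 10s base and 5m cap (an illustration, not part of the test suite):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Kubelet's crash-loop delay: 10s base, doubled per failed restart,
		// capped at 5m (assumed defaults; matches the 2m40s -> 5m0s lines above).
		backoff := 10 * time.Second
		limit := 5 * time.Minute
		for i := 1; i <= 7; i++ {
			fmt.Printf("restart %d: back-off %s\n", i, backoff)
			backoff *= 2
			if backoff > limit {
				backoff = limit
			}
		}
	}

Running it prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s, matching the capped progression seen in the log.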
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-371039 -n old-k8s-version-371039
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-371039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context old-k8s-version-371039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (1.01973ms)
helpers_test.go:263: kubectl --context old-k8s-version-371039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (377.16s)
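The repeated "fork/exec /usr/local/bin/kubectl: exec format error" above is the kernel's ENOEXEC: the kubectl binary installed on the runner is not executable on this host's architecture, so every kubectl-driven assertion fails the same way (the DeployApp failures below hit the identical error). A minimal Go sketch of how such a failure can be told apart from an ordinary non-zero exit, assuming the same binary path the harness uses (a diagnostic probe, not part of the test suite):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"syscall"
	)

	func main() {
		// Run the same kubectl binary the harness invokes; fork/exec failures
		// surface as a path error wrapping the underlying errno.
		out, err := exec.Command("/usr/local/bin/kubectl", "version", "--client").CombinedOutput()
		switch {
		case errors.Is(err, syscall.ENOEXEC):
			fmt.Println("exec format error: kubectl was built for a different architecture than this host")
		case err != nil:
			fmt.Printf("kubectl failed: %v\n%s", err, out)
		default:
			fmt.Printf("kubectl OK:\n%s", out)
		}
	}

On the affected host, comparing `file /usr/local/bin/kubectl` against `uname -m` would show the architecture mismatch directly.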

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (3.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-679624 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-679624 create -f testdata/busybox.yaml: fork/exec /usr/local/bin/kubectl: exec format error (951.625µs)
start_stop_delete_test.go:196: kubectl --context embed-certs-679624 create -f testdata/busybox.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-679624
helpers_test.go:235: (dbg) docker inspect embed-certs-679624:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01",
	        "Created": "2024-09-16T11:11:24.339291508Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 289930,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:11:24.472248835Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/hostname",
	        "HostsPath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/hosts",
	        "LogPath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01-json.log",
	        "Name": "/embed-certs-679624",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-679624:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-679624",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-679624",
	                "Source": "/var/lib/docker/volumes/embed-certs-679624/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-679624",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-679624",
	                "name.minikube.sigs.k8s.io": "embed-certs-679624",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ff60825c25c0c32e46c9786671ffef996b2342a731555808d9dc885e9b8cac8e",
	            "SandboxKey": "/var/run/docker/netns/ff60825c25c0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-679624": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "5c8d67185b352feb5e2b0195e3f409fe6cf79bd750730cb6897291fef1a3c3d7",
	                    "EndpointID": "dddf70084024b7c890e66e96d6c39e3f3c7ed4ae631ca39642acb6c9b79a1c44",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-679624",
	                        "8a143ceb3281"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-679624 -n embed-certs-679624
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-679624 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-679624 logs -n 25: (1.226046404s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-021107                              | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-917705                           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048 --force-systemd                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-846070                               | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-846070                            | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| stop    | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p cert-options-840054                                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-917705                              | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-917705                           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:10 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| ssh     | cert-options-840054 ssh                                | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-840054 -- sudo                         | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-840054                                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:09 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-349453             | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-349453                  | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-371039        | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-371039                              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-371039             | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-021107                              | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-021107                              | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	| start   | -p embed-certs-679624                                  | embed-certs-679624        | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:12 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:11:18
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
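
	The trace below uses this klog header on every line. A minimal Go sketch (illustrative, not minikube code) that splits one such line into its fields:

	    package main

	    import (
	        "fmt"
	        "regexp"
	    )

	    // klogLine is an illustrative pattern for the header format documented above:
	    // [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ :]+):(\d+)\] (.*)$`)

	    func main() {
	        line := "I0916 11:11:18.856155  288908 out.go:345] Setting OutFile to fd 1 ..."
	        if m := klogLine.FindStringSubmatch(line); m != nil {
	            // m[1]=severity m[2]=mmdd m[3]=time m[4]=thread id m[5]=file m[6]=line m[7]=message
	            fmt.Printf("%s %s %s tid=%s %s:%s %q\n", m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	        }
	    }
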
	I0916 11:11:18.856155  288908 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:11:18.856262  288908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:11:18.856269  288908 out.go:358] Setting ErrFile to fd 2...
	I0916 11:11:18.856274  288908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:11:18.856461  288908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:11:18.857036  288908 out.go:352] Setting JSON to false
	I0916 11:11:18.858346  288908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3223,"bootTime":1726481856,"procs":348,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:11:18.858451  288908 start.go:139] virtualization: kvm guest
	I0916 11:11:18.860470  288908 out.go:177] * [embed-certs-679624] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:11:18.862286  288908 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:11:18.862325  288908 notify.go:220] Checking for updates...
	I0916 11:11:18.864825  288908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:11:18.865999  288908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:11:18.867166  288908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:11:18.868600  288908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:11:18.870074  288908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:11:18.871834  288908 config.go:182] Loaded profile config "kubernetes-upgrade-311911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:11:18.871944  288908 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:11:18.872024  288908 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:11:18.872127  288908 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:11:18.894405  288908 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:11:18.894515  288908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:11:18.948949  288908 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:11:18.937344705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:11:18.949132  288908 docker.go:318] overlay module found
	I0916 11:11:18.950939  288908 out.go:177] * Using the docker driver based on user configuration
	I0916 11:11:18.952281  288908 start.go:297] selected driver: docker
	I0916 11:11:18.952313  288908 start.go:901] validating driver "docker" against <nil>
	I0916 11:11:18.952331  288908 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:11:18.953507  288908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:11:19.001625  288908 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:11:18.99185584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:11:19.001804  288908 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:11:19.002056  288908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
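
	The VerifyComponents map above is the set of readiness checks this start will block on. A minimal sketch of such a wait loop, with assumed checker functions (names are illustrative, not minikube's API; imports: fmt, time):

	    // waitForComponents polls every check until all pass or the deadline expires.
	    // Each key mirrors an entry in the map above (apiserver, kubelet, ...).
	    func waitForComponents(checks map[string]func() bool, interval, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for {
	            var pending []string
	            for name, ok := range checks {
	                if !ok() {
	                    pending = append(pending, name)
	                }
	            }
	            if len(pending) == 0 {
	                return nil // every component reports ready
	            }
	            if time.Now().After(deadline) {
	                return fmt.Errorf("timed out waiting for components: %v", pending)
	            }
	            time.Sleep(interval)
	        }
	    }
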
	I0916 11:11:19.003908  288908 out.go:177] * Using Docker driver with root privileges
	I0916 11:11:19.005402  288908 cni.go:84] Creating CNI manager for ""
	I0916 11:11:19.005465  288908 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:11:19.005479  288908 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:11:19.005564  288908 start.go:340] cluster config:
	{Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
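
	The cni.go lines above record the CNI decision: with the docker driver and the containerd runtime there is no built-in pod network, so kindnet is recommended and NetworkPlugin is set to cni. A hedged sketch of that decision (function and names are illustrative, not minikube's actual API):

	    // chooseCNI mirrors the message logged at cni.go:143 above:
	    // the "docker" driver plus the "containerd" runtime picks kindnet.
	    func chooseCNI(driver, runtime string) string {
	        if driver == "docker" && runtime == "containerd" {
	            return "kindnet" // KIC containers need a pod network; kindnet is the recommended default
	        }
	        return "" // other combinations fall back to the driver's or runtime's default
	    }
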
	I0916 11:11:19.007384  288908 out.go:177] * Starting "embed-certs-679624" primary control-plane node in "embed-certs-679624" cluster
	I0916 11:11:19.009150  288908 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:11:19.010840  288908 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:11:19.012215  288908 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:11:19.012278  288908 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 11:11:19.012297  288908 cache.go:56] Caching tarball of preloaded images
	I0916 11:11:19.012311  288908 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:11:19.012483  288908 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 11:11:19.012514  288908 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 11:11:19.012637  288908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/config.json ...
	I0916 11:11:19.012667  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/config.json: {Name:mk779755db7fc6d270e9404ca4b6e4963d78e149 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:11:19.033306  288908 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:11:19.033331  288908 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:11:19.033415  288908 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:11:19.033429  288908 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:11:19.033435  288908 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:11:19.033442  288908 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:11:19.033458  288908 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:11:19.086983  288908 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:11:19.087029  288908 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:11:19.087070  288908 start.go:360] acquireMachinesLock for embed-certs-679624: {Name:mk5c5a1695ab7bba9827e17eb437dd80adf4e091 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:11:19.087184  288908 start.go:364] duration metric: took 93.132µs to acquireMachinesLock for "embed-certs-679624"
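
	The acquireMachinesLock lines above show a lock spec of Delay:500ms Timeout:10m0s; here acquisition took ~93µs because the lock was uncontended. A minimal sketch of that retry-until-deadline pattern (assumed names, not minikube's lock type; imports: fmt, time):

	    // acquireWithRetry calls tryLock every delay until it succeeds or timeout elapses,
	    // matching the {Delay:500ms Timeout:10m0s} spec in the log above.
	    func acquireWithRetry(tryLock func() bool, delay, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for !tryLock() {
	            if time.Now().After(deadline) {
	                return fmt.Errorf("lock not acquired within %s", timeout)
	            }
	            time.Sleep(delay)
	        }
	        return nil
	    }
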
	I0916 11:11:19.087215  288908 start.go:93] Provisioning new machine with config: &{Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:11:19.087341  288908 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:11:17.757111  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:20.258429  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:17.707064  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:11:17.707097  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:11:17.745431  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:17.745460  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:17.807745  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:17.807796  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:17.841462  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:17.841493  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:17.927928  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:17.927966  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:17.951261  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:17.951305  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:18.013608  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:18.013640  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:18.013660  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:20.558195  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:20.558623  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
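
	Each "Checking apiserver healthz" line above is an HTTPS GET against /healthz that fails with "connection refused" while the apiserver container is down. A minimal sketch of such a probe (illustrative only; the insecure TLS config reflects probing a self-signed local apiserver; imports: crypto/tls, net/http, time):

	    // apiserverHealthy reports whether GET <url> returns 200 OK.
	    func apiserverHealthy(url string) bool {
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        resp, err := client.Get(url)
	        if err != nil {
	            return false // "connect: connection refused" lands here while the apiserver restarts
	        }
	        defer resp.Body.Close()
	        return resp.StatusCode == http.StatusOK
	    }

	At this point in the log, apiserverHealthy("https://192.168.76.2:8443/healthz") would return false.
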
	I0916 11:11:20.558677  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:20.558734  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:20.595321  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:20.595346  254463 cri.go:89] found id: ""
	I0916 11:11:20.595355  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:20.595413  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:20.599420  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:20.599497  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:20.641184  254463 cri.go:89] found id: ""
	I0916 11:11:20.641211  254463 logs.go:276] 0 containers: []
	W0916 11:11:20.641223  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:20.641232  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:20.641292  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:20.682399  254463 cri.go:89] found id: ""
	I0916 11:11:20.682431  254463 logs.go:276] 0 containers: []
	W0916 11:11:20.682443  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:20.682451  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:20.682516  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:20.721644  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:20.721669  254463 cri.go:89] found id: ""
	I0916 11:11:20.721678  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:20.721731  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:20.725651  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:20.725724  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:20.767294  254463 cri.go:89] found id: ""
	I0916 11:11:20.767321  254463 logs.go:276] 0 containers: []
	W0916 11:11:20.767329  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:20.767335  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:20.767382  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:20.801830  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:20.801855  254463 cri.go:89] found id: ""
	I0916 11:11:20.801865  254463 logs.go:276] 1 containers: [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:20.801922  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:20.805407  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:20.805482  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:20.840869  254463 cri.go:89] found id: ""
	I0916 11:11:20.840900  254463 logs.go:276] 0 containers: []
	W0916 11:11:20.840912  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:20.840919  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:20.840979  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:20.878195  254463 cri.go:89] found id: ""
	I0916 11:11:20.878221  254463 logs.go:276] 0 containers: []
	W0916 11:11:20.878229  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:20.878237  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:20.878248  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
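
	The "container status" command above prefers crictl and falls back to docker ps when crictl is absent or fails. An approximate sketch of that preference in Go (assumed helper, not minikube's code; imports: os/exec):

	    // containerStatusArgs picks crictl when it is on PATH, mirroring the
	    // `which crictl || echo crictl` / `|| sudo docker ps -a` fallback above.
	    func containerStatusArgs() []string {
	        if _, err := exec.LookPath("crictl"); err == nil {
	            return []string{"sudo", "crictl", "ps", "-a"}
	        }
	        return []string{"sudo", "docker", "ps", "-a"}
	    }
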
	I0916 11:11:20.925361  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:20.925388  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:21.019564  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:21.019600  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:21.048676  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:21.048723  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:21.112999  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:21.113033  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:21.113051  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:21.154086  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:21.154114  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:21.235856  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:21.235897  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:21.278612  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:21.278650  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:19.238965  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:21.239025  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:23.738819  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:19.090071  288908 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:11:19.090308  288908 start.go:159] libmachine.API.Create for "embed-certs-679624" (driver="docker")
	I0916 11:11:19.090338  288908 client.go:168] LocalClient.Create starting
	I0916 11:11:19.090401  288908 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 11:11:19.090431  288908 main.go:141] libmachine: Decoding PEM data...
	I0916 11:11:19.090448  288908 main.go:141] libmachine: Parsing certificate...
	I0916 11:11:19.090505  288908 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 11:11:19.090523  288908 main.go:141] libmachine: Decoding PEM data...
	I0916 11:11:19.090534  288908 main.go:141] libmachine: Parsing certificate...
	I0916 11:11:19.090850  288908 cli_runner.go:164] Run: docker network inspect embed-certs-679624 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:11:19.107706  288908 cli_runner.go:211] docker network inspect embed-certs-679624 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:11:19.107836  288908 network_create.go:284] running [docker network inspect embed-certs-679624] to gather additional debugging logs...
	I0916 11:11:19.107862  288908 cli_runner.go:164] Run: docker network inspect embed-certs-679624
	W0916 11:11:19.124412  288908 cli_runner.go:211] docker network inspect embed-certs-679624 returned with exit code 1
	I0916 11:11:19.124439  288908 network_create.go:287] error running [docker network inspect embed-certs-679624]: docker network inspect embed-certs-679624: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-679624 not found
	I0916 11:11:19.124466  288908 network_create.go:289] output of [docker network inspect embed-certs-679624]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-679624 not found
	
	** /stderr **
	I0916 11:11:19.124580  288908 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:11:19.142536  288908 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c95c64bb41bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bd:76:ab:c4} reservation:<nil>}
	I0916 11:11:19.143504  288908 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fad43aa9929b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:3f:61:fd} reservation:<nil>}
	I0916 11:11:19.144458  288908 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-49585fce923a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ad:45:94:54} reservation:<nil>}
	I0916 11:11:19.145163  288908 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-45dc384def28 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:95:3e:48:c3} reservation:<nil>}
	I0916 11:11:19.146136  288908 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cbec20}
	I0916 11:11:19.146158  288908 network_create.go:124] attempt to create docker network embed-certs-679624 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0916 11:11:19.146211  288908 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-679624 embed-certs-679624
	I0916 11:11:19.210275  288908 network_create.go:108] docker network embed-certs-679624 192.168.85.0/24 created
	I0916 11:11:19.210306  288908 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-679624" container
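
	The network.go lines above scan 192.168.49.0/24, 58, 67 and 76 (all taken by existing bridges) and settle on 192.168.85.0/24: the third octet advances by 9 per attempt, as the sequence in the log shows. A sketch of that scan (the upper bound and the taken() check are assumptions; imports: fmt):

	    // firstFreeSubnet walks candidate /24s starting at 192.168.49.0/24,
	    // stepping the third octet by 9, and returns the first unclaimed one.
	    func firstFreeSubnet(taken func(cidr string) bool) string {
	        for third := 49; third <= 247; third += 9 { // 49, 58, 67, 76, 85, ...
	            cidr := fmt.Sprintf("192.168.%d.0/24", third)
	            if !taken(cidr) {
	                return cidr
	            }
	        }
	        return "" // no free private subnet in this range
	    }
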
	I0916 11:11:19.210356  288908 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:11:19.227600  288908 cli_runner.go:164] Run: docker volume create embed-certs-679624 --label name.minikube.sigs.k8s.io=embed-certs-679624 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:11:19.245579  288908 oci.go:103] Successfully created a docker volume embed-certs-679624
	I0916 11:11:19.245640  288908 cli_runner.go:164] Run: docker run --rm --name embed-certs-679624-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-679624 --entrypoint /usr/bin/test -v embed-certs-679624:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:11:19.757598  288908 oci.go:107] Successfully prepared a docker volume embed-certs-679624
	I0916 11:11:19.757638  288908 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:11:19.757655  288908 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:11:19.757735  288908 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-679624:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
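
	The docker run above extracts the preload tarball inside the kicbase image, so the host needs no lz4 tooling: the tarball is mounted read-only and tar -I lz4 unpacks it into the machine volume. The same invocation as a Go exec sketch (assumed wrapper function; imports: fmt, os/exec):

	    // extractPreload runs tar inside the kicbase image, with the tarball
	    // mounted read-only and the machine volume as the extraction target.
	    func extractPreload(tarball, volume, kicbaseImage string) error {
	        cmd := exec.Command("docker", "run", "--rm",
	            "--entrypoint", "/usr/bin/tar",
	            "-v", tarball+":/preloaded.tar:ro",
	            "-v", volume+":/extractDir",
	            kicbaseImage,
	            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	        if out, err := cmd.CombinedOutput(); err != nil {
	            return fmt.Errorf("extract preload: %v: %s", err, out)
	        }
	        return nil
	    }
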
	I0916 11:11:22.757918  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:24.758241  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:23.825300  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:23.825689  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:23.825738  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:23.825786  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:23.859216  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:23.859235  254463 cri.go:89] found id: ""
	I0916 11:11:23.859242  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:23.859286  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:23.862764  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:23.862821  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:23.895042  254463 cri.go:89] found id: ""
	I0916 11:11:23.895069  254463 logs.go:276] 0 containers: []
	W0916 11:11:23.895078  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:23.895084  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:23.895139  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:23.926804  254463 cri.go:89] found id: ""
	I0916 11:11:23.926829  254463 logs.go:276] 0 containers: []
	W0916 11:11:23.926842  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:23.926850  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:23.926897  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:23.961138  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:23.961159  254463 cri.go:89] found id: ""
	I0916 11:11:23.961166  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:23.961218  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:23.964777  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:23.964842  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:24.007913  254463 cri.go:89] found id: ""
	I0916 11:11:24.007939  254463 logs.go:276] 0 containers: []
	W0916 11:11:24.007951  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:24.007959  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:24.008029  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:24.049372  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:24.049444  254463 cri.go:89] found id: ""
	I0916 11:11:24.049460  254463 logs.go:276] 1 containers: [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:24.049523  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:24.054045  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:24.054127  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:24.093835  254463 cri.go:89] found id: ""
	I0916 11:11:24.093864  254463 logs.go:276] 0 containers: []
	W0916 11:11:24.093875  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:24.093883  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:24.093939  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:24.129861  254463 cri.go:89] found id: ""
	I0916 11:11:24.129888  254463 logs.go:276] 0 containers: []
	W0916 11:11:24.129896  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:24.129904  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:24.129916  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:24.179039  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:24.179086  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:24.218126  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:24.218159  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:24.318420  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:24.318456  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:24.349622  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:24.349663  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:24.429380  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:24.429415  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:24.429433  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:24.468570  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:24.468615  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:24.557739  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:24.557776  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:27.098528  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:27.098979  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:27.099032  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:27.099086  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:27.135416  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:27.135437  254463 cri.go:89] found id: ""
	I0916 11:11:27.135444  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:27.135489  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:27.138909  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:27.138973  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:27.177050  254463 cri.go:89] found id: ""
	I0916 11:11:27.177080  254463 logs.go:276] 0 containers: []
	W0916 11:11:27.177091  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:27.177099  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:27.177160  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:27.212036  254463 cri.go:89] found id: ""
	I0916 11:11:27.212061  254463 logs.go:276] 0 containers: []
	W0916 11:11:27.212073  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:27.212081  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:27.212136  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:27.251569  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:27.251590  254463 cri.go:89] found id: ""
	I0916 11:11:27.251598  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:27.251651  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:27.258394  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:27.258463  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:27.296919  254463 cri.go:89] found id: ""
	I0916 11:11:27.296950  254463 logs.go:276] 0 containers: []
	W0916 11:11:27.296960  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:27.296965  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:27.297023  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:27.335315  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:27.335334  254463 cri.go:89] found id: ""
	I0916 11:11:27.335342  254463 logs.go:276] 1 containers: [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:27.335384  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:27.338919  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:27.338984  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:27.375852  254463 cri.go:89] found id: ""
	I0916 11:11:27.375877  254463 logs.go:276] 0 containers: []
	W0916 11:11:27.375890  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:27.375905  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:27.375963  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:27.413862  254463 cri.go:89] found id: ""
	I0916 11:11:27.413883  254463 logs.go:276] 0 containers: []
	W0916 11:11:27.413891  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:27.413899  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:27.413909  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:27.526092  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:27.526127  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:27.550647  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:27.550682  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:27.620133  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:27.620156  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:27.620170  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:27.665894  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:27.665929  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:25.739512  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:28.239069  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:24.264807  288908 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-679624:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.506989871s)
	I0916 11:11:24.264850  288908 kic.go:203] duration metric: took 4.507189916s to extract preloaded images to volume ...
	W0916 11:11:24.265015  288908 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:11:24.265175  288908 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:11:24.316681  288908 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-679624 --name embed-certs-679624 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-679624 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-679624 --network embed-certs-679624 --ip 192.168.85.2 --volume embed-certs-679624:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:11:24.669712  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Running}}
	I0916 11:11:24.689977  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:24.710159  288908 cli_runner.go:164] Run: docker exec embed-certs-679624 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:11:24.751713  288908 oci.go:144] the created container "embed-certs-679624" has a running status.
	I0916 11:11:24.751782  288908 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa...
	I0916 11:11:24.870719  288908 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:11:24.897688  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:24.915975  288908 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:11:24.915999  288908 kic_runner.go:114] Args: [docker exec --privileged embed-certs-679624 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:11:24.973386  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:24.992710  288908 machine.go:93] provisionDockerMachine start ...
	I0916 11:11:24.992788  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:25.013373  288908 main.go:141] libmachine: Using SSH client type: native
	I0916 11:11:25.013666  288908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0916 11:11:25.013688  288908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:11:25.014308  288908 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45610->127.0.0.1:33078: read: connection reset by peer
	I0916 11:11:28.148063  288908 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-679624
	
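	The handshake above failed once with "connection reset by peer" while sshd in the freshly started container was still coming up; the next attempt succeeded. A rough sketch of a dial-with-retry loop in that spirit (illustrative only; the real client retries the SSH handshake itself; imports: net, time):

	    // dialWithRetry retries a TCP dial a few times, tolerating the brief
	    // window in which the container's sshd is not yet accepting connections.
	    func dialWithRetry(addr string, attempts int, delay time.Duration) (net.Conn, error) {
	        var err error
	        for i := 0; i < attempts; i++ {
	            var c net.Conn
	            if c, err = net.DialTimeout("tcp", addr, 5*time.Second); err == nil {
	                return c, nil
	            }
	            time.Sleep(delay) // e.g. resets/refusals while sshd starts
	        }
	        return nil, err
	    }
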
	I0916 11:11:28.148089  288908 ubuntu.go:169] provisioning hostname "embed-certs-679624"
	I0916 11:11:28.148161  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:28.169027  288908 main.go:141] libmachine: Using SSH client type: native
	I0916 11:11:28.169265  288908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0916 11:11:28.169282  288908 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-679624 && echo "embed-certs-679624" | sudo tee /etc/hostname
	I0916 11:11:28.355513  288908 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-679624
	
	I0916 11:11:28.355629  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:28.374039  288908 main.go:141] libmachine: Using SSH client type: native
	I0916 11:11:28.374264  288908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0916 11:11:28.374294  288908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-679624' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-679624/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-679624' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:11:28.508073  288908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:11:28.508100  288908 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:11:28.508138  288908 ubuntu.go:177] setting up certificates
	I0916 11:11:28.508156  288908 provision.go:84] configureAuth start
	I0916 11:11:28.508223  288908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-679624
	I0916 11:11:28.529363  288908 provision.go:143] copyHostCerts
	I0916 11:11:28.529425  288908 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:11:28.529444  288908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:11:28.529506  288908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:11:28.529605  288908 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:11:28.529616  288908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:11:28.529646  288908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:11:28.529753  288908 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:11:28.529767  288908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:11:28.529800  288908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:11:28.529884  288908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.embed-certs-679624 san=[127.0.0.1 192.168.85.2 embed-certs-679624 localhost minikube]
	I0916 11:11:28.660139  288908 provision.go:177] copyRemoteCerts
	I0916 11:11:28.660207  288908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:11:28.660257  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:28.686030  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:28.781031  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:11:28.805291  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0916 11:11:28.828019  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:11:28.852211  288908 provision.go:87] duration metric: took 344.043242ms to configureAuth
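configureAuth generates a server certificate whose SANs match the list logged above (127.0.0.1, 192.168.85.2, the hostname, localhost, minikube), signed by the pre-existing CA. A minimal crypto/x509 sketch of issuing such a cert; loading caCert/caKey from the .pem files is elided, and the validity period is an illustrative assumption.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// issueServerCert returns DER-encoded cert bytes plus the new private key.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-679624"}}, // org from the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed lifetime
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs exactly as logged in the "generating server cert" line.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"embed-certs-679624", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

The resulting server.pem/server-key.pem pair is what copyRemoteCerts then scp's into /etc/docker on the node, as the next lines show.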
	I0916 11:11:28.852237  288908 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:11:28.852389  288908 config.go:182] Loaded profile config "embed-certs-679624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:11:28.852399  288908 machine.go:96] duration metric: took 3.859669611s to provisionDockerMachine
	I0916 11:11:28.852422  288908 client.go:171] duration metric: took 9.762061004s to LocalClient.Create
	I0916 11:11:28.852442  288908 start.go:167] duration metric: took 9.762135091s to libmachine.API.Create "embed-certs-679624"
	I0916 11:11:28.852450  288908 start.go:293] postStartSetup for "embed-certs-679624" (driver="docker")
	I0916 11:11:28.852458  288908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:11:28.852498  288908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:11:28.852531  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:28.870309  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:28.965110  288908 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:11:28.968523  288908 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:11:28.968563  288908 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:11:28.968575  288908 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:11:28.968583  288908 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:11:28.968596  288908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:11:28.968713  288908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:11:28.968785  288908 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:11:28.968871  288908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:11:28.977835  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:11:29.001876  288908 start.go:296] duration metric: took 149.414216ms for postStartSetup
	I0916 11:11:29.002250  288908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-679624
	I0916 11:11:29.019869  288908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/config.json ...
	I0916 11:11:29.020153  288908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:11:29.020205  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:29.038049  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:29.128967  288908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:11:29.133547  288908 start.go:128] duration metric: took 10.046188671s to createHost
	I0916 11:11:29.133576  288908 start.go:83] releasing machines lock for "embed-certs-679624", held for 10.046377271s
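The "releasing machines lock ... held for 10.04s" line shows that host creation is serialized under a process-wide lock. As a rough illustration only (not minikube's lock implementation), an advisory flock(2)-style lock on a well-known path would behave the same way; Linux-only sketch, with path and function names invented for the example.

package main

import (
	"os"
	"syscall"
)

// withMachinesLock holds an exclusive advisory lock on path while fn runs.
func withMachinesLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	// Blocks until the exclusive lock is acquired.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	return fn()
}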
	I0916 11:11:29.133643  288908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-679624
	I0916 11:11:29.152662  288908 ssh_runner.go:195] Run: cat /version.json
	I0916 11:11:29.152692  288908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:11:29.152722  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:29.152762  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:29.171183  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:29.171187  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:29.263485  288908 ssh_runner.go:195] Run: systemctl --version
	I0916 11:11:29.342939  288908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:11:29.347342  288908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:11:29.371959  288908 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:11:29.372033  288908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:11:29.398988  288908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
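The find/mv pipeline above sidelines any pre-baked bridge or podman CNI config by renaming it with a ".mk_disabled" suffix, so only the CNI minikube installs later is active. An equivalent sketch in Go, under the same assumption as find -maxdepth 1 (top-level files only); the function name is invented for illustration.

package main

import (
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames bridge/podman configs in dir aside and
// returns the paths it moved, mirroring the logged shell step.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}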
	I0916 11:11:29.399013  288908 start.go:495] detecting cgroup driver to use...
	I0916 11:11:29.399046  288908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:11:29.399095  288908 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:11:29.410609  288908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:11:29.422113  288908 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:11:29.422178  288908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:11:29.436056  288908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:11:29.449916  288908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:11:29.528110  288908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:11:29.607390  288908 docker.go:233] disabling docker service ...
	I0916 11:11:29.607457  288908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:11:29.625383  288908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:11:29.637734  288908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:11:29.715467  288908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:11:29.796841  288908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:11:29.807894  288908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:11:29.824334  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:11:29.834092  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:11:29.845179  288908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:11:29.845243  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:11:29.854840  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:11:29.864202  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:11:29.873608  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:11:29.883253  288908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:11:29.892391  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:11:29.901723  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:11:29.910902  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 11:11:29.920511  288908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:11:29.928496  288908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:11:29.937029  288908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:11:30.021638  288908 ssh_runner.go:195] Run: sudo systemctl restart containerd
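The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false (since the host uses the cgroupfs driver), normalize the runc runtime type, and re-enable unprivileged ports, then daemon-reload and restart containerd. A Go sketch of two of those edits using line-anchored regexes; illustrative only, since the real flow shells out to sed exactly as logged.

package main

import (
	"os"
	"regexp"
)

// patchContainerdConfig applies sed-equivalent edits to config.toml.
func patchContainerdConfig(path string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|
	b = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAll(b, []byte(`${1}SystemdCgroup = false`))
	// s|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|
	b = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAll(b, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10"`))
	return os.WriteFile(path, b, 0o644)
}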
	I0916 11:11:30.130291  288908 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:11:30.130362  288908 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:11:30.134196  288908 start.go:563] Will wait 60s for crictl version
	I0916 11:11:30.134260  288908 ssh_runner.go:195] Run: which crictl
	I0916 11:11:30.137609  288908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:11:30.170590  288908 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:11:30.170645  288908 ssh_runner.go:195] Run: containerd --version
	I0916 11:11:30.192976  288908 ssh_runner.go:195] Run: containerd --version
	I0916 11:11:30.217368  288908 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:11:27.257831  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:29.759232  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:30.218805  288908 cli_runner.go:164] Run: docker network inspect embed-certs-679624 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:11:30.236609  288908 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0916 11:11:30.240710  288908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:11:30.251608  288908 kubeadm.go:883] updating cluster {Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:11:30.251732  288908 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:11:30.251856  288908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:11:30.289360  288908 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:11:30.289390  288908 containerd.go:534] Images already preloaded, skipping extraction
	I0916 11:11:30.289443  288908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:11:30.322306  288908 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:11:30.322325  288908 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:11:30.322332  288908 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I0916 11:11:30.322410  288908 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-679624 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
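The kubelet systemd drop-in logged above is rendered from the node parameters (Kubernetes version, node name, node IP). A text/template sketch that reproduces it; the template text and field names are assumptions for illustration, not minikube's actual templates.

package main

import (
	"os"
	"text/template"
)

const kubeletUnit = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the "updating node" line above.
	_ = tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.31.1", "embed-certs-679624", "192.168.85.2"})
}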
	I0916 11:11:30.322458  288908 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:11:30.357287  288908 cni.go:84] Creating CNI manager for ""
	I0916 11:11:30.357313  288908 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:11:30.357328  288908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:11:30.357356  288908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-679624 NodeName:embed-certs-679624 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:11:30.357533  288908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-679624"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
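The generated kubeadm.yaml above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick structural sanity check is to decode each document and print its kind; the sketch below does that with gopkg.in/yaml.v3, where both the library choice and the hard-coded path are assumptions for illustration.

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	// Decode each "---"-separated document until EOF.
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println("kind:", doc["kind"])
	}
}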
	
	I0916 11:11:30.357614  288908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:11:30.366434  288908 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:11:30.366500  288908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:11:30.375187  288908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0916 11:11:30.392300  288908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:11:30.410224  288908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0916 11:11:30.430159  288908 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:11:30.433926  288908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:11:30.444984  288908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:11:30.528873  288908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:11:30.543894  288908 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624 for IP: 192.168.85.2
	I0916 11:11:30.543916  288908 certs.go:194] generating shared ca certs ...
	I0916 11:11:30.543936  288908 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:30.544125  288908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:11:30.544187  288908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:11:30.544201  288908 certs.go:256] generating profile certs ...
	I0916 11:11:30.544273  288908 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.key
	I0916 11:11:30.544301  288908 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.crt with IP's: []
	I0916 11:11:30.788131  288908 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.crt ...
	I0916 11:11:30.788166  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.crt: {Name:mk02095d3afb4fad8c6d28e1f88b13ba36a9f6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:30.788368  288908 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.key ...
	I0916 11:11:30.788382  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.key: {Name:mk6908273136c2132f294f84c2cf9245d566117f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:30.788485  288908 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key.67e55e90
	I0916 11:11:30.788507  288908 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt.67e55e90 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0916 11:11:30.999277  288908 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt.67e55e90 ...
	I0916 11:11:30.999316  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt.67e55e90: {Name:mk955ebd562252fd3d65acb6c2e198ab5e903fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:30.999516  288908 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key.67e55e90 ...
	I0916 11:11:30.999535  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key.67e55e90: {Name:mkc82f26c1c509a023699ea12765ff496bced47f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:30.999625  288908 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt.67e55e90 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt
	I0916 11:11:30.999750  288908 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key.67e55e90 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key
	I0916 11:11:30.999843  288908 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.key
	I0916 11:11:30.999865  288908 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.crt with IP's: []
	I0916 11:11:31.288838  288908 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.crt ...
	I0916 11:11:31.288945  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.crt: {Name:mk8bd14445a9da8b563b4c4456dcb6ef5aa0023c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:31.289235  288908 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.key ...
	I0916 11:11:31.289294  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.key: {Name:mk97c2379e3649b3d274265134c4b6a81c84d628 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:31.289625  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:11:31.289722  288908 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:11:31.289752  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:11:31.289809  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:11:31.289858  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:11:31.289915  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:11:31.289997  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:11:31.290950  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:11:31.317053  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:11:31.344299  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:11:31.373008  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:11:31.399445  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0916 11:11:31.425552  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:11:31.452299  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:11:31.480024  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:11:31.507034  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:11:31.533755  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:11:31.560944  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:11:31.588146  288908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:11:31.607340  288908 ssh_runner.go:195] Run: openssl version
	I0916 11:11:31.613749  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:11:31.623827  288908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:11:31.628105  288908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:11:31.628170  288908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:11:31.636053  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:11:31.646541  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:11:31.657059  288908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:11:31.661092  288908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:11:31.661152  288908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:11:31.668468  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:11:31.678986  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:11:31.688721  288908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:11:31.692740  288908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:11:31.692806  288908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:11:31.700158  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
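Each three-step sequence above (ls -la, openssl x509 -hash, ln -fs) installs a CA under its OpenSSL subject-hash name, which is how /etc/ssl/certs lookups find it (e.g. 3ec20f2e.0 for 111892.pem). A sketch that shells out the same way; the helper name is invented, the commands are the ones logged.

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash symlinks pemPath into certsDir under its subject-hash name.
func linkByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return os.Symlink(pemPath, link)
}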
	I0916 11:11:31.710466  288908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:11:31.714043  288908 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:11:31.714124  288908 kubeadm.go:392] StartCluster: {Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:11:31.714222  288908 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:11:31.714261  288908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:11:31.756398  288908 cri.go:89] found id: ""
	I0916 11:11:31.756465  288908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:11:31.766605  288908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:11:31.777090  288908 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:11:31.777143  288908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:11:31.787168  288908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:11:31.787188  288908 kubeadm.go:157] found existing configuration files:
	
	I0916 11:11:31.787251  288908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:11:31.796664  288908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:11:31.796730  288908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:11:31.806726  288908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:11:31.816111  288908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:11:31.816165  288908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:11:31.825102  288908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:11:31.834700  288908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:11:31.834757  288908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:11:31.845052  288908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:11:31.854270  288908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:11:31.854344  288908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
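The cleanup loop above applies one rule per kubeconfig under /etc/kubernetes: keep the file only if it already references https://control-plane.minikube.internal:8443, otherwise remove it (on a first start like this one, the files simply do not exist yet, hence the grep status-2 lines). A Go rendering of that rule; illustrative, since the real flow runs grep/rm over SSH as logged.

package main

import (
	"os"
	"strings"
)

// cleanStaleConfigs removes any existing config that does not mention
// the expected control-plane endpoint.
func cleanStaleConfigs(endpoint string, paths []string) error {
	for _, p := range paths {
		b, err := os.ReadFile(p)
		if os.IsNotExist(err) {
			continue // nothing to clean on a fresh node
		} else if err != nil {
			return err
		}
		if !strings.Contains(string(b), endpoint) {
			if err := os.Remove(p); err != nil {
				return err
			}
		}
	}
	return nil
}

A call matching this log would be cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf", "/etc/kubernetes/controller-manager.conf", "/etc/kubernetes/scheduler.conf"}).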
	I0916 11:11:31.864084  288908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:11:31.911207  288908 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:11:31.911280  288908 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:11:31.929566  288908 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:11:31.929629  288908 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:11:31.929721  288908 kubeadm.go:310] OS: Linux
	I0916 11:11:31.929795  288908 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:11:31.929868  288908 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:11:31.929930  288908 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:11:31.929999  288908 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:11:31.930043  288908 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:11:31.930089  288908 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:11:31.930127  288908 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:11:31.930168  288908 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:11:31.930207  288908 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:11:32.003661  288908 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:11:32.003913  288908 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:11:32.004027  288908 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:11:32.009787  288908 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:11:27.745904  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:27.745938  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:27.786487  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:27.786512  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:27.843816  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:27.843853  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:30.387079  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:30.387476  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
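The "Checking apiserver healthz ... connection refused" pairs in this interleaved restart (pid 254463) come from polling the apiserver's /healthz endpoint until it answers. A sketch of such a probe, tolerating the apiserver's self-signed certificate; the retry interval and helper name are assumptions, the endpoint is the one logged.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe only
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

Here the probe would be waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); each refused connection falls through to the log-gathering pass that follows.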
	I0916 11:11:30.387543  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:30.387611  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:30.423116  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:30.423146  254463 cri.go:89] found id: ""
	I0916 11:11:30.423157  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:30.423209  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:30.427346  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:30.427415  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:30.464033  254463 cri.go:89] found id: ""
	I0916 11:11:30.464064  254463 logs.go:276] 0 containers: []
	W0916 11:11:30.464076  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:30.464084  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:30.464149  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:30.506628  254463 cri.go:89] found id: ""
	I0916 11:11:30.506660  254463 logs.go:276] 0 containers: []
	W0916 11:11:30.506673  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:30.506682  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:30.506741  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:30.541832  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:30.541860  254463 cri.go:89] found id: ""
	I0916 11:11:30.541874  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:30.541932  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:30.546020  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:30.546090  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:30.586076  254463 cri.go:89] found id: ""
	I0916 11:11:30.586101  254463 logs.go:276] 0 containers: []
	W0916 11:11:30.586111  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:30.586118  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:30.586175  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:30.627319  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:30.627343  254463 cri.go:89] found id: ""
	I0916 11:11:30.627352  254463 logs.go:276] 1 containers: [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:30.627404  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:30.630804  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:30.630871  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:30.672322  254463 cri.go:89] found id: ""
	I0916 11:11:30.672349  254463 logs.go:276] 0 containers: []
	W0916 11:11:30.672360  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:30.672368  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:30.672427  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:30.711423  254463 cri.go:89] found id: ""
	I0916 11:11:30.711445  254463 logs.go:276] 0 containers: []
	W0916 11:11:30.711453  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:30.711461  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:30.711473  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:30.787457  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:30.787499  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:30.825566  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:30.825596  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:30.873424  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:30.873458  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:30.912596  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:30.912622  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:31.041509  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:31.041554  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:31.069628  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:31.069671  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:31.147283  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:31.147317  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:31.147333  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:30.239104  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:32.739847  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
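The pod_ready lines (here pid 274695 watching metrics-server) reduce to one check: is the pod's PodReady condition True? A client-go sketch of that check, assuming the standard clientcmd kubeconfig path; the pod name is the one logged.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod's PodReady condition is True.
func podReady(kubeconfig, ns, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := podReady("/var/lib/minikube/kubeconfig", "kube-system", "metrics-server-6867b74b74-zw8sx")
	fmt.Println(ready, err)
}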
	I0916 11:11:32.012718  288908 out.go:235]   - Generating certificates and keys ...
	I0916 11:11:32.012811  288908 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:11:32.012866  288908 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:11:32.274323  288908 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:11:32.645738  288908 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:11:32.802923  288908 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:11:32.869257  288908 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:11:33.074216  288908 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:11:33.074453  288908 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-679624 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0916 11:11:33.198709  288908 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:11:33.198917  288908 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-679624 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0916 11:11:33.288526  288908 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:11:33.371633  288908 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:11:33.467662  288908 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:11:33.467854  288908 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:11:33.610889  288908 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:11:33.928327  288908 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:11:34.209629  288908 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:11:34.318731  288908 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:11:34.497638  288908 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:11:34.498358  288908 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:11:34.501042  288908 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:11:32.258180  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:34.258663  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:36.258970  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:33.692712  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:33.693191  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:33.693260  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:33.693318  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:33.729008  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:33.729033  254463 cri.go:89] found id: ""
	I0916 11:11:33.729043  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:33.729109  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:33.733530  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:33.733664  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:33.781978  254463 cri.go:89] found id: ""
	I0916 11:11:33.782012  254463 logs.go:276] 0 containers: []
	W0916 11:11:33.782023  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:33.782031  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:33.782097  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:33.834507  254463 cri.go:89] found id: ""
	I0916 11:11:33.834606  254463 logs.go:276] 0 containers: []
	W0916 11:11:33.834635  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:33.834670  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:33.834747  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:33.871434  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:33.871453  254463 cri.go:89] found id: ""
	I0916 11:11:33.871460  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:33.871506  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:33.876069  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:33.876139  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:33.939474  254463 cri.go:89] found id: ""
	I0916 11:11:33.939507  254463 logs.go:276] 0 containers: []
	W0916 11:11:33.939518  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:33.939525  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:33.939579  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:33.980476  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:33.980501  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:33.980507  254463 cri.go:89] found id: ""
	I0916 11:11:33.980514  254463 logs.go:276] 2 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:33.980577  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:33.984110  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:33.987346  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:33.987409  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:34.040605  254463 cri.go:89] found id: ""
	I0916 11:11:34.040633  254463 logs.go:276] 0 containers: []
	W0916 11:11:34.040644  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:34.040655  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:34.040719  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:34.077726  254463 cri.go:89] found id: ""
	I0916 11:11:34.077754  254463 logs.go:276] 0 containers: []
	W0916 11:11:34.077765  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:34.077783  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:34.077799  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:34.170123  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:34.170148  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:34.170162  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:34.230253  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:34.230291  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:34.271506  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:34.271533  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:34.327836  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:34.327865  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:34.448242  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:34.448278  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:34.471341  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:34.471385  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:34.521420  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:34.521454  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:34.601090  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:34.601130  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:37.138930  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:37.139314  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:37.139360  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:37.139403  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:37.180304  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:37.180327  254463 cri.go:89] found id: ""
	I0916 11:11:37.180335  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:37.180393  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:37.184635  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:37.184700  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:37.218889  254463 cri.go:89] found id: ""
	I0916 11:11:37.218917  254463 logs.go:276] 0 containers: []
	W0916 11:11:37.218928  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:37.218936  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:37.218992  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:37.256844  254463 cri.go:89] found id: ""
	I0916 11:11:37.256871  254463 logs.go:276] 0 containers: []
	W0916 11:11:37.256881  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:37.256888  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:37.256946  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:37.297431  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:37.297456  254463 cri.go:89] found id: ""
	I0916 11:11:37.297466  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:37.297526  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:37.301491  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:37.301548  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:37.337632  254463 cri.go:89] found id: ""
	I0916 11:11:37.337660  254463 logs.go:276] 0 containers: []
	W0916 11:11:37.337671  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:37.337682  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:37.337738  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:37.376904  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:37.376933  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:37.376939  254463 cri.go:89] found id: ""
	I0916 11:11:37.376950  254463 logs.go:276] 2 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:37.377006  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:37.380947  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:37.384225  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:37.384278  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:37.419944  254463 cri.go:89] found id: ""
	I0916 11:11:37.419974  254463 logs.go:276] 0 containers: []
	W0916 11:11:37.419985  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:37.419994  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:37.420047  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:37.454586  254463 cri.go:89] found id: ""
	I0916 11:11:37.454615  254463 logs.go:276] 0 containers: []
	W0916 11:11:37.454635  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:37.454651  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:37.454670  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:37.501786  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:37.501815  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:37.611024  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:37.611066  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:37.675810  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:37.675834  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:37.675858  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:35.238935  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:37.737929  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:34.503090  288908 out.go:235]   - Booting up control plane ...
	I0916 11:11:34.503204  288908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:11:34.503307  288908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:11:34.503428  288908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:11:34.512767  288908 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:11:34.518364  288908 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:11:34.518434  288908 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:11:34.609756  288908 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:11:34.609882  288908 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:11:35.111264  288908 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.674049ms
	I0916 11:11:35.111379  288908 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:11:40.113566  288908 kubeadm.go:310] [api-check] The API server is healthy after 5.002308876s
	I0916 11:11:40.124445  288908 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:11:40.136433  288908 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:11:40.158632  288908 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:11:40.158882  288908 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-679624 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:11:40.166356  288908 kubeadm.go:310] [bootstrap-token] Using token: 84spig.4y8nxn4hci96swit
	I0916 11:11:40.168019  288908 out.go:235]   - Configuring RBAC rules ...
	I0916 11:11:40.168133  288908 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:11:40.171476  288908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:11:40.177632  288908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:11:40.180530  288908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:11:40.183240  288908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:11:40.187632  288908 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:11:40.520291  288908 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:11:40.953108  288908 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:11:41.520171  288908 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:11:41.520855  288908 kubeadm.go:310] 
	I0916 11:11:41.520935  288908 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:11:41.520944  288908 kubeadm.go:310] 
	I0916 11:11:41.521009  288908 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:11:41.521016  288908 kubeadm.go:310] 
	I0916 11:11:41.521036  288908 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:11:41.521083  288908 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:11:41.521124  288908 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:11:41.521130  288908 kubeadm.go:310] 
	I0916 11:11:41.521171  288908 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:11:41.521176  288908 kubeadm.go:310] 
	I0916 11:11:41.521214  288908 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:11:41.521219  288908 kubeadm.go:310] 
	I0916 11:11:41.521259  288908 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:11:41.521324  288908 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:11:41.521379  288908 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:11:41.521386  288908 kubeadm.go:310] 
	I0916 11:11:41.521450  288908 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:11:41.521511  288908 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:11:41.521517  288908 kubeadm.go:310] 
	I0916 11:11:41.521582  288908 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 84spig.4y8nxn4hci96swit \
	I0916 11:11:41.521679  288908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:11:41.521701  288908 kubeadm.go:310] 	--control-plane 
	I0916 11:11:41.521705  288908 kubeadm.go:310] 
	I0916 11:11:41.521785  288908 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:11:41.521793  288908 kubeadm.go:310] 
	I0916 11:11:41.521875  288908 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 84spig.4y8nxn4hci96swit \
	I0916 11:11:41.521955  288908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 11:11:41.524979  288908 kubeadm.go:310] W0916 11:11:31.907821    1135 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:11:41.525354  288908 kubeadm.go:310] W0916 11:11:31.908743    1135 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:11:41.525562  288908 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:11:41.525672  288908 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:11:41.525704  288908 cni.go:84] Creating CNI manager for ""
	I0916 11:11:41.525719  288908 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:11:41.527698  288908 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:11:38.757671  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:40.761143  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:37.758822  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:37.758871  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:37.797236  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:37.797263  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:37.842272  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:37.842314  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:37.892228  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:37.892268  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:37.913264  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:37.913303  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:40.469419  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:40.469842  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:40.469897  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:40.469972  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:40.504839  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:40.504859  254463 cri.go:89] found id: ""
	I0916 11:11:40.504867  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:40.504910  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:40.509056  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:40.509144  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:40.544727  254463 cri.go:89] found id: ""
	I0916 11:11:40.544754  254463 logs.go:276] 0 containers: []
	W0916 11:11:40.544764  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:40.544769  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:40.544824  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:40.585143  254463 cri.go:89] found id: ""
	I0916 11:11:40.585177  254463 logs.go:276] 0 containers: []
	W0916 11:11:40.585188  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:40.585197  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:40.585253  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:40.618406  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:40.618433  254463 cri.go:89] found id: ""
	I0916 11:11:40.618442  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:40.618497  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:40.622183  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:40.622241  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:40.654226  254463 cri.go:89] found id: ""
	I0916 11:11:40.654257  254463 logs.go:276] 0 containers: []
	W0916 11:11:40.654270  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:40.654278  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:40.654338  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:40.704703  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:40.704731  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:40.704737  254463 cri.go:89] found id: ""
	I0916 11:11:40.704747  254463 logs.go:276] 2 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:40.704804  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:40.709695  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:40.714182  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:40.714283  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:40.769401  254463 cri.go:89] found id: ""
	I0916 11:11:40.769432  254463 logs.go:276] 0 containers: []
	W0916 11:11:40.769443  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:40.769450  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:40.769508  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:40.814114  254463 cri.go:89] found id: ""
	I0916 11:11:40.814180  254463 logs.go:276] 0 containers: []
	W0916 11:11:40.814203  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:40.814224  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:40.814242  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:40.923888  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:40.923942  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:40.954712  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:40.954756  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:41.019515  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:41.019535  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:41.019547  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:41.091866  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:41.091908  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:41.126670  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:41.126702  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:41.165890  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:41.165924  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:41.203538  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:41.203568  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:41.241297  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:41.241325  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:39.738817  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:41.739547  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:41.528971  288908 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:11:41.532973  288908 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:11:41.532990  288908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:11:41.550641  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:11:41.759420  288908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:11:41.759500  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:41.759538  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-679624 minikube.k8s.io/updated_at=2024_09_16T11_11_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=embed-certs-679624 minikube.k8s.io/primary=true
	I0916 11:11:41.843186  288908 ops.go:34] apiserver oom_adj: -16
	I0916 11:11:41.843192  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:42.344100  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:42.843846  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:43.343804  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:43.843597  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:44.344103  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:44.843919  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:45.344112  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:45.843558  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:45.931329  288908 kubeadm.go:1113] duration metric: took 4.171896183s to wait for elevateKubeSystemPrivileges
	I0916 11:11:45.931371  288908 kubeadm.go:394] duration metric: took 14.217250544s to StartCluster
	I0916 11:11:45.931395  288908 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:45.931468  288908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:11:45.933917  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:45.934189  288908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:11:45.934349  288908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:11:45.934378  288908 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:11:45.934476  288908 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-679624"
	I0916 11:11:45.934514  288908 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-679624"
	I0916 11:11:45.934555  288908 host.go:66] Checking if "embed-certs-679624" exists ...
	I0916 11:11:45.934544  288908 addons.go:69] Setting default-storageclass=true in profile "embed-certs-679624"
	I0916 11:11:45.934561  288908 config.go:182] Loaded profile config "embed-certs-679624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:11:45.934573  288908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-679624"
	I0916 11:11:45.935002  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:45.935187  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:45.936273  288908 out.go:177] * Verifying Kubernetes components...
	I0916 11:11:45.937809  288908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:11:45.969287  288908 addons.go:234] Setting addon default-storageclass=true in "embed-certs-679624"
	I0916 11:11:45.969351  288908 host.go:66] Checking if "embed-certs-679624" exists ...
	I0916 11:11:45.969852  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:45.974500  288908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:11:43.257133  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:45.258494  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:45.975949  288908 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:11:45.975972  288908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:11:45.976045  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:45.990299  288908 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:11:45.990325  288908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:11:45.990383  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:45.994530  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:46.007683  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:46.233531  288908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:11:46.234917  288908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:11:46.241775  288908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:11:46.249620  288908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:11:46.762554  288908 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0916 11:11:46.764311  288908 node_ready.go:35] waiting up to 6m0s for node "embed-certs-679624" to be "Ready" ...
	I0916 11:11:46.821592  288908 node_ready.go:49] node "embed-certs-679624" has status "Ready":"True"
	I0916 11:11:46.821625  288908 node_ready.go:38] duration metric: took 57.288494ms for node "embed-certs-679624" to be "Ready" ...
	I0916 11:11:46.821637  288908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:11:46.831195  288908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace to be "Ready" ...
	I0916 11:11:47.181058  288908 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 11:11:43.787247  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:43.787686  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:43.787788  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:43.787845  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:43.820358  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:43.820379  254463 cri.go:89] found id: ""
	I0916 11:11:43.820386  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:43.820429  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:43.823977  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:43.824036  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:43.858303  254463 cri.go:89] found id: ""
	I0916 11:11:43.858331  254463 logs.go:276] 0 containers: []
	W0916 11:11:43.858342  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:43.858350  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:43.858410  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:43.896708  254463 cri.go:89] found id: ""
	I0916 11:11:43.896738  254463 logs.go:276] 0 containers: []
	W0916 11:11:43.896750  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:43.896758  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:43.896818  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:43.930745  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:43.930785  254463 cri.go:89] found id: ""
	I0916 11:11:43.930794  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:43.930857  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:43.934261  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:43.934324  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:43.967505  254463 cri.go:89] found id: ""
	I0916 11:11:43.967532  254463 logs.go:276] 0 containers: []
	W0916 11:11:43.967542  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:43.967549  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:43.967609  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:44.001802  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:44.001822  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:44.001826  254463 cri.go:89] found id: ""
	I0916 11:11:44.001833  254463 logs.go:276] 2 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:44.001877  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:44.005500  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:44.008954  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:44.009028  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:44.042735  254463 cri.go:89] found id: ""
	I0916 11:11:44.042758  254463 logs.go:276] 0 containers: []
	W0916 11:11:44.042766  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:44.042771  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:44.042825  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:44.076718  254463 cri.go:89] found id: ""
	I0916 11:11:44.076741  254463 logs.go:276] 0 containers: []
	W0916 11:11:44.076749  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:44.076760  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:44.076770  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:44.124987  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:44.125027  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:44.197752  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:44.197791  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:44.231307  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:44.231335  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:44.292499  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:44.292527  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:44.292542  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:44.328765  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:44.328796  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:44.366047  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:44.366073  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:44.403288  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:44.403313  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:44.498895  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:44.498933  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:47.021378  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:47.021855  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:47.021915  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:47.021977  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:47.074174  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:47.074248  254463 cri.go:89] found id: ""
	I0916 11:11:47.074262  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:47.074560  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:47.078609  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:47.078682  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:47.111355  254463 cri.go:89] found id: ""
	I0916 11:11:47.111380  254463 logs.go:276] 0 containers: []
	W0916 11:11:47.111388  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:47.111396  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:47.111446  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:47.154273  254463 cri.go:89] found id: ""
	I0916 11:11:47.154301  254463 logs.go:276] 0 containers: []
	W0916 11:11:47.154313  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:47.154321  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:47.154380  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:47.196698  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:47.196719  254463 cri.go:89] found id: ""
	I0916 11:11:47.196728  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:47.196793  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:47.200205  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:47.200282  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:47.239306  254463 cri.go:89] found id: ""
	I0916 11:11:47.239328  254463 logs.go:276] 0 containers: []
	W0916 11:11:47.239336  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:47.239341  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:47.239388  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:47.275473  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:47.275494  254463 cri.go:89] found id: ""
	I0916 11:11:47.275501  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:11:47.275547  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:47.279217  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:47.279271  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:47.312601  254463 cri.go:89] found id: ""
	I0916 11:11:47.312630  254463 logs.go:276] 0 containers: []
	W0916 11:11:47.312643  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:47.312651  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:47.312703  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:47.351786  254463 cri.go:89] found id: ""
	I0916 11:11:47.351818  254463 logs.go:276] 0 containers: []
	W0916 11:11:47.351830  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:47.351841  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:47.351856  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:47.388358  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:47.388390  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:47.458891  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:47.458925  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:47.495067  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:47.495095  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:47.556395  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:47.556436  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:47.606059  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:47.606089  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:44.237845  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:46.240764  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:48.737615  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:47.182277  288908 addons.go:510] duration metric: took 1.24791353s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:11:47.267907  288908 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-679624" context rescaled to 1 replicas
	I0916 11:11:48.836602  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:47.757335  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:49.757395  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:47.703200  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:47.703236  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:47.724642  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:47.724684  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:47.783498  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:50.283928  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:50.284374  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:50.284423  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:50.284474  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:50.316834  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:50.316862  254463 cri.go:89] found id: ""
	I0916 11:11:50.316873  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:50.316935  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:50.320355  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:50.320432  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:50.352376  254463 cri.go:89] found id: ""
	I0916 11:11:50.352396  254463 logs.go:276] 0 containers: []
	W0916 11:11:50.352405  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:50.352412  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:50.352472  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:50.387428  254463 cri.go:89] found id: ""
	I0916 11:11:50.387468  254463 logs.go:276] 0 containers: []
	W0916 11:11:50.387479  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:50.387487  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:50.387537  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:50.420454  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:50.420473  254463 cri.go:89] found id: ""
	I0916 11:11:50.420479  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:50.420521  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:50.423917  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:50.423975  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:50.458163  254463 cri.go:89] found id: ""
	I0916 11:11:50.458184  254463 logs.go:276] 0 containers: []
	W0916 11:11:50.458192  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:50.458199  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:50.458251  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:50.490942  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:50.490970  254463 cri.go:89] found id: ""
	I0916 11:11:50.490980  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:11:50.491034  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:50.494494  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:50.494557  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:50.525559  254463 cri.go:89] found id: ""
	I0916 11:11:50.525586  254463 logs.go:276] 0 containers: []
	W0916 11:11:50.525597  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:50.525605  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:50.525669  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:50.557477  254463 cri.go:89] found id: ""
	I0916 11:11:50.557499  254463 logs.go:276] 0 containers: []
	W0916 11:11:50.557507  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:50.557522  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:50.557534  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:50.604317  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:50.604355  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:50.641507  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:50.641536  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:50.730228  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:50.730266  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:50.756357  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:50.756403  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:50.815959  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:50.815992  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:50.816005  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:50.853332  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:50.853362  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:50.922239  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:50.922282  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
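
	A note on the "listing CRI containers" / "found id:" pattern that recurs throughout this restart loop: each pass shells out to crictl once per expected control-plane component and records whichever container IDs come back. A minimal sketch of that lookup, assuming only that crictl is on the PATH (the helper name is illustrative, not minikube's cri.go):

	    // find_containers.go — sketch of the crictl lookup pattern in the log above.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // listContainerIDs runs the same command the log records:
	    //   sudo crictl ps -a --quiet --name=<name>
	    // and returns one container ID per non-empty output line.
	    func listContainerIDs(name string) ([]string, error) {
	        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	        if err != nil {
	            return nil, err
	        }
	        var ids []string
	        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
	            if line != "" {
	                ids = append(ids, line)
	            }
	        }
	        return ids, nil
	    }

	    func main() {
	        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
	            ids, err := listContainerIDs(name)
	            fmt.Println(name, ids, err)
	        }
	    }

	An empty result for etcd or coredns is exactly what produces the "0 containers" / No container was found matching warnings above.
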
	I0916 11:11:50.739091  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:53.238404  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:50.837082  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:53.337228  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:52.257690  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:54.758372  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:53.459773  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:53.460269  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:53.460322  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:53.460371  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:53.495261  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:53.495288  254463 cri.go:89] found id: ""
	I0916 11:11:53.495298  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:53.495359  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:53.499351  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:53.499415  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:53.532686  254463 cri.go:89] found id: ""
	I0916 11:11:53.532716  254463 logs.go:276] 0 containers: []
	W0916 11:11:53.532728  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:53.532736  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:53.532788  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:53.568013  254463 cri.go:89] found id: ""
	I0916 11:11:53.568043  254463 logs.go:276] 0 containers: []
	W0916 11:11:53.568054  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:53.568062  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:53.568117  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:53.601908  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:53.601931  254463 cri.go:89] found id: ""
	I0916 11:11:53.601938  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:53.601983  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:53.605669  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:53.605742  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:53.638394  254463 cri.go:89] found id: ""
	I0916 11:11:53.638420  254463 logs.go:276] 0 containers: []
	W0916 11:11:53.638428  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:53.638441  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:53.638484  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:53.670648  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:53.670669  254463 cri.go:89] found id: ""
	I0916 11:11:53.670678  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:11:53.670736  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:53.674142  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:53.674193  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:53.707669  254463 cri.go:89] found id: ""
	I0916 11:11:53.707698  254463 logs.go:276] 0 containers: []
	W0916 11:11:53.707708  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:53.707714  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:53.707825  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:53.742075  254463 cri.go:89] found id: ""
	I0916 11:11:53.742102  254463 logs.go:276] 0 containers: []
	W0916 11:11:53.742113  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:53.742125  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:53.742140  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:53.811381  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:53.811415  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:53.846858  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:53.846888  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:53.891595  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:53.891630  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:53.925443  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:53.925468  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:54.015424  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:54.015460  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:54.036290  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:54.036325  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:54.096466  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:54.096489  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:54.096503  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:56.631912  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:56.632364  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:56.632424  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:56.632484  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:56.665467  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:56.665487  254463 cri.go:89] found id: ""
	I0916 11:11:56.665494  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:56.665540  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:56.669053  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:56.669132  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:56.701684  254463 cri.go:89] found id: ""
	I0916 11:11:56.701710  254463 logs.go:276] 0 containers: []
	W0916 11:11:56.701721  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:56.701728  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:56.701790  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:56.737251  254463 cri.go:89] found id: ""
	I0916 11:11:56.737289  254463 logs.go:276] 0 containers: []
	W0916 11:11:56.737300  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:56.737309  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:56.737369  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:56.771303  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:56.771332  254463 cri.go:89] found id: ""
	I0916 11:11:56.771340  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:56.771382  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:56.774735  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:56.774801  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:56.807663  254463 cri.go:89] found id: ""
	I0916 11:11:56.807685  254463 logs.go:276] 0 containers: []
	W0916 11:11:56.807693  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:56.807698  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:56.807788  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:56.841120  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:56.841142  254463 cri.go:89] found id: ""
	I0916 11:11:56.841156  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:11:56.841200  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:56.844692  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:56.844748  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:56.877007  254463 cri.go:89] found id: ""
	I0916 11:11:56.877028  254463 logs.go:276] 0 containers: []
	W0916 11:11:56.877036  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:56.877041  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:56.877088  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:56.909108  254463 cri.go:89] found id: ""
	I0916 11:11:56.909136  254463 logs.go:276] 0 containers: []
	W0916 11:11:56.909147  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:56.909157  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:56.909168  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:56.955888  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:56.955935  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:56.993135  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:56.993180  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:57.082361  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:57.082402  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:57.103865  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:57.103902  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:57.164129  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:57.164146  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:57.164158  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:57.200538  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:57.200568  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:57.273343  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:57.273378  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:55.738690  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:58.238544  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:55.337472  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:57.838821  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:57.257474  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:59.756980  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:01.757290  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:59.806641  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:59.807071  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:59.807129  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:59.807189  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:59.841203  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:59.841230  254463 cri.go:89] found id: ""
	I0916 11:11:59.841242  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:59.841300  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:59.845256  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:59.845334  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:59.883444  254463 cri.go:89] found id: ""
	I0916 11:11:59.883480  254463 logs.go:276] 0 containers: []
	W0916 11:11:59.883489  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:59.883495  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:59.883555  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:59.917754  254463 cri.go:89] found id: ""
	I0916 11:11:59.917777  254463 logs.go:276] 0 containers: []
	W0916 11:11:59.917788  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:59.917795  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:59.917863  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:59.956094  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:59.956119  254463 cri.go:89] found id: ""
	I0916 11:11:59.956133  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:59.956190  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:59.959827  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:59.959913  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:59.999060  254463 cri.go:89] found id: ""
	I0916 11:11:59.999087  254463 logs.go:276] 0 containers: []
	W0916 11:11:59.999097  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:59.999105  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:59.999173  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:12:00.034193  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:12:00.034214  254463 cri.go:89] found id: ""
	I0916 11:12:00.034223  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:12:00.034285  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:12:00.037736  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:12:00.037798  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:12:00.070142  254463 cri.go:89] found id: ""
	I0916 11:12:00.070169  254463 logs.go:276] 0 containers: []
	W0916 11:12:00.070177  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:12:00.070183  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:12:00.070231  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:12:00.103691  254463 cri.go:89] found id: ""
	I0916 11:12:00.103716  254463 logs.go:276] 0 containers: []
	W0916 11:12:00.103724  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:12:00.103773  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:12:00.103790  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:12:00.137085  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:12:00.137111  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:12:00.185521  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:12:00.185555  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:12:00.221687  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:12:00.221717  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:12:00.313223  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:12:00.313269  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:12:00.337700  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:12:00.337742  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:12:00.396098  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:12:00.396119  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:12:00.396130  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:12:00.433027  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:12:00.433077  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
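
	The "Gathering logs for ..." steps in each pass fan out over a fixed set of host commands: journalctl for containerd and kubelet, dmesg, and crictl logs for each discovered container. Roughly, and ignoring the ssh transport minikube actually uses, that fan-out looks like the local-only sketch below (assumption: commands run on the node itself; the `| tail -n 400` pipe from the dmesg invocation is dropped for simplicity):

	    // gather_logs.go — local-only sketch of the log fan-out recorded above.
	    package main

	    import (
	        "fmt"
	        "os/exec"
	    )

	    func main() {
	        cmds := [][]string{
	            {"journalctl", "-u", "containerd", "-n", "400"},
	            {"journalctl", "-u", "kubelet", "-n", "400"},
	            {"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
	        }
	        for _, args := range cmds {
	            out, err := exec.Command("sudo", args...).CombinedOutput()
	            fmt.Printf("=== sudo %v (err=%v) ===\n%s\n", args, err, out)
	        }
	    }
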
	I0916 11:12:00.337500  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:01.337371  288908 pod_ready.go:93] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.337393  288908 pod_ready.go:82] duration metric: took 14.506166654s for pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.337404  288908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x4f6n" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.339056  288908 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-x4f6n" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-x4f6n" not found
	I0916 11:12:01.339081  288908 pod_ready.go:82] duration metric: took 1.668579ms for pod "coredns-7c65d6cfc9-x4f6n" in "kube-system" namespace to be "Ready" ...
	E0916 11:12:01.339093  288908 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-x4f6n" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-x4f6n" not found
	I0916 11:12:01.339102  288908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.342921  288908 pod_ready.go:93] pod "etcd-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.342938  288908 pod_ready.go:82] duration metric: took 3.82908ms for pod "etcd-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.342949  288908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.346533  288908 pod_ready.go:93] pod "kube-apiserver-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.346552  288908 pod_ready.go:82] duration metric: took 3.596798ms for pod "kube-apiserver-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.346560  288908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.350192  288908 pod_ready.go:93] pod "kube-controller-manager-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.350208  288908 pod_ready.go:82] duration metric: took 3.643463ms for pod "kube-controller-manager-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.350217  288908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bt6k2" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.535253  288908 pod_ready.go:93] pod "kube-proxy-bt6k2" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.535276  288908 pod_ready.go:82] duration metric: took 185.05015ms for pod "kube-proxy-bt6k2" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.535286  288908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.935780  288908 pod_ready.go:93] pod "kube-scheduler-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.935805  288908 pod_ready.go:82] duration metric: took 400.511614ms for pod "kube-scheduler-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.935814  288908 pod_ready.go:39] duration metric: took 15.114148588s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
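
	The pod_ready.go lines from process 288908 end here: every system-critical pod on embed-certs-679624 reported Ready, with a per-pod duration metric. A comparable wait can be written directly against client-go; this is a minimal sketch under the assumption that a standard kubeconfig is available (waitPodReady is an illustrative name, not minikube's helper):

	    // pod_ready_wait.go — client-go sketch of the readiness wait logged above.
	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    // waitPodReady polls the pod until its PodReady condition is True.
	    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
	            func(ctx context.Context) (bool, error) {
	                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	                if err != nil {
	                    return false, nil // not found yet: keep polling until timeout
	                }
	                for _, c := range pod.Status.Conditions {
	                    if c.Type == corev1.PodReady {
	                        return c.Status == corev1.ConditionTrue, nil
	                    }
	                }
	                return false, nil
	            })
	    }

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        fmt.Println(waitPodReady(cs, "kube-system", "coredns-7c65d6cfc9-dmv6t", 6*time.Minute))
	    }
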
	I0916 11:12:01.935828  288908 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:12:01.935879  288908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:12:01.948406  288908 api_server.go:72] duration metric: took 16.014183768s to wait for apiserver process to appear ...
	I0916 11:12:01.948432  288908 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:12:01.948456  288908 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0916 11:12:01.952961  288908 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0916 11:12:01.954088  288908 api_server.go:141] control plane version: v1.31.1
	I0916 11:12:01.954120  288908 api_server.go:131] duration metric: took 5.681186ms to wait for apiserver health ...
	I0916 11:12:01.954129  288908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:12:02.138246  288908 system_pods.go:59] 8 kube-system pods found
	I0916 11:12:02.138282  288908 system_pods.go:61] "coredns-7c65d6cfc9-dmv6t" [95a9589e-1385-4fb0-8b68-fb26098daf01] Running
	I0916 11:12:02.138288  288908 system_pods.go:61] "etcd-embed-certs-679624" [b351fd38-6c30-4fa6-b5da-159580232c88] Running
	I0916 11:12:02.138294  288908 system_pods.go:61] "kindnet-78kp5" [795aad3a-f96f-4477-9bd5-49a233890f1e] Running
	I0916 11:12:02.138303  288908 system_pods.go:61] "kube-apiserver-embed-certs-679624" [858cb7c8-8e66-4952-bf89-228525cffafb] Running
	I0916 11:12:02.138309  288908 system_pods.go:61] "kube-controller-manager-embed-certs-679624" [f6aaf262-2f80-4875-bbbf-1b5918a23787] Running
	I0916 11:12:02.138314  288908 system_pods.go:61] "kube-proxy-bt6k2" [cae0a1e2-c041-4c6c-8772-978a2c544879] Running
	I0916 11:12:02.138320  288908 system_pods.go:61] "kube-scheduler-embed-certs-679624" [3eb8a130-09a9-4ae6-b12a-6dff3309cbae] Running
	I0916 11:12:02.138328  288908 system_pods.go:61] "storage-provisioner" [3b5477b8-ac39-4acc-9e16-a13a7b1d3e10] Running
	I0916 11:12:02.138334  288908 system_pods.go:74] duration metric: took 184.199914ms to wait for pod list to return data ...
	I0916 11:12:02.138346  288908 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:12:02.335554  288908 default_sa.go:45] found service account: "default"
	I0916 11:12:02.335581  288908 default_sa.go:55] duration metric: took 197.225628ms for default service account to be created ...
	I0916 11:12:02.335592  288908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:12:02.537944  288908 system_pods.go:86] 8 kube-system pods found
	I0916 11:12:02.537972  288908 system_pods.go:89] "coredns-7c65d6cfc9-dmv6t" [95a9589e-1385-4fb0-8b68-fb26098daf01] Running
	I0916 11:12:02.537977  288908 system_pods.go:89] "etcd-embed-certs-679624" [b351fd38-6c30-4fa6-b5da-159580232c88] Running
	I0916 11:12:02.537981  288908 system_pods.go:89] "kindnet-78kp5" [795aad3a-f96f-4477-9bd5-49a233890f1e] Running
	I0916 11:12:02.537985  288908 system_pods.go:89] "kube-apiserver-embed-certs-679624" [858cb7c8-8e66-4952-bf89-228525cffafb] Running
	I0916 11:12:02.537989  288908 system_pods.go:89] "kube-controller-manager-embed-certs-679624" [f6aaf262-2f80-4875-bbbf-1b5918a23787] Running
	I0916 11:12:02.537992  288908 system_pods.go:89] "kube-proxy-bt6k2" [cae0a1e2-c041-4c6c-8772-978a2c544879] Running
	I0916 11:12:02.537995  288908 system_pods.go:89] "kube-scheduler-embed-certs-679624" [3eb8a130-09a9-4ae6-b12a-6dff3309cbae] Running
	I0916 11:12:02.538000  288908 system_pods.go:89] "storage-provisioner" [3b5477b8-ac39-4acc-9e16-a13a7b1d3e10] Running
	I0916 11:12:02.538009  288908 system_pods.go:126] duration metric: took 202.410695ms to wait for k8s-apps to be running ...
	I0916 11:12:02.538017  288908 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:12:02.538066  288908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:12:02.549283  288908 system_svc.go:56] duration metric: took 11.252338ms WaitForService to wait for kubelet
	I0916 11:12:02.549315  288908 kubeadm.go:582] duration metric: took 16.615095592s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:12:02.549372  288908 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:12:02.736116  288908 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:12:02.736146  288908 node_conditions.go:123] node cpu capacity is 8
	I0916 11:12:02.736168  288908 node_conditions.go:105] duration metric: took 186.790688ms to run NodePressure ...
	I0916 11:12:02.736182  288908 start.go:241] waiting for startup goroutines ...
	I0916 11:12:02.736190  288908 start.go:246] waiting for cluster config update ...
	I0916 11:12:02.736206  288908 start.go:255] writing updated cluster config ...
	I0916 11:12:02.736490  288908 ssh_runner.go:195] Run: rm -f paused
	I0916 11:12:02.743407  288908 out.go:177] * Done! kubectl is now configured to use "embed-certs-679624" cluster and "default" namespace by default
	E0916 11:12:02.744289  288908 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
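
	The "exec format error" above is worth flagging: the cluster itself came up cleanly, but the host's /usr/local/bin/kubectl could not be executed, which typically indicates a binary built for a different architecture (or a truncated/corrupt file) rather than a cluster problem. That would explain why kubectl-dependent tests in this run (e.g. TestFunctional/serial/KubeContext) fail even against an otherwise healthy cluster.
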
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3dab298bfe5b5       c69fa2e9cbf5f       3 seconds ago       Running             coredns                   0                   c9b661400e384       coredns-7c65d6cfc9-dmv6t
	f590d121c5d6d       6e38f40d628db       16 seconds ago      Running             storage-provisioner       0                   985f1b4472131       storage-provisioner
	2dbb170a519e8       12968670680f4       17 seconds ago      Running             kindnet-cni               0                   06e595c1fc81f       kindnet-78kp5
	c182b9d7c07df       60c005f310ff3       17 seconds ago      Running             kube-proxy                0                   d47fcd0c3fa57       kube-proxy-bt6k2
	debbdc082cc9c       6bab7719df100       28 seconds ago      Running             kube-apiserver            0                   9df038a9105dc       kube-apiserver-embed-certs-679624
	7637dc0ee3d4d       9aa1fad941575       28 seconds ago      Running             kube-scheduler            0                   ba28ed2ba4c4a       kube-scheduler-embed-certs-679624
	98ba0135cf4f3       175ffd71cce3d       28 seconds ago      Running             kube-controller-manager   0                   ab668cab99a4f       kube-controller-manager-embed-certs-679624
	e7db7be77ed78       2e96e5913fc06       28 seconds ago      Running             etcd                      0                   c206875f93f94       etcd-embed-certs-679624
	
	
	==> containerd <==
	Sep 16 11:11:46 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:46.520357026Z" level=info msg="StartContainer for \"c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae\" returns successfully"
	Sep 16 11:11:46 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:46.520357096Z" level=info msg="CreateContainer within sandbox \"06e595c1fc81f2c081ceb8d59c372d409faafdfcf3c12800b84909c663b82bf1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6\""
	Sep 16 11:11:46 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:46.521361892Z" level=info msg="StartContainer for \"2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6\""
	Sep 16 11:11:46 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:46.745064252Z" level=info msg="StartContainer for \"2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6\" returns successfully"
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.486134659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:3b5477b8-ac39-4acc-9e16-a13a7b1d3e10,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.507546887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.507626577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.507642316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.507859627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.563463276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:3b5477b8-ac39-4acc-9e16-a13a7b1d3e10,Namespace:kube-system,Attempt:0,} returns sandbox id \"985f1b4472131607d8a9ef9844a23f17639f03d828fdfa8fe9bcdbdf16a1125e\""
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.566480877Z" level=info msg="CreateContainer within sandbox \"985f1b4472131607d8a9ef9844a23f17639f03d828fdfa8fe9bcdbdf16a1125e\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.580004902Z" level=info msg="CreateContainer within sandbox \"985f1b4472131607d8a9ef9844a23f17639f03d828fdfa8fe9bcdbdf16a1125e\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f590d121c5d6d1937cefe41166b62f9f511f34157bec2b92d38767e94a5b8ba8\""
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.580646352Z" level=info msg="StartContainer for \"f590d121c5d6d1937cefe41166b62f9f511f34157bec2b92d38767e94a5b8ba8\""
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.633530791Z" level=info msg="StartContainer for \"f590d121c5d6d1937cefe41166b62f9f511f34157bec2b92d38767e94a5b8ba8\" returns successfully"
	Sep 16 11:11:51 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:51.239254534Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.836571108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dmv6t,Uid:95a9589e-1385-4fb0-8b68-fb26098daf01,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.877183985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.877991138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.878020603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.878153724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.928098331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dmv6t,Uid:95a9589e-1385-4fb0-8b68-fb26098daf01,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9b661400e3843a52856b80e1ea45e1fd13416f005a9e149a0718e77a406f2e0\""
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.931020108Z" level=info msg="CreateContainer within sandbox \"c9b661400e3843a52856b80e1ea45e1fd13416f005a9e149a0718e77a406f2e0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.946892222Z" level=info msg="CreateContainer within sandbox \"c9b661400e3843a52856b80e1ea45e1fd13416f005a9e149a0718e77a406f2e0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d\""
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.947484287Z" level=info msg="StartContainer for \"3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d\""
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.995349695Z" level=info msg="StartContainer for \"3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d\" returns successfully"
	
	
	==> coredns [3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55078 - 62834 "HINFO IN 5079472268666806265.2239314299196871410. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008456339s
	
	
	==> describe nodes <==
	Name:               embed-certs-679624
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-679624
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=embed-certs-679624
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_11_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:11:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-679624
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:12:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:11:51 +0000   Mon, 16 Sep 2024 11:11:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:11:51 +0000   Mon, 16 Sep 2024 11:11:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:11:51 +0000   Mon, 16 Sep 2024 11:11:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:11:51 +0000   Mon, 16 Sep 2024 11:11:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-679624
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 68cf6edacc48492dad36911d3d7a1ae0
	  System UUID:                cc7366e5-b963-44cb-99a5-daef6ab18709
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-dmv6t                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18s
	  kube-system                 etcd-embed-certs-679624                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         22s
	  kube-system                 kindnet-78kp5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18s
	  kube-system                 kube-apiserver-embed-certs-679624             250m (3%)     0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-controller-manager-embed-certs-679624    200m (2%)     0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-proxy-bt6k2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                 kube-scheduler-embed-certs-679624             100m (1%)     0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 17s                kube-proxy       
	  Normal   NodeHasSufficientMemory  29s (x8 over 29s)  kubelet          Node embed-certs-679624 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    29s (x7 over 29s)  kubelet          Node embed-certs-679624 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     29s (x7 over 29s)  kubelet          Node embed-certs-679624 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 23s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 23s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  23s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  22s                kubelet          Node embed-certs-679624 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s                kubelet          Node embed-certs-679624 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s                kubelet          Node embed-certs-679624 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           19s                node-controller  Node embed-certs-679624 event: Registered Node embed-certs-679624 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +1.003295] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000012] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003959] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +2.011810] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000005] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +4.063628] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000008] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000030] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000007] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003992] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +8.187268] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000063] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000005] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003939] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	
	
	==> etcd [e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0] <==
	{"level":"info","ts":"2024-09-16T11:11:35.660657Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:11:35.660927Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:11:35.660956Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:11:35.661023Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-09-16T11:11:35.661042Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-09-16T11:11:36.545011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:11:36.545063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:11:36.545117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2024-09-16T11:11:36.545137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.545150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.545165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.545180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.546198Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.546663Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:11:36.546683Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:11:36.546665Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:embed-certs-679624 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:11:36.546933Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:11:36.546964Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:11:36.547066Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.547154Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.547183Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.548000Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:11:36.548092Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:11:36.548901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2024-09-16T11:11:36.549253Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:12:04 up 54 min,  0 users,  load average: 2.73, 3.21, 2.24
	Linux embed-certs-679624 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6] <==
	I0916 11:11:47.021998       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:11:47.023989       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0916 11:11:47.024566       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:11:47.025534       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:11:47.025627       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:11:47.420585       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:11:47.421021       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:11:47.421117       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:11:47.627002       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:11:47.627034       1 metrics.go:61] Registering metrics
	I0916 11:11:47.627087       1 controller.go:374] Syncing nftables rules
	I0916 11:11:57.424285       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:11:57.424361       1 main.go:299] handling current node
	
	
	==> kube-apiserver [debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a] <==
	I0916 11:11:38.032726       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 11:11:38.032865       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 11:11:38.032486       1 controller.go:615] quota admission added evaluator for: namespaces
	E0916 11:11:38.033283       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0916 11:11:38.033344       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:11:38.033358       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:11:38.033364       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:11:38.033370       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:11:38.236713       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:11:38.931382       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:11:38.935989       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:11:38.936007       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:11:39.360688       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:11:39.550286       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:11:39.885332       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:11:39.981669       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0916 11:11:39.983057       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:11:39.983086       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:11:39.989809       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:11:40.937234       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:11:40.951405       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:11:40.963172       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:11:45.562872       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 11:11:45.562874       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 11:11:45.712697       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32] <==
	I0916 11:11:44.859954       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0916 11:11:44.910809       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0916 11:11:44.911946       1 shared_informer.go:320] Caches are synced for endpoint
	I0916 11:11:44.913160       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:11:44.925855       1 shared_informer.go:320] Caches are synced for disruption
	I0916 11:11:44.931380       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:11:44.937542       1 shared_informer.go:320] Caches are synced for deployment
	I0916 11:11:44.943920       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 11:11:45.325629       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:11:45.408258       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:11:45.408287       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:11:45.828842       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="111.978923ms"
	I0916 11:11:45.842449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="13.539417ms"
	I0916 11:11:45.842559       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.208µs"
	I0916 11:11:45.843676       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="53.216µs"
	I0916 11:11:46.851046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="15.841905ms"
	I0916 11:11:46.858766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.657412ms"
	I0916 11:11:46.859483       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="165.208µs"
	I0916 11:11:47.957358       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.062µs"
	I0916 11:11:47.964349       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="64.093µs"
	I0916 11:11:47.965886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.029µs"
	I0916 11:11:51.248649       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-679624"
	I0916 11:12:00.965845       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="117.386µs"
	I0916 11:12:00.983957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.090341ms"
	I0916 11:12:00.984089       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.88µs"
	
	
	==> kube-proxy [c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae] <==
	I0916 11:11:46.629316       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:11:46.830532       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.85.2"]
	E0916 11:11:46.830628       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:11:46.926994       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:11:46.927247       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:11:46.930151       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:11:46.930796       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:11:46.930829       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:11:46.932160       1 config.go:199] "Starting service config controller"
	I0916 11:11:46.932195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:11:46.932254       1 config.go:328] "Starting node config controller"
	I0916 11:11:46.932264       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:11:46.932283       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:11:46.932300       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:11:47.033501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:11:47.033621       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:11:47.033942       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10] <==
	W0916 11:11:38.120528       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:11:38.120569       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:38.120569       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 11:11:38.120569       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:11:38.120610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0916 11:11:38.120610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:38.120674       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:11:38.120697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:38.918573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:38.918616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.040886       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:11:39.040945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.113732       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:11:39.113779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.119266       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:39.119303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.126330       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:11:39.126368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.133675       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:39.133725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.158407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:11:39.158460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.324525       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:11:39.324580       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:11:41.243501       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:11:45 embed-certs-679624 kubelet[1613]: I0916 11:11:45.853339    1613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn5kr\" (UniqueName: \"kubernetes.io/projected/281fa9a8-3479-46dc-a1df-9dc1d7985344-kube-api-access-mn5kr\") pod \"coredns-7c65d6cfc9-x4f6n\" (UID: \"281fa9a8-3479-46dc-a1df-9dc1d7985344\") " pod="kube-system/coredns-7c65d6cfc9-x4f6n"
	Sep 16 11:11:45 embed-certs-679624 kubelet[1613]: I0916 11:11:45.853489    1613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lvgt\" (UniqueName: \"kubernetes.io/projected/95a9589e-1385-4fb0-8b68-fb26098daf01-kube-api-access-4lvgt\") pod \"coredns-7c65d6cfc9-dmv6t\" (UID: \"95a9589e-1385-4fb0-8b68-fb26098daf01\") " pod="kube-system/coredns-7c65d6cfc9-dmv6t"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.234554    1613 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\": failed to find network info for sandbox \"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\""
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.234645    1613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\": failed to find network info for sandbox \"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\"" pod="kube-system/coredns-7c65d6cfc9-dmv6t"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.234674    1613 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\": failed to find network info for sandbox \"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\"" pod="kube-system/coredns-7c65d6cfc9-dmv6t"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.234728    1613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dmv6t_kube-system(95a9589e-1385-4fb0-8b68-fb26098daf01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dmv6t_kube-system(95a9589e-1385-4fb0-8b68-fb26098daf01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\\\": failed to find network info for sandbox \\\"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\\\"\"" pod="kube-system/coredns-7c65d6cfc9-dmv6t" podUID="95a9589e-1385-4fb0-8b68-fb26098daf01"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.243017    1613 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\": failed to find network info for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\""
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.243111    1613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\": failed to find network info for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\"" pod="kube-system/coredns-7c65d6cfc9-x4f6n"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.243138    1613 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\": failed to find network info for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\"" pod="kube-system/coredns-7c65d6cfc9-x4f6n"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.243192    1613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-x4f6n_kube-system(281fa9a8-3479-46dc-a1df-9dc1d7985344)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-x4f6n_kube-system(281fa9a8-3479-46dc-a1df-9dc1d7985344)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\\\": failed to find network info for sandbox \\\"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\\\"\"" pod="kube-system/coredns-7c65d6cfc9-x4f6n" podUID="281fa9a8-3479-46dc-a1df-9dc1d7985344"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: I0916 11:11:46.936307    1613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bt6k2" podStartSLOduration=1.9362803259999999 podStartE2EDuration="1.936280326s" podCreationTimestamp="2024-09-16 11:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:11:46.93539239 +0000 UTC m=+6.230931077" watchObservedRunningTime="2024-09-16 11:11:46.936280326 +0000 UTC m=+6.231819013"
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.042983    1613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-78kp5" podStartSLOduration=2.042955881 podStartE2EDuration="2.042955881s" podCreationTimestamp="2024-09-16 11:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:11:47.027232044 +0000 UTC m=+6.322770729" watchObservedRunningTime="2024-09-16 11:11:47.042955881 +0000 UTC m=+6.338494569"
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.128660    1613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/281fa9a8-3479-46dc-a1df-9dc1d7985344-config-volume\") pod \"281fa9a8-3479-46dc-a1df-9dc1d7985344\" (UID: \"281fa9a8-3479-46dc-a1df-9dc1d7985344\") "
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.128726    1613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn5kr\" (UniqueName: \"kubernetes.io/projected/281fa9a8-3479-46dc-a1df-9dc1d7985344-kube-api-access-mn5kr\") pod \"281fa9a8-3479-46dc-a1df-9dc1d7985344\" (UID: \"281fa9a8-3479-46dc-a1df-9dc1d7985344\") "
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.129072    1613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/281fa9a8-3479-46dc-a1df-9dc1d7985344-config-volume" (OuterVolumeSpecName: "config-volume") pod "281fa9a8-3479-46dc-a1df-9dc1d7985344" (UID: "281fa9a8-3479-46dc-a1df-9dc1d7985344"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.131020    1613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/281fa9a8-3479-46dc-a1df-9dc1d7985344-kube-api-access-mn5kr" (OuterVolumeSpecName: "kube-api-access-mn5kr") pod "281fa9a8-3479-46dc-a1df-9dc1d7985344" (UID: "281fa9a8-3479-46dc-a1df-9dc1d7985344"). InnerVolumeSpecName "kube-api-access-mn5kr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.229070    1613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhtxr\" (UniqueName: \"kubernetes.io/projected/3b5477b8-ac39-4acc-9e16-a13a7b1d3e10-kube-api-access-rhtxr\") pod \"storage-provisioner\" (UID: \"3b5477b8-ac39-4acc-9e16-a13a7b1d3e10\") " pod="kube-system/storage-provisioner"
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.229155    1613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3b5477b8-ac39-4acc-9e16-a13a7b1d3e10-tmp\") pod \"storage-provisioner\" (UID: \"3b5477b8-ac39-4acc-9e16-a13a7b1d3e10\") " pod="kube-system/storage-provisioner"
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.229198    1613 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/281fa9a8-3479-46dc-a1df-9dc1d7985344-config-volume\") on node \"embed-certs-679624\" DevicePath \"\""
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.229220    1613 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mn5kr\" (UniqueName: \"kubernetes.io/projected/281fa9a8-3479-46dc-a1df-9dc1d7985344-kube-api-access-mn5kr\") on node \"embed-certs-679624\" DevicePath \"\""
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.947516    1613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=0.947491757 podStartE2EDuration="947.491757ms" podCreationTimestamp="2024-09-16 11:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:11:47.947191961 +0000 UTC m=+7.242730646" watchObservedRunningTime="2024-09-16 11:11:47.947491757 +0000 UTC m=+7.243030463"
	Sep 16 11:11:48 embed-certs-679624 kubelet[1613]: I0916 11:11:48.838386    1613 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="281fa9a8-3479-46dc-a1df-9dc1d7985344" path="/var/lib/kubelet/pods/281fa9a8-3479-46dc-a1df-9dc1d7985344/volumes"
	Sep 16 11:11:51 embed-certs-679624 kubelet[1613]: I0916 11:11:51.238671    1613 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 11:11:51 embed-certs-679624 kubelet[1613]: I0916 11:11:51.239550    1613 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 11:12:00 embed-certs-679624 kubelet[1613]: I0916 11:12:00.977086    1613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dmv6t" podStartSLOduration=15.977061402 podStartE2EDuration="15.977061402s" podCreationTimestamp="2024-09-16 11:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:12:00.966248932 +0000 UTC m=+20.261787617" watchObservedRunningTime="2024-09-16 11:12:00.977061402 +0000 UTC m=+20.272600088"
	
	
	==> storage-provisioner [f590d121c5d6d1937cefe41166b62f9f511f34157bec2b92d38767e94a5b8ba8] <==
	I0916 11:11:47.640871       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:11:47.650046       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:11:47.650086       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:11:47.659227       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:11:47.659353       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af47b140-7661-4805-8791-5af1e81aebf7", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-679624_78a7dd88-7eba-4513-b3b6-513ea370e3ab became leader
	I0916 11:11:47.659420       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-679624_78a7dd88-7eba-4513-b3b6-513ea370e3ab!
	I0916 11:11:47.760481       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-679624_78a7dd88-7eba-4513-b3b6-513ea370e3ab!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-679624 -n embed-certs-679624
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-679624 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context embed-certs-679624 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (474.348µs)
helpers_test.go:263: kubectl --context embed-certs-679624 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
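Note: "fork/exec /usr/local/bin/kubectl: exec format error" above means the kernel refused to execute the kubectl binary itself (ENOEXEC), before any connection to the cluster was attempted; this typically happens when the file was built for a different architecture than the host, or is not a valid executable at all (a truncated or HTML-error-page download). Every kubectl-based check in this post-mortem fails identically for the same reason. A minimal Go sketch for inspecting the binary's target architecture without executing it; the path is taken from the failure above, everything else is illustrative:

// checkarch: reads the ELF header of a binary and reports its target
// machine, so a mismatched-architecture download (e.g. an arm64 kubectl
// on this amd64 host) can be spotted without running the file.
package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
)

func main() {
	path := "/usr/local/bin/kubectl" // path from the failure above
	f, err := elf.Open(path)
	if err != nil {
		// A non-ELF file (truncated download, saved error page, etc.)
		// also produces ENOEXEC / "exec format error" on exec.
		fmt.Fprintf(os.Stderr, "%s is not a readable ELF binary: %v\n", path, err)
		os.Exit(1)
	}
	defer f.Close()
	fmt.Printf("binary machine: %v, host: %s/%s\n", f.Machine, runtime.GOOS, runtime.GOARCH)
	if runtime.GOARCH == "amd64" && f.Machine != elf.EM_X86_64 {
		fmt.Println("architecture mismatch: exec would fail with ENOEXEC (exec format error)")
	}
}

On this x86_64 host (see the kernel section above), anything other than EM_X86_64, or a file that is not ELF at all, would reproduce the error.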
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-679624
helpers_test.go:235: (dbg) docker inspect embed-certs-679624:
-- stdout --
	[
	    {
	        "Id": "8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01",
	        "Created": "2024-09-16T11:11:24.339291508Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 289930,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:11:24.472248835Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/hostname",
	        "HostsPath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/hosts",
	        "LogPath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01-json.log",
	        "Name": "/embed-certs-679624",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-679624:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-679624",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-679624",
	                "Source": "/var/lib/docker/volumes/embed-certs-679624/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-679624",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-679624",
	                "name.minikube.sigs.k8s.io": "embed-certs-679624",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ff60825c25c0c32e46c9786671ffef996b2342a731555808d9dc885e9b8cac8e",
	            "SandboxKey": "/var/run/docker/netns/ff60825c25c0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-679624": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "5c8d67185b352feb5e2b0195e3f409fe6cf79bd750730cb6897291fef1a3c3d7",
	                    "EndpointID": "dddf70084024b7c890e66e96d6c39e3f3c7ed4ae631ca39642acb6c9b79a1c44",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-679624",
	                        "8a143ceb3281"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
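The inspect dump above is the container's full JSON document; for debugging, usually only two facts matter: the node's IP on the "embed-certs-679624" network (192.168.85.2) and the host port mapped to 8443/tcp (33081), which is how the apiserver is reached from the host. A minimal Go sketch, not part of the test harness, that decodes just those fields from `docker inspect` output; the struct below models only the fields visible in the dump above:

// inspectports: shells out to `docker inspect` and extracts the
// container IP and the host binding for the apiserver port 8443/tcp.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Only the fields we need; Docker's full schema is much larger.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
		Networks map[string]struct {
			IPAddress string `json:"IPAddress"`
		} `json:"Networks"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "embed-certs-679624").Output()
	if err != nil {
		log.Fatalf("docker inspect: %v", err)
	}
	var cs []container // docker inspect always returns a JSON array
	if err := json.Unmarshal(out, &cs); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, c := range cs {
		for name, n := range c.NetworkSettings.Networks {
			fmt.Printf("network %s: ip %s\n", name, n.IPAddress)
		}
		for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver reachable at %s:%s\n", b.HostIP, b.HostPort)
		}
	}
}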
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-679624 -n embed-certs-679624
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-679624 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-679624 logs -n 25: (1.110090088s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-021107                              | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-917705                           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048 --force-systemd                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-846070                               | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-846070                            | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| stop    | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p cert-options-840054                                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-917705                              | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-917705                           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:10 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| ssh     | cert-options-840054 ssh                                | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-840054 -- sudo                         | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-840054                                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:09 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-349453             | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-349453                  | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-371039        | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-371039                              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-371039             | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-021107                              | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-021107                              | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	| start   | -p embed-certs-679624                                  | embed-certs-679624        | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:12 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:11:18
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:11:18.856155  288908 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:11:18.856262  288908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:11:18.856269  288908 out.go:358] Setting ErrFile to fd 2...
	I0916 11:11:18.856274  288908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:11:18.856461  288908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:11:18.857036  288908 out.go:352] Setting JSON to false
	I0916 11:11:18.858346  288908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3223,"bootTime":1726481856,"procs":348,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:11:18.858451  288908 start.go:139] virtualization: kvm guest
	I0916 11:11:18.860470  288908 out.go:177] * [embed-certs-679624] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:11:18.862286  288908 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:11:18.862325  288908 notify.go:220] Checking for updates...
	I0916 11:11:18.864825  288908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:11:18.865999  288908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:11:18.867166  288908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:11:18.868600  288908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:11:18.870074  288908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:11:18.871834  288908 config.go:182] Loaded profile config "kubernetes-upgrade-311911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:11:18.871944  288908 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:11:18.872024  288908 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:11:18.872127  288908 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:11:18.894405  288908 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:11:18.894515  288908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:11:18.948949  288908 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:11:18.937344705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:11:18.949132  288908 docker.go:318] overlay module found
	I0916 11:11:18.950939  288908 out.go:177] * Using the docker driver based on user configuration
	I0916 11:11:18.952281  288908 start.go:297] selected driver: docker
	I0916 11:11:18.952313  288908 start.go:901] validating driver "docker" against <nil>
	I0916 11:11:18.952331  288908 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:11:18.953507  288908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:11:19.001625  288908 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:11:18.99185584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:11:19.001804  288908 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:11:19.002056  288908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:11:19.003908  288908 out.go:177] * Using Docker driver with root privileges
	I0916 11:11:19.005402  288908 cni.go:84] Creating CNI manager for ""
	I0916 11:11:19.005465  288908 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:11:19.005479  288908 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:11:19.005564  288908 start.go:340] cluster config:
	{Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:11:19.007384  288908 out.go:177] * Starting "embed-certs-679624" primary control-plane node in "embed-certs-679624" cluster
	I0916 11:11:19.009150  288908 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:11:19.010840  288908 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:11:19.012215  288908 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:11:19.012278  288908 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 11:11:19.012297  288908 cache.go:56] Caching tarball of preloaded images
	I0916 11:11:19.012311  288908 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:11:19.012483  288908 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 11:11:19.012514  288908 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 11:11:19.012637  288908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/config.json ...
	I0916 11:11:19.012667  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/config.json: {Name:mk779755db7fc6d270e9404ca4b6e4963d78e149 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:11:19.033306  288908 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:11:19.033331  288908 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:11:19.033415  288908 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:11:19.033429  288908 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:11:19.033435  288908 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:11:19.033442  288908 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:11:19.033458  288908 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:11:19.086983  288908 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:11:19.087029  288908 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:11:19.087070  288908 start.go:360] acquireMachinesLock for embed-certs-679624: {Name:mk5c5a1695ab7bba9827e17eb437dd80adf4e091 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:11:19.087184  288908 start.go:364] duration metric: took 93.132µs to acquireMachinesLock for "embed-certs-679624"
	I0916 11:11:19.087215  288908 start.go:93] Provisioning new machine with config: &{Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:11:19.087341  288908 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:11:17.757111  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:20.258429  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:17.707064  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:11:17.707097  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:11:17.745431  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:17.745460  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:17.807745  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:17.807796  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:17.841462  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:17.841493  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:17.927928  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:17.927966  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:17.951261  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:17.951305  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:18.013608  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:18.013640  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:18.013660  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:20.558195  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:20.558623  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
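	The two api_server.go lines above show the readiness pattern this log repeats for the next several minutes: probe /healthz over HTTPS, treat a connection-refused error as "not ready yet", gather diagnostics, and retry. The following is a minimal Go sketch of such a poll loop, not minikube's actual implementation; the name pollAPIServerHealthz, the timeouts, and the retry cadence are assumptions for illustration.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// pollAPIServerHealthz retries GET <url> until it returns 200 OK or the
	// deadline passes. Connection-refused errors, like the ones logged above,
	// are treated as "apiserver not up yet" and simply retried.
	func pollAPIServerHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// A freshly provisioned apiserver serves a cert the host does not
			// trust, so the probe skips verification (assumption for the sketch).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(3 * time.Second) // roughly the cadence visible in this log
		}
		return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := pollAPIServerHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}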
	I0916 11:11:20.558677  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:20.558734  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:20.595321  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:20.595346  254463 cri.go:89] found id: ""
	I0916 11:11:20.595355  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:20.595413  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:20.599420  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:20.599497  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:20.641184  254463 cri.go:89] found id: ""
	I0916 11:11:20.641211  254463 logs.go:276] 0 containers: []
	W0916 11:11:20.641223  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:20.641232  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:20.641292  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:20.682399  254463 cri.go:89] found id: ""
	I0916 11:11:20.682431  254463 logs.go:276] 0 containers: []
	W0916 11:11:20.682443  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:20.682451  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:20.682516  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:20.721644  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:20.721669  254463 cri.go:89] found id: ""
	I0916 11:11:20.721678  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:20.721731  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:20.725651  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:20.725724  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:20.767294  254463 cri.go:89] found id: ""
	I0916 11:11:20.767321  254463 logs.go:276] 0 containers: []
	W0916 11:11:20.767329  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:20.767335  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:20.767382  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:20.801830  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:20.801855  254463 cri.go:89] found id: ""
	I0916 11:11:20.801865  254463 logs.go:276] 1 containers: [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:20.801922  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:20.805407  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:20.805482  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:20.840869  254463 cri.go:89] found id: ""
	I0916 11:11:20.840900  254463 logs.go:276] 0 containers: []
	W0916 11:11:20.840912  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:20.840919  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:20.840979  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:20.878195  254463 cri.go:89] found id: ""
	I0916 11:11:20.878221  254463 logs.go:276] 0 containers: []
	W0916 11:11:20.878229  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:20.878237  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:20.878248  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:20.925361  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:20.925388  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:21.019564  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:21.019600  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:21.048676  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:21.048723  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:21.112999  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:21.113033  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:21.113051  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:21.154086  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:21.154114  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:21.235856  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:21.235897  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:21.278612  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:21.278650  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:19.238965  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:21.239025  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:23.738819  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:19.090071  288908 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:11:19.090308  288908 start.go:159] libmachine.API.Create for "embed-certs-679624" (driver="docker")
	I0916 11:11:19.090338  288908 client.go:168] LocalClient.Create starting
	I0916 11:11:19.090401  288908 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 11:11:19.090431  288908 main.go:141] libmachine: Decoding PEM data...
	I0916 11:11:19.090448  288908 main.go:141] libmachine: Parsing certificate...
	I0916 11:11:19.090505  288908 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 11:11:19.090523  288908 main.go:141] libmachine: Decoding PEM data...
	I0916 11:11:19.090534  288908 main.go:141] libmachine: Parsing certificate...
	I0916 11:11:19.090850  288908 cli_runner.go:164] Run: docker network inspect embed-certs-679624 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:11:19.107706  288908 cli_runner.go:211] docker network inspect embed-certs-679624 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:11:19.107836  288908 network_create.go:284] running [docker network inspect embed-certs-679624] to gather additional debugging logs...
	I0916 11:11:19.107862  288908 cli_runner.go:164] Run: docker network inspect embed-certs-679624
	W0916 11:11:19.124412  288908 cli_runner.go:211] docker network inspect embed-certs-679624 returned with exit code 1
	I0916 11:11:19.124439  288908 network_create.go:287] error running [docker network inspect embed-certs-679624]: docker network inspect embed-certs-679624: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-679624 not found
	I0916 11:11:19.124466  288908 network_create.go:289] output of [docker network inspect embed-certs-679624]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-679624 not found
	
	** /stderr **
	I0916 11:11:19.124580  288908 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:11:19.142536  288908 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c95c64bb41bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bd:76:ab:c4} reservation:<nil>}
	I0916 11:11:19.143504  288908 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fad43aa9929b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:3f:61:fd} reservation:<nil>}
	I0916 11:11:19.144458  288908 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-49585fce923a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ad:45:94:54} reservation:<nil>}
	I0916 11:11:19.145163  288908 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-45dc384def28 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:95:3e:48:c3} reservation:<nil>}
	I0916 11:11:19.146136  288908 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cbec20}
	I0916 11:11:19.146158  288908 network_create.go:124] attempt to create docker network embed-certs-679624 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0916 11:11:19.146211  288908 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-679624 embed-certs-679624
	I0916 11:11:19.210275  288908 network_create.go:108] docker network embed-certs-679624 192.168.85.0/24 created
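	The network.go lines above capture the subnet-selection step: walk candidate 192.168.x.0/24 blocks, skip any already backing a docker bridge (49, 58, 67, 76 here), and create the network on the first free one (85). A minimal Go sketch of that scan follows; the step of 9 between candidates is read off this log rather than minikube's source, and firstFreeSubnet is a hypothetical helper, not a minikube API.

	package main

	import "fmt"

	// firstFreeSubnet walks candidate 192.168.x.0/24 blocks in the order the
	// log shows and returns the first one not already taken by a bridge.
	func firstFreeSubnet(taken map[string]bool) string {
		for third := 49; third <= 250; third += 9 { // 49, 58, 67, 76, 85, ...
			subnet := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[subnet] {
				return subnet
			}
		}
		return "" // no free /24 in the scanned range
	}

	func main() {
		// The four subnets the log reports as taken.
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // prints 192.168.85.0/24, matching the log
	}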
	I0916 11:11:19.210306  288908 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-679624" container
	I0916 11:11:19.210356  288908 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:11:19.227600  288908 cli_runner.go:164] Run: docker volume create embed-certs-679624 --label name.minikube.sigs.k8s.io=embed-certs-679624 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:11:19.245579  288908 oci.go:103] Successfully created a docker volume embed-certs-679624
	I0916 11:11:19.245640  288908 cli_runner.go:164] Run: docker run --rm --name embed-certs-679624-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-679624 --entrypoint /usr/bin/test -v embed-certs-679624:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:11:19.757598  288908 oci.go:107] Successfully prepared a docker volume embed-certs-679624
	I0916 11:11:19.757638  288908 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:11:19.757655  288908 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:11:19.757735  288908 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-679624:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 11:11:22.757918  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:24.758241  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:23.825300  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:23.825689  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:23.825738  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:23.825786  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:23.859216  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:23.859235  254463 cri.go:89] found id: ""
	I0916 11:11:23.859242  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:23.859286  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:23.862764  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:23.862821  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:23.895042  254463 cri.go:89] found id: ""
	I0916 11:11:23.895069  254463 logs.go:276] 0 containers: []
	W0916 11:11:23.895078  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:23.895084  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:23.895139  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:23.926804  254463 cri.go:89] found id: ""
	I0916 11:11:23.926829  254463 logs.go:276] 0 containers: []
	W0916 11:11:23.926842  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:23.926850  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:23.926897  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:23.961138  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:23.961159  254463 cri.go:89] found id: ""
	I0916 11:11:23.961166  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:23.961218  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:23.964777  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:23.964842  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:24.007913  254463 cri.go:89] found id: ""
	I0916 11:11:24.007939  254463 logs.go:276] 0 containers: []
	W0916 11:11:24.007951  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:24.007959  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:24.008029  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:24.049372  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:24.049444  254463 cri.go:89] found id: ""
	I0916 11:11:24.049460  254463 logs.go:276] 1 containers: [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:24.049523  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:24.054045  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:24.054127  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:24.093835  254463 cri.go:89] found id: ""
	I0916 11:11:24.093864  254463 logs.go:276] 0 containers: []
	W0916 11:11:24.093875  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:24.093883  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:24.093939  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:24.129861  254463 cri.go:89] found id: ""
	I0916 11:11:24.129888  254463 logs.go:276] 0 containers: []
	W0916 11:11:24.129896  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:24.129904  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:24.129916  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:24.179039  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:24.179086  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:24.218126  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:24.218159  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:24.318420  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:24.318456  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:24.349622  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:24.349663  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:24.429380  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:24.429415  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:24.429433  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:24.468570  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:24.468615  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:24.557739  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:24.557776  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:27.098528  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:27.098979  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:27.099032  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:27.099086  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:27.135416  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:27.135437  254463 cri.go:89] found id: ""
	I0916 11:11:27.135444  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:27.135489  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:27.138909  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:27.138973  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:27.177050  254463 cri.go:89] found id: ""
	I0916 11:11:27.177080  254463 logs.go:276] 0 containers: []
	W0916 11:11:27.177091  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:27.177099  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:27.177160  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:27.212036  254463 cri.go:89] found id: ""
	I0916 11:11:27.212061  254463 logs.go:276] 0 containers: []
	W0916 11:11:27.212073  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:27.212081  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:27.212136  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:27.251569  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:27.251590  254463 cri.go:89] found id: ""
	I0916 11:11:27.251598  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:27.251651  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:27.258394  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:27.258463  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:27.296919  254463 cri.go:89] found id: ""
	I0916 11:11:27.296950  254463 logs.go:276] 0 containers: []
	W0916 11:11:27.296960  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:27.296965  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:27.297023  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:27.335315  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:27.335334  254463 cri.go:89] found id: ""
	I0916 11:11:27.335342  254463 logs.go:276] 1 containers: [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:27.335384  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:27.338919  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:27.338984  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:27.375852  254463 cri.go:89] found id: ""
	I0916 11:11:27.375877  254463 logs.go:276] 0 containers: []
	W0916 11:11:27.375890  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:27.375905  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:27.375963  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:27.413862  254463 cri.go:89] found id: ""
	I0916 11:11:27.413883  254463 logs.go:276] 0 containers: []
	W0916 11:11:27.413891  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:27.413899  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:27.413909  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:27.526092  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:27.526127  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:27.550647  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:27.550682  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:27.620133  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:27.620156  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:27.620170  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:27.665894  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:27.665929  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:25.739512  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:28.239069  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:24.264807  288908 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-679624:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.506989871s)
	I0916 11:11:24.264850  288908 kic.go:203] duration metric: took 4.507189916s to extract preloaded images to volume ...
	W0916 11:11:24.265015  288908 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:11:24.265175  288908 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:11:24.316681  288908 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-679624 --name embed-certs-679624 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-679624 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-679624 --network embed-certs-679624 --ip 192.168.85.2 --volume embed-certs-679624:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:11:24.669712  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Running}}
	I0916 11:11:24.689977  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:24.710159  288908 cli_runner.go:164] Run: docker exec embed-certs-679624 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:11:24.751713  288908 oci.go:144] the created container "embed-certs-679624" has a running status.
	I0916 11:11:24.751782  288908 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa...
	I0916 11:11:24.870719  288908 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:11:24.897688  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:24.915975  288908 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:11:24.915999  288908 kic_runner.go:114] Args: [docker exec --privileged embed-certs-679624 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:11:24.973386  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:24.992710  288908 machine.go:93] provisionDockerMachine start ...
	I0916 11:11:24.992788  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:25.013373  288908 main.go:141] libmachine: Using SSH client type: native
	I0916 11:11:25.013666  288908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0916 11:11:25.013688  288908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:11:25.014308  288908 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45610->127.0.0.1:33078: read: connection reset by peer
	I0916 11:11:28.148063  288908 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-679624
	
	I0916 11:11:28.148089  288908 ubuntu.go:169] provisioning hostname "embed-certs-679624"
	I0916 11:11:28.148161  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:28.169027  288908 main.go:141] libmachine: Using SSH client type: native
	I0916 11:11:28.169265  288908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0916 11:11:28.169282  288908 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-679624 && echo "embed-certs-679624" | sudo tee /etc/hostname
	I0916 11:11:28.355513  288908 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-679624
	
	I0916 11:11:28.355629  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:28.374039  288908 main.go:141] libmachine: Using SSH client type: native
	I0916 11:11:28.374264  288908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0916 11:11:28.374294  288908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-679624' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-679624/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-679624' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:11:28.508073  288908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
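The shell fragment above is what minikube pushes over SSH to pin the node's hostname in /etc/hosts. As a sketch (not part of the recorded run, and assuming the container is still up), the resulting entry can be checked from the host:

	# confirm the 127.0.1.1 mapping the script writes or rewrites
	docker exec embed-certs-679624 grep -n 'embed-certs-679624' /etc/hosts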
	I0916 11:11:28.508100  288908 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:11:28.508138  288908 ubuntu.go:177] setting up certificates
	I0916 11:11:28.508156  288908 provision.go:84] configureAuth start
	I0916 11:11:28.508223  288908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-679624
	I0916 11:11:28.529363  288908 provision.go:143] copyHostCerts
	I0916 11:11:28.529425  288908 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:11:28.529444  288908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:11:28.529506  288908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:11:28.529605  288908 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:11:28.529616  288908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:11:28.529646  288908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:11:28.529753  288908 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:11:28.529767  288908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:11:28.529800  288908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:11:28.529884  288908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.embed-certs-679624 san=[127.0.0.1 192.168.85.2 embed-certs-679624 localhost minikube]
	I0916 11:11:28.660139  288908 provision.go:177] copyRemoteCerts
	I0916 11:11:28.660207  288908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:11:28.660257  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:28.686030  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:28.781031  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:11:28.805291  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0916 11:11:28.828019  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:11:28.852211  288908 provision.go:87] duration metric: took 344.043242ms to configureAuth
	I0916 11:11:28.852237  288908 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:11:28.852389  288908 config.go:182] Loaded profile config "embed-certs-679624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:11:28.852399  288908 machine.go:96] duration metric: took 3.859669611s to provisionDockerMachine
	I0916 11:11:28.852422  288908 client.go:171] duration metric: took 9.762061004s to LocalClient.Create
	I0916 11:11:28.852442  288908 start.go:167] duration metric: took 9.762135091s to libmachine.API.Create "embed-certs-679624"
	I0916 11:11:28.852450  288908 start.go:293] postStartSetup for "embed-certs-679624" (driver="docker")
	I0916 11:11:28.852458  288908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:11:28.852498  288908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:11:28.852531  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:28.870309  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:28.965110  288908 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:11:28.968523  288908 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:11:28.968563  288908 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:11:28.968575  288908 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:11:28.968583  288908 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:11:28.968596  288908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:11:28.968713  288908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:11:28.968785  288908 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:11:28.968871  288908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:11:28.977835  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:11:29.001876  288908 start.go:296] duration metric: took 149.414216ms for postStartSetup
	I0916 11:11:29.002250  288908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-679624
	I0916 11:11:29.019869  288908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/config.json ...
	I0916 11:11:29.020153  288908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:11:29.020205  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:29.038049  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:29.128967  288908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:11:29.133547  288908 start.go:128] duration metric: took 10.046188671s to createHost
	I0916 11:11:29.133576  288908 start.go:83] releasing machines lock for "embed-certs-679624", held for 10.046377271s
	I0916 11:11:29.133643  288908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-679624
	I0916 11:11:29.152662  288908 ssh_runner.go:195] Run: cat /version.json
	I0916 11:11:29.152692  288908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:11:29.152722  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:29.152762  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:29.171183  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:29.171187  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:29.263485  288908 ssh_runner.go:195] Run: systemctl --version
	I0916 11:11:29.342939  288908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:11:29.347342  288908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:11:29.371959  288908 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:11:29.372033  288908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:11:29.398988  288908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
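The two find/sed passes above first patch the loopback CNI config in place and then park any bridge/podman configs under a .mk_disabled suffix. A sketch to confirm the result, assuming the same node container:

	docker exec embed-certs-679624 ls /etc/cni/net.d
	# per the log, expect the patched *loopback.conf* plus
	# 87-podman-bridge.conflist.mk_disabled and 100-crio-bridge.conf.mk_disabled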
	I0916 11:11:29.399013  288908 start.go:495] detecting cgroup driver to use...
	I0916 11:11:29.399046  288908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:11:29.399095  288908 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:11:29.410609  288908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:11:29.422113  288908 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:11:29.422178  288908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:11:29.436056  288908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:11:29.449916  288908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:11:29.528110  288908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:11:29.607390  288908 docker.go:233] disabling docker service ...
	I0916 11:11:29.607457  288908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:11:29.625383  288908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:11:29.637734  288908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:11:29.715467  288908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:11:29.796841  288908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:11:29.807894  288908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:11:29.824334  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:11:29.834092  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:11:29.845179  288908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:11:29.845243  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:11:29.854840  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:11:29.864202  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:11:29.873608  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:11:29.883253  288908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:11:29.892391  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:11:29.901723  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:11:29.910902  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 11:11:29.920511  288908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:11:29.928496  288908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:11:29.937029  288908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:11:30.021638  288908 ssh_runner.go:195] Run: sudo systemctl restart containerd
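The sed runs above rewrite /etc/containerd/config.toml (pause image, cgroup driver, CNI conf_dir, unprivileged ports) before containerd is restarted. A sketch to spot-check the edits, assuming the node container:

	docker exec embed-certs-679624 grep -E \
	  'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' \
	  /etc/containerd/config.toml
	# per the log: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.10",
	# conf_dir = "/etc/cni/net.d", enable_unprivileged_ports = true
	docker exec embed-certs-679624 systemctl is-active containerd   # expect "active"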
	I0916 11:11:30.130291  288908 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:11:30.130362  288908 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:11:30.134196  288908 start.go:563] Will wait 60s for crictl version
	I0916 11:11:30.134260  288908 ssh_runner.go:195] Run: which crictl
	I0916 11:11:30.137609  288908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:11:30.170590  288908 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:11:30.170645  288908 ssh_runner.go:195] Run: containerd --version
	I0916 11:11:30.192976  288908 ssh_runner.go:195] Run: containerd --version
	I0916 11:11:30.217368  288908 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:11:27.257831  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:29.759232  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:30.218805  288908 cli_runner.go:164] Run: docker network inspect embed-certs-679624 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:11:30.236609  288908 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0916 11:11:30.240710  288908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:11:30.251608  288908 kubeadm.go:883] updating cluster {Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:11:30.251732  288908 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:11:30.251856  288908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:11:30.289360  288908 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:11:30.289390  288908 containerd.go:534] Images already preloaded, skipping extraction
	I0916 11:11:30.289443  288908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:11:30.322306  288908 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:11:30.322325  288908 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:11:30.322332  288908 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I0916 11:11:30.322410  288908 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-679624 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
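The unit fragment above is written to the node a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch for inspecting the merged unit with systemd's own tooling, assuming the node container:

	docker exec embed-certs-679624 systemctl cat kubelet
	# prints /lib/systemd/system/kubelet.service plus the 10-kubeadm.conf drop-in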
	I0916 11:11:30.322458  288908 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:11:30.357287  288908 cni.go:84] Creating CNI manager for ""
	I0916 11:11:30.357313  288908 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:11:30.357328  288908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:11:30.357356  288908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-679624 NodeName:embed-certs-679624 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:11:30.357533  288908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-679624"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
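A config like the one rendered above can be exercised without mutating the node: kubeadm init accepts --dry-run, and releases that include it also ship kubeadm config validate. A sketch, run inside the node:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	# or, on kubeadm releases that provide the subcommand:
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml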
	I0916 11:11:30.357614  288908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:11:30.366434  288908 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:11:30.366500  288908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:11:30.375187  288908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0916 11:11:30.392300  288908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:11:30.410224  288908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0916 11:11:30.430159  288908 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:11:30.433926  288908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:11:30.444984  288908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:11:30.528873  288908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:11:30.543894  288908 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624 for IP: 192.168.85.2
	I0916 11:11:30.543916  288908 certs.go:194] generating shared ca certs ...
	I0916 11:11:30.543936  288908 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:30.544125  288908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:11:30.544187  288908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:11:30.544201  288908 certs.go:256] generating profile certs ...
	I0916 11:11:30.544273  288908 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.key
	I0916 11:11:30.544301  288908 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.crt with IP's: []
	I0916 11:11:30.788131  288908 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.crt ...
	I0916 11:11:30.788166  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.crt: {Name:mk02095d3afb4fad8c6d28e1f88b13ba36a9f6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:30.788368  288908 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.key ...
	I0916 11:11:30.788382  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.key: {Name:mk6908273136c2132f294f84c2cf9245d566117f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:30.788485  288908 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key.67e55e90
	I0916 11:11:30.788507  288908 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt.67e55e90 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0916 11:11:30.999277  288908 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt.67e55e90 ...
	I0916 11:11:30.999316  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt.67e55e90: {Name:mk955ebd562252fd3d65acb6c2e198ab5e903fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:30.999516  288908 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key.67e55e90 ...
	I0916 11:11:30.999535  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key.67e55e90: {Name:mkc82f26c1c509a023699ea12765ff496bced47f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:30.999625  288908 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt.67e55e90 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt
	I0916 11:11:30.999750  288908 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key.67e55e90 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key
	I0916 11:11:30.999843  288908 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.key
	I0916 11:11:30.999865  288908 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.crt with IP's: []
	I0916 11:11:31.288838  288908 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.crt ...
	I0916 11:11:31.288945  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.crt: {Name:mk8bd14445a9da8b563b4c4456dcb6ef5aa0023c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:31.289235  288908 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.key ...
	I0916 11:11:31.289294  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.key: {Name:mk97c2379e3649b3d274265134c4b6a81c84d628 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:31.289625  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:11:31.289722  288908 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:11:31.289752  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:11:31.289809  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:11:31.289858  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:11:31.289915  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:11:31.289997  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:11:31.290950  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:11:31.317053  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:11:31.344299  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:11:31.373008  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:11:31.399445  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0916 11:11:31.425552  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:11:31.452299  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:11:31.480024  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:11:31.507034  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:11:31.533755  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:11:31.560944  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:11:31.588146  288908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
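With the certificates now copied into /var/lib/minikube/certs, the SANs generated earlier ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]) can be read back with openssl. A sketch, assuming the node container:

	docker exec embed-certs-679624 openssl x509 \
	  -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'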
	I0916 11:11:31.607340  288908 ssh_runner.go:195] Run: openssl version
	I0916 11:11:31.613749  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:11:31.623827  288908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:11:31.628105  288908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:11:31.628170  288908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:11:31.636053  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:11:31.646541  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:11:31.657059  288908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:11:31.661092  288908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:11:31.661152  288908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:11:31.668468  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:11:31.678986  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:11:31.688721  288908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:11:31.692740  288908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:11:31.692806  288908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:11:31.700158  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
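The three openssl/ln pairs above implement OpenSSL's subject-hash lookup: each CA dropped into /usr/share/ca-certificates gets a <hash>.0 symlink in /etc/ssl/certs. The same pattern as a standalone sketch (run inside the node):

	pem=/usr/share/ca-certificates/minikubeCA.pem   # any of the three certs above
	h=$(openssl x509 -hash -noout -in "$pem")       # e.g. b5213941, matching the log
	sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"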
	I0916 11:11:31.710466  288908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:11:31.714043  288908 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:11:31.714124  288908 kubeadm.go:392] StartCluster: {Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:11:31.714222  288908 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:11:31.714261  288908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:11:31.756398  288908 cri.go:89] found id: ""
	I0916 11:11:31.756465  288908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:11:31.766605  288908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:11:31.777090  288908 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:11:31.777143  288908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:11:31.787168  288908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:11:31.787188  288908 kubeadm.go:157] found existing configuration files:
	
	I0916 11:11:31.787251  288908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:11:31.796664  288908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:11:31.796730  288908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:11:31.806726  288908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:11:31.816111  288908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:11:31.816165  288908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:11:31.825102  288908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:11:31.834700  288908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:11:31.834757  288908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:11:31.845052  288908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:11:31.854270  288908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:11:31.854344  288908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:11:31.864084  288908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:11:31.911207  288908 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:11:31.911280  288908 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:11:31.929566  288908 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:11:31.929629  288908 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:11:31.929721  288908 kubeadm.go:310] OS: Linux
	I0916 11:11:31.929795  288908 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:11:31.929868  288908 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:11:31.929930  288908 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:11:31.929999  288908 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:11:31.930043  288908 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:11:31.930089  288908 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:11:31.930127  288908 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:11:31.930168  288908 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:11:31.930207  288908 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:11:32.003661  288908 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:11:32.003913  288908 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:11:32.004027  288908 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:11:32.009787  288908 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
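The CGROUPS_* rows in the preflight output above are read from the kernel's controller table; on this host they can be reproduced directly. A sketch:

	cat /proc/cgroups
	# columns: subsys_name hierarchy num_cgroups enabled - cpu, cpuset, memory,
	# devices, freezer, pids, hugetlb, blkio should all report enabled=1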
	I0916 11:11:27.745904  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:27.745938  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:27.786487  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:27.786512  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:27.843816  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:27.843853  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:30.387079  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:30.387476  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:30.387543  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:30.387611  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:30.423116  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:30.423146  254463 cri.go:89] found id: ""
	I0916 11:11:30.423157  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:30.423209  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:30.427346  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:30.427415  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:30.464033  254463 cri.go:89] found id: ""
	I0916 11:11:30.464064  254463 logs.go:276] 0 containers: []
	W0916 11:11:30.464076  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:30.464084  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:30.464149  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:30.506628  254463 cri.go:89] found id: ""
	I0916 11:11:30.506660  254463 logs.go:276] 0 containers: []
	W0916 11:11:30.506673  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:30.506682  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:30.506741  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:30.541832  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:30.541860  254463 cri.go:89] found id: ""
	I0916 11:11:30.541874  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:30.541932  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:30.546020  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:30.546090  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:30.586076  254463 cri.go:89] found id: ""
	I0916 11:11:30.586101  254463 logs.go:276] 0 containers: []
	W0916 11:11:30.586111  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:30.586118  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:30.586175  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:30.627319  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:30.627343  254463 cri.go:89] found id: ""
	I0916 11:11:30.627352  254463 logs.go:276] 1 containers: [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:30.627404  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:30.630804  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:30.630871  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:30.672322  254463 cri.go:89] found id: ""
	I0916 11:11:30.672349  254463 logs.go:276] 0 containers: []
	W0916 11:11:30.672360  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:30.672368  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:30.672427  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:30.711423  254463 cri.go:89] found id: ""
	I0916 11:11:30.711445  254463 logs.go:276] 0 containers: []
	W0916 11:11:30.711453  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:30.711461  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:30.711473  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:30.787457  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:30.787499  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:30.825566  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:30.825596  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:30.873424  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:30.873458  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:30.912596  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:30.912622  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:31.041509  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:31.041554  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:31.069628  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:31.069671  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:31.147283  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:31.147317  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:31.147333  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:30.239104  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:32.739847  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:32.012718  288908 out.go:235]   - Generating certificates and keys ...
	I0916 11:11:32.012811  288908 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:11:32.012866  288908 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:11:32.274323  288908 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:11:32.645738  288908 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:11:32.802923  288908 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:11:32.869257  288908 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:11:33.074216  288908 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:11:33.074453  288908 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-679624 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0916 11:11:33.198709  288908 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:11:33.198917  288908 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-679624 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0916 11:11:33.288526  288908 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:11:33.371633  288908 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:11:33.467662  288908 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:11:33.467854  288908 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:11:33.610889  288908 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:11:33.928327  288908 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:11:34.209629  288908 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:11:34.318731  288908 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:11:34.497638  288908 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:11:34.498358  288908 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:11:34.501042  288908 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:11:32.258180  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:34.258663  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:36.258970  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:33.692712  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:33.693191  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
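	The probe pattern above ("Checking apiserver healthz ..." followed by "stopped: ... connection refused") repeats throughout this log while the apiserver container is down. Below is a minimal editor-added sketch of such a probe, not minikube's api_server.go; the retry count and interval are assumptions, and TLS verification is skipped on the assumption that the bootstrap apiserver serves a self-signed certificate.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Bootstrap apiservers present a cluster-local cert, so the
			// probe skips verification (an assumption for this sketch).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 2 * time.Second,
	}
	for attempt := 0; attempt < 10; attempt++ {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			// While the apiserver is down this prints the same
			// "connection refused" the stopped: lines above record.
			fmt.Println("stopped:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
}
```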
	I0916 11:11:33.693260  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:33.693318  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:33.729008  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:33.729033  254463 cri.go:89] found id: ""
	I0916 11:11:33.729043  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:33.729109  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:33.733530  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:33.733664  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:33.781978  254463 cri.go:89] found id: ""
	I0916 11:11:33.782012  254463 logs.go:276] 0 containers: []
	W0916 11:11:33.782023  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:33.782031  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:33.782097  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:33.834507  254463 cri.go:89] found id: ""
	I0916 11:11:33.834606  254463 logs.go:276] 0 containers: []
	W0916 11:11:33.834635  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:33.834670  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:33.834747  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:33.871434  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:33.871453  254463 cri.go:89] found id: ""
	I0916 11:11:33.871460  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:33.871506  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:33.876069  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:33.876139  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:33.939474  254463 cri.go:89] found id: ""
	I0916 11:11:33.939507  254463 logs.go:276] 0 containers: []
	W0916 11:11:33.939518  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:33.939525  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:33.939579  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:33.980476  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:33.980501  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:33.980507  254463 cri.go:89] found id: ""
	I0916 11:11:33.980514  254463 logs.go:276] 2 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:33.980577  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:33.984110  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:33.987346  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:33.987409  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:34.040605  254463 cri.go:89] found id: ""
	I0916 11:11:34.040633  254463 logs.go:276] 0 containers: []
	W0916 11:11:34.040644  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:34.040655  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:34.040719  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:34.077726  254463 cri.go:89] found id: ""
	I0916 11:11:34.077754  254463 logs.go:276] 0 containers: []
	W0916 11:11:34.077765  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:34.077783  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:34.077799  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:34.170123  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:34.170148  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:34.170162  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:34.230253  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:34.230291  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:34.271506  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:34.271533  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:34.327836  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:34.327865  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:34.448242  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:34.448278  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:34.471341  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:34.471385  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:34.521420  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:34.521454  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:34.601090  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:34.601130  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
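	Each log-gathering cycle above follows the same shape: list container IDs with `sudo crictl ps -a --quiet --name=<component>` (one ID per output line, as the "found id:" lines show), then fetch the last 400 lines of each container's log with `crictl logs --tail 400 <id>`. The sketch below is an editor-added local reconstruction of that pattern; minikube actually runs these commands over SSH via ssh_runner.go, and the availability of sudo/crictl here is an assumption.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or not) whose name matches,
// mirroring `sudo crictl ps -a --quiet --name=<name>` in the log.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line) // each non-empty line is one container ID
		}
	}
	return ids, nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range ids {
		// Same tail length the log's gathering commands use.
		logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("=== %s ===\n%s", id, logs)
	}
}
```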
	I0916 11:11:37.138930  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:37.139314  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:37.139360  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:37.139403  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:37.180304  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:37.180327  254463 cri.go:89] found id: ""
	I0916 11:11:37.180335  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:37.180393  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:37.184635  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:37.184700  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:37.218889  254463 cri.go:89] found id: ""
	I0916 11:11:37.218917  254463 logs.go:276] 0 containers: []
	W0916 11:11:37.218928  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:37.218936  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:37.218992  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:37.256844  254463 cri.go:89] found id: ""
	I0916 11:11:37.256871  254463 logs.go:276] 0 containers: []
	W0916 11:11:37.256881  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:37.256888  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:37.256946  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:37.297431  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:37.297456  254463 cri.go:89] found id: ""
	I0916 11:11:37.297466  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:37.297526  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:37.301491  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:37.301548  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:37.337632  254463 cri.go:89] found id: ""
	I0916 11:11:37.337660  254463 logs.go:276] 0 containers: []
	W0916 11:11:37.337671  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:37.337682  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:37.337738  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:37.376904  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:37.376933  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:37.376939  254463 cri.go:89] found id: ""
	I0916 11:11:37.376950  254463 logs.go:276] 2 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:37.377006  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:37.380947  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:37.384225  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:37.384278  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:37.419944  254463 cri.go:89] found id: ""
	I0916 11:11:37.419974  254463 logs.go:276] 0 containers: []
	W0916 11:11:37.419985  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:37.419994  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:37.420047  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:37.454586  254463 cri.go:89] found id: ""
	I0916 11:11:37.454615  254463 logs.go:276] 0 containers: []
	W0916 11:11:37.454635  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:37.454651  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:37.454670  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:37.501786  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:37.501815  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:37.611024  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:37.611066  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:37.675810  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:37.675834  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:37.675858  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:35.238935  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:37.737929  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:34.503090  288908 out.go:235]   - Booting up control plane ...
	I0916 11:11:34.503204  288908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:11:34.503307  288908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:11:34.503428  288908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:11:34.512767  288908 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:11:34.518364  288908 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:11:34.518434  288908 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:11:34.609756  288908 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:11:34.609882  288908 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:11:35.111264  288908 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.674049ms
	I0916 11:11:35.111379  288908 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:11:40.113566  288908 kubeadm.go:310] [api-check] The API server is healthy after 5.002308876s
	I0916 11:11:40.124445  288908 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:11:40.136433  288908 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:11:40.158632  288908 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:11:40.158882  288908 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-679624 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:11:40.166356  288908 kubeadm.go:310] [bootstrap-token] Using token: 84spig.4y8nxn4hci96swit
	I0916 11:11:40.168019  288908 out.go:235]   - Configuring RBAC rules ...
	I0916 11:11:40.168133  288908 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:11:40.171476  288908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:11:40.177632  288908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:11:40.180530  288908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:11:40.183240  288908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:11:40.187632  288908 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:11:40.520291  288908 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:11:40.953108  288908 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:11:41.520171  288908 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:11:41.520855  288908 kubeadm.go:310] 
	I0916 11:11:41.520935  288908 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:11:41.520944  288908 kubeadm.go:310] 
	I0916 11:11:41.521009  288908 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:11:41.521016  288908 kubeadm.go:310] 
	I0916 11:11:41.521036  288908 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:11:41.521083  288908 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:11:41.521124  288908 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:11:41.521130  288908 kubeadm.go:310] 
	I0916 11:11:41.521171  288908 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:11:41.521176  288908 kubeadm.go:310] 
	I0916 11:11:41.521214  288908 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:11:41.521219  288908 kubeadm.go:310] 
	I0916 11:11:41.521259  288908 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:11:41.521324  288908 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:11:41.521379  288908 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:11:41.521386  288908 kubeadm.go:310] 
	I0916 11:11:41.521450  288908 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:11:41.521511  288908 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:11:41.521517  288908 kubeadm.go:310] 
	I0916 11:11:41.521582  288908 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 84spig.4y8nxn4hci96swit \
	I0916 11:11:41.521679  288908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:11:41.521701  288908 kubeadm.go:310] 	--control-plane 
	I0916 11:11:41.521705  288908 kubeadm.go:310] 
	I0916 11:11:41.521785  288908 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:11:41.521793  288908 kubeadm.go:310] 
	I0916 11:11:41.521875  288908 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 84spig.4y8nxn4hci96swit \
	I0916 11:11:41.521955  288908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 11:11:41.524979  288908 kubeadm.go:310] W0916 11:11:31.907821    1135 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:11:41.525354  288908 kubeadm.go:310] W0916 11:11:31.908743    1135 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:11:41.525562  288908 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:11:41.525672  288908 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
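	The join commands above carry a --discovery-token-ca-cert-hash. Per kubeadm's documented format, that value is "sha256:" followed by the hex SHA-256 of the CA certificate's Subject Public Key Info (SPKI, as in RFC 7469), so it can be recomputed from the cluster CA. A sketch, assuming the conventional /etc/kubernetes/pki/ca.crt location on a control-plane node:

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		fmt.Println("read ca.crt:", err)
		return
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse cert:", err)
		return
	}
	// RawSubjectPublicKeyInfo is the DER-encoded SPKI the hash covers.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
```

	Run against this node's CA, the output should match the sha256:98a702be... value in the logged join command.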
	I0916 11:11:41.525704  288908 cni.go:84] Creating CNI manager for ""
	I0916 11:11:41.525719  288908 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:11:41.527698  288908 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:11:38.757671  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:40.761143  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:37.758822  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:37.758871  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:37.797236  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:37.797263  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:37.842272  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:37.842314  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:37.892228  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:37.892268  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:37.913264  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:37.913303  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:40.469419  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:40.469842  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:40.469897  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:40.469972  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:40.504839  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:40.504859  254463 cri.go:89] found id: ""
	I0916 11:11:40.504867  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:40.504910  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:40.509056  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:40.509144  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:40.544727  254463 cri.go:89] found id: ""
	I0916 11:11:40.544754  254463 logs.go:276] 0 containers: []
	W0916 11:11:40.544764  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:40.544769  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:40.544824  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:40.585143  254463 cri.go:89] found id: ""
	I0916 11:11:40.585177  254463 logs.go:276] 0 containers: []
	W0916 11:11:40.585188  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:40.585197  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:40.585253  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:40.618406  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:40.618433  254463 cri.go:89] found id: ""
	I0916 11:11:40.618442  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:40.618497  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:40.622183  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:40.622241  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:40.654226  254463 cri.go:89] found id: ""
	I0916 11:11:40.654257  254463 logs.go:276] 0 containers: []
	W0916 11:11:40.654270  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:40.654278  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:40.654338  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:40.704703  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:40.704731  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:40.704737  254463 cri.go:89] found id: ""
	I0916 11:11:40.704747  254463 logs.go:276] 2 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:40.704804  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:40.709695  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:40.714182  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:40.714283  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:40.769401  254463 cri.go:89] found id: ""
	I0916 11:11:40.769432  254463 logs.go:276] 0 containers: []
	W0916 11:11:40.769443  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:40.769450  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:40.769508  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:40.814114  254463 cri.go:89] found id: ""
	I0916 11:11:40.814180  254463 logs.go:276] 0 containers: []
	W0916 11:11:40.814203  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:40.814224  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:40.814242  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:40.923888  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:40.923942  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:40.954712  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:40.954756  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:41.019515  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:41.019535  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:41.019547  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:41.091866  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:41.091908  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:41.126670  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:41.126702  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:41.165890  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:41.165924  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:41.203538  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:41.203568  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:41.241297  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:41.241325  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:39.738817  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:41.739547  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:41.528971  288908 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:11:41.532973  288908 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:11:41.532990  288908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:11:41.550641  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:11:41.759420  288908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:11:41.759500  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:41.759538  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-679624 minikube.k8s.io/updated_at=2024_09_16T11_11_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=embed-certs-679624 minikube.k8s.io/primary=true
	I0916 11:11:41.843186  288908 ops.go:34] apiserver oom_adj: -16
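	The "apiserver oom_adj: -16" reading above comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command a few lines earlier; control-plane processes get a negative adjustment so the kernel's OOM killer prefers other victims. A small editor-added sketch of the same probe in Go (oom_adj is the legacy procfs knob; modern kernels also expose oom_score_adj):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("pgrep:", err) // no running apiserver, or pgrep missing
		return
	}
	pids := strings.Fields(string(out))
	if len(pids) == 0 {
		fmt.Println("no kube-apiserver process found")
		return
	}
	// Read the legacy OOM adjustment for the first matching PID.
	val, err := os.ReadFile("/proc/" + pids[0] + "/oom_adj")
	if err != nil {
		fmt.Println("read oom_adj:", err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(val)))
}
```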
	I0916 11:11:41.843192  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:42.344100  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:42.843846  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:43.343804  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:43.843597  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:44.344103  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:44.843919  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:45.344112  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:45.843558  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:45.931329  288908 kubeadm.go:1113] duration metric: took 4.171896183s to wait for elevateKubeSystemPrivileges
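	The repeated `get sa default` calls above are a readiness poll: the step completes once the "default" ServiceAccount exists, which is why the loop runs on a roughly 500ms cadence until elevateKubeSystemPrivileges reports its duration. The same wait expressed with client-go instead of shelling out to kubectl (an editor-added sketch; the kubeconfig path mirrors the one in the log):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		// `kubectl get sa default` with no -n flag targets the default
		// namespace, so this polls the same object.
		_, err := cs.CoreV1().ServiceAccounts("default").Get(
			context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}
```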
	I0916 11:11:45.931371  288908 kubeadm.go:394] duration metric: took 14.217250544s to StartCluster
	I0916 11:11:45.931395  288908 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:45.931468  288908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:11:45.933917  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:45.934189  288908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:11:45.934349  288908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:11:45.934378  288908 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:11:45.934476  288908 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-679624"
	I0916 11:11:45.934514  288908 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-679624"
	I0916 11:11:45.934555  288908 host.go:66] Checking if "embed-certs-679624" exists ...
	I0916 11:11:45.934544  288908 addons.go:69] Setting default-storageclass=true in profile "embed-certs-679624"
	I0916 11:11:45.934561  288908 config.go:182] Loaded profile config "embed-certs-679624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:11:45.934573  288908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-679624"
	I0916 11:11:45.935002  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:45.935187  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:45.936273  288908 out.go:177] * Verifying Kubernetes components...
	I0916 11:11:45.937809  288908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:11:45.969287  288908 addons.go:234] Setting addon default-storageclass=true in "embed-certs-679624"
	I0916 11:11:45.969351  288908 host.go:66] Checking if "embed-certs-679624" exists ...
	I0916 11:11:45.969852  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:45.974500  288908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:11:43.257133  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:45.258494  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:45.975949  288908 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:11:45.975972  288908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:11:45.976045  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:45.990299  288908 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:11:45.990325  288908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:11:45.990383  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:45.994530  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:46.007683  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:46.233531  288908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:11:46.234917  288908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:11:46.241775  288908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:11:46.249620  288908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:11:46.762554  288908 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
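	The long sed pipeline at 11:11:46.233 rewrites the CoreDNS Corefile in place: it inserts a hosts block mapping host.minikube.internal to the host gateway (192.168.85.1) ahead of the `forward . /etc/resolv.conf` plugin, then replaces the ConfigMap, which the "host record injected" line above confirms. A string-level sketch of that rewrite (editor-added; fetching and replacing the ConfigMap itself is omitted):

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block just before CoreDNS's forward
// plugin, mirroring what the sed expression in the log does.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := "        hosts {\n" +
		"           " + hostIP + " host.minikube.internal\n" +
		"           fallthrough\n" +
		"        }\n"
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock) // insert ahead of the forward plugin
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.85.1"))
}
```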
	I0916 11:11:46.764311  288908 node_ready.go:35] waiting up to 6m0s for node "embed-certs-679624" to be "Ready" ...
	I0916 11:11:46.821592  288908 node_ready.go:49] node "embed-certs-679624" has status "Ready":"True"
	I0916 11:11:46.821625  288908 node_ready.go:38] duration metric: took 57.288494ms for node "embed-certs-679624" to be "Ready" ...
	I0916 11:11:46.821637  288908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:11:46.831195  288908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace to be "Ready" ...
	I0916 11:11:47.181058  288908 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
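	The pod_ready lines throughout this log boil down to one check: whether the pod's PodReady condition reports True. A client-go sketch of that check (editor-added; minikube uses its own pod_ready.go helpers, and the pod name below is simply the one from the log):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True, the check
// behind the `has status "Ready":"False"` lines in this log.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(cs, "kube-system", "coredns-7c65d6cfc9-dmv6t")
	fmt.Println("Ready:", ready, "err:", err)
}
```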
	I0916 11:11:43.787247  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:43.787686  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:43.787788  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:43.787845  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:43.820358  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:43.820379  254463 cri.go:89] found id: ""
	I0916 11:11:43.820386  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:43.820429  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:43.823977  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:43.824036  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:43.858303  254463 cri.go:89] found id: ""
	I0916 11:11:43.858331  254463 logs.go:276] 0 containers: []
	W0916 11:11:43.858342  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:43.858350  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:43.858410  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:43.896708  254463 cri.go:89] found id: ""
	I0916 11:11:43.896738  254463 logs.go:276] 0 containers: []
	W0916 11:11:43.896750  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:43.896758  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:43.896818  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:43.930745  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:43.930785  254463 cri.go:89] found id: ""
	I0916 11:11:43.930794  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:43.930857  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:43.934261  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:43.934324  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:43.967505  254463 cri.go:89] found id: ""
	I0916 11:11:43.967532  254463 logs.go:276] 0 containers: []
	W0916 11:11:43.967542  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:43.967549  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:43.967609  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:44.001802  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:44.001822  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:44.001826  254463 cri.go:89] found id: ""
	I0916 11:11:44.001833  254463 logs.go:276] 2 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:44.001877  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:44.005500  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:44.008954  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:44.009028  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:44.042735  254463 cri.go:89] found id: ""
	I0916 11:11:44.042758  254463 logs.go:276] 0 containers: []
	W0916 11:11:44.042766  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:44.042771  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:44.042825  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:44.076718  254463 cri.go:89] found id: ""
	I0916 11:11:44.076741  254463 logs.go:276] 0 containers: []
	W0916 11:11:44.076749  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:44.076760  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:44.076770  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:44.124987  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:44.125027  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:44.197752  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:44.197791  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:44.231307  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:44.231335  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:44.292499  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:44.292527  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:44.292542  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:44.328765  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:44.328796  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:44.366047  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:44.366073  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:44.403288  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:44.403313  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:44.498895  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:44.498933  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:47.021378  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:47.021855  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:47.021915  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:47.021977  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:47.074174  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:47.074248  254463 cri.go:89] found id: ""
	I0916 11:11:47.074262  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:47.074560  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:47.078609  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:47.078682  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:47.111355  254463 cri.go:89] found id: ""
	I0916 11:11:47.111380  254463 logs.go:276] 0 containers: []
	W0916 11:11:47.111388  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:47.111396  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:47.111446  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:47.154273  254463 cri.go:89] found id: ""
	I0916 11:11:47.154301  254463 logs.go:276] 0 containers: []
	W0916 11:11:47.154313  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:47.154321  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:47.154380  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:47.196698  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:47.196719  254463 cri.go:89] found id: ""
	I0916 11:11:47.196728  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:47.196793  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:47.200205  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:47.200282  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:47.239306  254463 cri.go:89] found id: ""
	I0916 11:11:47.239328  254463 logs.go:276] 0 containers: []
	W0916 11:11:47.239336  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:47.239341  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:47.239388  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:47.275473  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:47.275494  254463 cri.go:89] found id: ""
	I0916 11:11:47.275501  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:11:47.275547  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:47.279217  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:47.279271  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:47.312601  254463 cri.go:89] found id: ""
	I0916 11:11:47.312630  254463 logs.go:276] 0 containers: []
	W0916 11:11:47.312643  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:47.312651  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:47.312703  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:47.351786  254463 cri.go:89] found id: ""
	I0916 11:11:47.351818  254463 logs.go:276] 0 containers: []
	W0916 11:11:47.351830  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:47.351841  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:47.351856  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:47.388358  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:47.388390  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:47.458891  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:47.458925  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:47.495067  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:47.495095  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:47.556395  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:47.556436  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:47.606059  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:47.606089  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:44.237845  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:46.240764  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:48.737615  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:47.182277  288908 addons.go:510] duration metric: took 1.24791353s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:11:47.267907  288908 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-679624" context rescaled to 1 replicas
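	"rescaled to 1 replicas" above means the coredns Deployment's scale subresource was set to a single replica so the single-node cluster runs one DNS pod. One way to reproduce that step with client-go's scale API (an editor-added sketch, not minikube's kapi.go):

```go
package main

import (
	"context"
	"fmt"

	autoscalingv1 "k8s.io/api/autoscaling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Write the Deployment's scale subresource rather than patching the
	// Deployment spec directly.
	scale := &autoscalingv1.Scale{
		ObjectMeta: metav1.ObjectMeta{Name: "coredns", Namespace: "kube-system"},
		Spec:       autoscalingv1.ScaleSpec{Replicas: 1},
	}
	if _, err := cs.AppsV1().Deployments("kube-system").
		UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println(`"coredns" deployment rescaled to 1 replica`)
}
```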
	I0916 11:11:48.836602  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:47.757335  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:49.757395  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:47.703200  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:47.703236  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:47.724642  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:47.724684  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:47.783498  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:50.283928  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:50.284374  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:50.284423  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:50.284474  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:50.316834  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:50.316862  254463 cri.go:89] found id: ""
	I0916 11:11:50.316873  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:50.316935  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:50.320355  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:50.320432  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:50.352376  254463 cri.go:89] found id: ""
	I0916 11:11:50.352396  254463 logs.go:276] 0 containers: []
	W0916 11:11:50.352405  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:50.352412  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:50.352472  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:50.387428  254463 cri.go:89] found id: ""
	I0916 11:11:50.387468  254463 logs.go:276] 0 containers: []
	W0916 11:11:50.387479  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:50.387487  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:50.387537  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:50.420454  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:50.420473  254463 cri.go:89] found id: ""
	I0916 11:11:50.420479  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:50.420521  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:50.423917  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:50.423975  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:50.458163  254463 cri.go:89] found id: ""
	I0916 11:11:50.458184  254463 logs.go:276] 0 containers: []
	W0916 11:11:50.458192  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:50.458199  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:50.458251  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:50.490942  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:50.490970  254463 cri.go:89] found id: ""
	I0916 11:11:50.490980  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:11:50.491034  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:50.494494  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:50.494557  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:50.525559  254463 cri.go:89] found id: ""
	I0916 11:11:50.525586  254463 logs.go:276] 0 containers: []
	W0916 11:11:50.525597  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:50.525605  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:50.525669  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:50.557477  254463 cri.go:89] found id: ""
	I0916 11:11:50.557499  254463 logs.go:276] 0 containers: []
	W0916 11:11:50.557507  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:50.557522  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:50.557534  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:50.604317  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:50.604355  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:50.641507  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:50.641536  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:50.730228  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:50.730266  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:50.756357  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:50.756403  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:50.815959  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:50.815992  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:50.816005  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:50.853332  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:50.853362  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:50.922239  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:50.922282  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:50.739091  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:53.238404  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:50.837082  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:53.337228  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:52.257690  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:54.758372  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:53.459773  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:53.460269  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:53.460322  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:53.460371  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:53.495261  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:53.495288  254463 cri.go:89] found id: ""
	I0916 11:11:53.495298  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:53.495359  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:53.499351  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:53.499415  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:53.532686  254463 cri.go:89] found id: ""
	I0916 11:11:53.532716  254463 logs.go:276] 0 containers: []
	W0916 11:11:53.532728  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:53.532736  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:53.532788  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:53.568013  254463 cri.go:89] found id: ""
	I0916 11:11:53.568043  254463 logs.go:276] 0 containers: []
	W0916 11:11:53.568054  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:53.568062  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:53.568117  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:53.601908  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:53.601931  254463 cri.go:89] found id: ""
	I0916 11:11:53.601938  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:53.601983  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:53.605669  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:53.605742  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:53.638394  254463 cri.go:89] found id: ""
	I0916 11:11:53.638420  254463 logs.go:276] 0 containers: []
	W0916 11:11:53.638428  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:53.638441  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:53.638484  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:53.670648  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:53.670669  254463 cri.go:89] found id: ""
	I0916 11:11:53.670678  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:11:53.670736  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:53.674142  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:53.674193  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:53.707669  254463 cri.go:89] found id: ""
	I0916 11:11:53.707698  254463 logs.go:276] 0 containers: []
	W0916 11:11:53.707708  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:53.707714  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:53.707825  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:53.742075  254463 cri.go:89] found id: ""
	I0916 11:11:53.742102  254463 logs.go:276] 0 containers: []
	W0916 11:11:53.742113  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:53.742125  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:53.742140  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:53.811381  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:53.811415  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:53.846858  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:53.846888  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:53.891595  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:53.891630  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:53.925443  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:53.925468  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:54.015424  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:54.015460  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:54.036290  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:54.036325  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:54.096466  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:54.096489  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:54.096503  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:56.631912  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:56.632364  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:56.632424  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:56.632484  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:56.665467  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:56.665487  254463 cri.go:89] found id: ""
	I0916 11:11:56.665494  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:56.665540  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:56.669053  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:56.669132  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:56.701684  254463 cri.go:89] found id: ""
	I0916 11:11:56.701710  254463 logs.go:276] 0 containers: []
	W0916 11:11:56.701721  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:56.701728  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:56.701790  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:56.737251  254463 cri.go:89] found id: ""
	I0916 11:11:56.737289  254463 logs.go:276] 0 containers: []
	W0916 11:11:56.737300  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:56.737309  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:56.737369  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:56.771303  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:56.771332  254463 cri.go:89] found id: ""
	I0916 11:11:56.771340  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:56.771382  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:56.774735  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:56.774801  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:56.807663  254463 cri.go:89] found id: ""
	I0916 11:11:56.807685  254463 logs.go:276] 0 containers: []
	W0916 11:11:56.807693  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:56.807698  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:56.807788  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:56.841120  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:56.841142  254463 cri.go:89] found id: ""
	I0916 11:11:56.841156  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:11:56.841200  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:56.844692  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:56.844748  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:56.877007  254463 cri.go:89] found id: ""
	I0916 11:11:56.877028  254463 logs.go:276] 0 containers: []
	W0916 11:11:56.877036  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:56.877041  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:56.877088  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:56.909108  254463 cri.go:89] found id: ""
	I0916 11:11:56.909136  254463 logs.go:276] 0 containers: []
	W0916 11:11:56.909147  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:56.909157  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:56.909168  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:56.955888  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:56.955935  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:56.993135  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:56.993180  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:57.082361  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:57.082402  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:57.103865  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:57.103902  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:57.164129  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:57.164146  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:57.164158  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:57.200538  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:57.200568  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:57.273343  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:57.273378  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:55.738690  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:58.238544  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:55.337472  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:57.838821  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:57.257474  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:59.756980  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:01.757290  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:59.806641  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:59.807071  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:59.807129  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:59.807189  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:59.841203  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:59.841230  254463 cri.go:89] found id: ""
	I0916 11:11:59.841242  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:59.841300  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:59.845256  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:59.845334  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:59.883444  254463 cri.go:89] found id: ""
	I0916 11:11:59.883480  254463 logs.go:276] 0 containers: []
	W0916 11:11:59.883489  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:59.883495  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:59.883555  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:59.917754  254463 cri.go:89] found id: ""
	I0916 11:11:59.917777  254463 logs.go:276] 0 containers: []
	W0916 11:11:59.917788  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:59.917795  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:59.917863  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:59.956094  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:59.956119  254463 cri.go:89] found id: ""
	I0916 11:11:59.956133  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:59.956190  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:59.959827  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:59.959913  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:59.999060  254463 cri.go:89] found id: ""
	I0916 11:11:59.999087  254463 logs.go:276] 0 containers: []
	W0916 11:11:59.999097  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:59.999105  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:59.999173  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:12:00.034193  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:12:00.034214  254463 cri.go:89] found id: ""
	I0916 11:12:00.034223  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:12:00.034285  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:12:00.037736  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:12:00.037798  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:12:00.070142  254463 cri.go:89] found id: ""
	I0916 11:12:00.070169  254463 logs.go:276] 0 containers: []
	W0916 11:12:00.070177  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:12:00.070183  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:12:00.070231  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:12:00.103691  254463 cri.go:89] found id: ""
	I0916 11:12:00.103716  254463 logs.go:276] 0 containers: []
	W0916 11:12:00.103724  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:12:00.103773  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:12:00.103790  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:12:00.137085  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:12:00.137111  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:12:00.185521  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:12:00.185555  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:12:00.221687  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:12:00.221717  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:12:00.313223  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:12:00.313269  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:12:00.337700  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:12:00.337742  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:12:00.396098  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:12:00.396119  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:12:00.396130  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:12:00.433027  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:12:00.433077  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:12:00.337500  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:01.337371  288908 pod_ready.go:93] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.337393  288908 pod_ready.go:82] duration metric: took 14.506166654s for pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.337404  288908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x4f6n" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.339056  288908 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-x4f6n" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-x4f6n" not found
	I0916 11:12:01.339081  288908 pod_ready.go:82] duration metric: took 1.668579ms for pod "coredns-7c65d6cfc9-x4f6n" in "kube-system" namespace to be "Ready" ...
	E0916 11:12:01.339093  288908 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-x4f6n" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-x4f6n" not found
	I0916 11:12:01.339102  288908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.342921  288908 pod_ready.go:93] pod "etcd-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.342938  288908 pod_ready.go:82] duration metric: took 3.82908ms for pod "etcd-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.342949  288908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.346533  288908 pod_ready.go:93] pod "kube-apiserver-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.346552  288908 pod_ready.go:82] duration metric: took 3.596798ms for pod "kube-apiserver-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.346560  288908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.350192  288908 pod_ready.go:93] pod "kube-controller-manager-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.350208  288908 pod_ready.go:82] duration metric: took 3.643463ms for pod "kube-controller-manager-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.350217  288908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bt6k2" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.535253  288908 pod_ready.go:93] pod "kube-proxy-bt6k2" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.535276  288908 pod_ready.go:82] duration metric: took 185.05015ms for pod "kube-proxy-bt6k2" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.535286  288908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.935780  288908 pod_ready.go:93] pod "kube-scheduler-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.935805  288908 pod_ready.go:82] duration metric: took 400.511614ms for pod "kube-scheduler-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.935814  288908 pod_ready.go:39] duration metric: took 15.114148588s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:12:01.935828  288908 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:12:01.935879  288908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:12:01.948406  288908 api_server.go:72] duration metric: took 16.014183768s to wait for apiserver process to appear ...
	I0916 11:12:01.948432  288908 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:12:01.948456  288908 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0916 11:12:01.952961  288908 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0916 11:12:01.954088  288908 api_server.go:141] control plane version: v1.31.1
	I0916 11:12:01.954120  288908 api_server.go:131] duration metric: took 5.681186ms to wait for apiserver health ...
	I0916 11:12:01.954129  288908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:12:02.138246  288908 system_pods.go:59] 8 kube-system pods found
	I0916 11:12:02.138282  288908 system_pods.go:61] "coredns-7c65d6cfc9-dmv6t" [95a9589e-1385-4fb0-8b68-fb26098daf01] Running
	I0916 11:12:02.138288  288908 system_pods.go:61] "etcd-embed-certs-679624" [b351fd38-6c30-4fa6-b5da-159580232c88] Running
	I0916 11:12:02.138294  288908 system_pods.go:61] "kindnet-78kp5" [795aad3a-f96f-4477-9bd5-49a233890f1e] Running
	I0916 11:12:02.138303  288908 system_pods.go:61] "kube-apiserver-embed-certs-679624" [858cb7c8-8e66-4952-bf89-228525cffafb] Running
	I0916 11:12:02.138309  288908 system_pods.go:61] "kube-controller-manager-embed-certs-679624" [f6aaf262-2f80-4875-bbbf-1b5918a23787] Running
	I0916 11:12:02.138314  288908 system_pods.go:61] "kube-proxy-bt6k2" [cae0a1e2-c041-4c6c-8772-978a2c544879] Running
	I0916 11:12:02.138320  288908 system_pods.go:61] "kube-scheduler-embed-certs-679624" [3eb8a130-09a9-4ae6-b12a-6dff3309cbae] Running
	I0916 11:12:02.138328  288908 system_pods.go:61] "storage-provisioner" [3b5477b8-ac39-4acc-9e16-a13a7b1d3e10] Running
	I0916 11:12:02.138334  288908 system_pods.go:74] duration metric: took 184.199914ms to wait for pod list to return data ...
	I0916 11:12:02.138346  288908 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:12:02.335554  288908 default_sa.go:45] found service account: "default"
	I0916 11:12:02.335581  288908 default_sa.go:55] duration metric: took 197.225628ms for default service account to be created ...
	I0916 11:12:02.335592  288908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:12:02.537944  288908 system_pods.go:86] 8 kube-system pods found
	I0916 11:12:02.537972  288908 system_pods.go:89] "coredns-7c65d6cfc9-dmv6t" [95a9589e-1385-4fb0-8b68-fb26098daf01] Running
	I0916 11:12:02.537977  288908 system_pods.go:89] "etcd-embed-certs-679624" [b351fd38-6c30-4fa6-b5da-159580232c88] Running
	I0916 11:12:02.537981  288908 system_pods.go:89] "kindnet-78kp5" [795aad3a-f96f-4477-9bd5-49a233890f1e] Running
	I0916 11:12:02.537985  288908 system_pods.go:89] "kube-apiserver-embed-certs-679624" [858cb7c8-8e66-4952-bf89-228525cffafb] Running
	I0916 11:12:02.537989  288908 system_pods.go:89] "kube-controller-manager-embed-certs-679624" [f6aaf262-2f80-4875-bbbf-1b5918a23787] Running
	I0916 11:12:02.537992  288908 system_pods.go:89] "kube-proxy-bt6k2" [cae0a1e2-c041-4c6c-8772-978a2c544879] Running
	I0916 11:12:02.537995  288908 system_pods.go:89] "kube-scheduler-embed-certs-679624" [3eb8a130-09a9-4ae6-b12a-6dff3309cbae] Running
	I0916 11:12:02.538000  288908 system_pods.go:89] "storage-provisioner" [3b5477b8-ac39-4acc-9e16-a13a7b1d3e10] Running
	I0916 11:12:02.538009  288908 system_pods.go:126] duration metric: took 202.410695ms to wait for k8s-apps to be running ...
	I0916 11:12:02.538017  288908 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:12:02.538066  288908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:12:02.549283  288908 system_svc.go:56] duration metric: took 11.252338ms WaitForService to wait for kubelet
	I0916 11:12:02.549315  288908 kubeadm.go:582] duration metric: took 16.615095592s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:12:02.549372  288908 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:12:02.736116  288908 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:12:02.736146  288908 node_conditions.go:123] node cpu capacity is 8
	I0916 11:12:02.736168  288908 node_conditions.go:105] duration metric: took 186.790688ms to run NodePressure ...
	I0916 11:12:02.736182  288908 start.go:241] waiting for startup goroutines ...
	I0916 11:12:02.736190  288908 start.go:246] waiting for cluster config update ...
	I0916 11:12:02.736206  288908 start.go:255] writing updated cluster config ...
	I0916 11:12:02.736490  288908 ssh_runner.go:195] Run: rm -f paused
	I0916 11:12:02.743407  288908 out.go:177] * Done! kubectl is now configured to use "embed-certs-679624" cluster and "default" namespace by default
	E0916 11:12:02.744289  288908 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	I0916 11:12:00.240179  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:02.738229  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3dab298bfe5b5       c69fa2e9cbf5f       5 seconds ago       Running             coredns                   0                   c9b661400e384       coredns-7c65d6cfc9-dmv6t
	f590d121c5d6d       6e38f40d628db       18 seconds ago      Running             storage-provisioner       0                   985f1b4472131       storage-provisioner
	2dbb170a519e8       12968670680f4       19 seconds ago      Running             kindnet-cni               0                   06e595c1fc81f       kindnet-78kp5
	c182b9d7c07df       60c005f310ff3       19 seconds ago      Running             kube-proxy                0                   d47fcd0c3fa57       kube-proxy-bt6k2
	debbdc082cc9c       6bab7719df100       30 seconds ago      Running             kube-apiserver            0                   9df038a9105dc       kube-apiserver-embed-certs-679624
	7637dc0ee3d4d       9aa1fad941575       30 seconds ago      Running             kube-scheduler            0                   ba28ed2ba4c4a       kube-scheduler-embed-certs-679624
	98ba0135cf4f3       175ffd71cce3d       30 seconds ago      Running             kube-controller-manager   0                   ab668cab99a4f       kube-controller-manager-embed-certs-679624
	e7db7be77ed78       2e96e5913fc06       30 seconds ago      Running             etcd                      0                   c206875f93f94       etcd-embed-certs-679624
	
	
	==> containerd <==
	Sep 16 11:11:46 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:46.520357026Z" level=info msg="StartContainer for \"c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae\" returns successfully"
	Sep 16 11:11:46 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:46.520357096Z" level=info msg="CreateContainer within sandbox \"06e595c1fc81f2c081ceb8d59c372d409faafdfcf3c12800b84909c663b82bf1\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6\""
	Sep 16 11:11:46 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:46.521361892Z" level=info msg="StartContainer for \"2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6\""
	Sep 16 11:11:46 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:46.745064252Z" level=info msg="StartContainer for \"2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6\" returns successfully"
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.486134659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:3b5477b8-ac39-4acc-9e16-a13a7b1d3e10,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.507546887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.507626577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.507642316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.507859627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.563463276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:3b5477b8-ac39-4acc-9e16-a13a7b1d3e10,Namespace:kube-system,Attempt:0,} returns sandbox id \"985f1b4472131607d8a9ef9844a23f17639f03d828fdfa8fe9bcdbdf16a1125e\""
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.566480877Z" level=info msg="CreateContainer within sandbox \"985f1b4472131607d8a9ef9844a23f17639f03d828fdfa8fe9bcdbdf16a1125e\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.580004902Z" level=info msg="CreateContainer within sandbox \"985f1b4472131607d8a9ef9844a23f17639f03d828fdfa8fe9bcdbdf16a1125e\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f590d121c5d6d1937cefe41166b62f9f511f34157bec2b92d38767e94a5b8ba8\""
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.580646352Z" level=info msg="StartContainer for \"f590d121c5d6d1937cefe41166b62f9f511f34157bec2b92d38767e94a5b8ba8\""
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.633530791Z" level=info msg="StartContainer for \"f590d121c5d6d1937cefe41166b62f9f511f34157bec2b92d38767e94a5b8ba8\" returns successfully"
	Sep 16 11:11:51 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:51.239254534Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.836571108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dmv6t,Uid:95a9589e-1385-4fb0-8b68-fb26098daf01,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.877183985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.877991138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.878020603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.878153724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.928098331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dmv6t,Uid:95a9589e-1385-4fb0-8b68-fb26098daf01,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9b661400e3843a52856b80e1ea45e1fd13416f005a9e149a0718e77a406f2e0\""
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.931020108Z" level=info msg="CreateContainer within sandbox \"c9b661400e3843a52856b80e1ea45e1fd13416f005a9e149a0718e77a406f2e0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.946892222Z" level=info msg="CreateContainer within sandbox \"c9b661400e3843a52856b80e1ea45e1fd13416f005a9e149a0718e77a406f2e0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d\""
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.947484287Z" level=info msg="StartContainer for \"3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d\""
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.995349695Z" level=info msg="StartContainer for \"3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d\" returns successfully"
	
	
	==> coredns [3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55078 - 62834 "HINFO IN 5079472268666806265.2239314299196871410. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008456339s
	
	
	==> describe nodes <==
	Name:               embed-certs-679624
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-679624
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=embed-certs-679624
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_11_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:11:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-679624
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:12:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:11:51 +0000   Mon, 16 Sep 2024 11:11:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:11:51 +0000   Mon, 16 Sep 2024 11:11:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:11:51 +0000   Mon, 16 Sep 2024 11:11:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:11:51 +0000   Mon, 16 Sep 2024 11:11:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-679624
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 68cf6edacc48492dad36911d3d7a1ae0
	  System UUID:                cc7366e5-b963-44cb-99a5-daef6ab18709
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-dmv6t                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     20s
	  kube-system                 etcd-embed-certs-679624                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         24s
	  kube-system                 kindnet-78kp5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20s
	  kube-system                 kube-apiserver-embed-certs-679624             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-embed-certs-679624    200m (2%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-proxy-bt6k2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-scheduler-embed-certs-679624             100m (1%)     0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 18s                kube-proxy       
	  Normal   NodeHasSufficientMemory  31s (x8 over 31s)  kubelet          Node embed-certs-679624 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    31s (x7 over 31s)  kubelet          Node embed-certs-679624 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     31s (x7 over 31s)  kubelet          Node embed-certs-679624 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  31s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 25s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 25s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  24s                kubelet          Node embed-certs-679624 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24s                kubelet          Node embed-certs-679624 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     24s                kubelet          Node embed-certs-679624 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           21s                node-controller  Node embed-certs-679624 event: Registered Node embed-certs-679624 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +1.003295] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000012] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003959] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +2.011810] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000005] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +4.063628] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000008] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000030] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000007] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003992] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +8.187268] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000063] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000005] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003939] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	
	
	==> etcd [e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0] <==
	{"level":"info","ts":"2024-09-16T11:11:35.660657Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:11:35.660927Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:11:35.660956Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:11:35.661023Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-09-16T11:11:35.661042Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-09-16T11:11:36.545011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:11:36.545063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:11:36.545117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2024-09-16T11:11:36.545137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.545150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.545165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.545180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.546198Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.546663Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:11:36.546683Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:11:36.546665Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:embed-certs-679624 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:11:36.546933Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:11:36.546964Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:11:36.547066Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.547154Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.547183Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.548000Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:11:36.548092Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:11:36.548901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2024-09-16T11:11:36.549253Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:12:05 up 54 min,  0 users,  load average: 2.59, 3.17, 2.23
	Linux embed-certs-679624 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6] <==
	I0916 11:11:47.021998       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:11:47.023989       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0916 11:11:47.024566       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:11:47.025534       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:11:47.025627       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:11:47.420585       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:11:47.421021       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:11:47.421117       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:11:47.627002       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:11:47.627034       1 metrics.go:61] Registering metrics
	I0916 11:11:47.627087       1 controller.go:374] Syncing nftables rules
	I0916 11:11:57.424285       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:11:57.424361       1 main.go:299] handling current node
	
	
	==> kube-apiserver [debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a] <==
	I0916 11:11:38.032726       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0916 11:11:38.032865       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0916 11:11:38.032486       1 controller.go:615] quota admission added evaluator for: namespaces
	E0916 11:11:38.033283       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0916 11:11:38.033344       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:11:38.033358       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:11:38.033364       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:11:38.033370       1 cache.go:39] Caches are synced for autoregister controller
	I0916 11:11:38.236713       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:11:38.931382       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:11:38.935989       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:11:38.936007       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:11:39.360688       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:11:39.550286       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:11:39.885332       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:11:39.981669       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0916 11:11:39.983057       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:11:39.983086       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:11:39.989809       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:11:40.937234       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:11:40.951405       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:11:40.963172       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:11:45.562872       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 11:11:45.562874       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0916 11:11:45.712697       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32] <==
	I0916 11:11:44.859954       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0916 11:11:44.910809       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0916 11:11:44.911946       1 shared_informer.go:320] Caches are synced for endpoint
	I0916 11:11:44.913160       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:11:44.925855       1 shared_informer.go:320] Caches are synced for disruption
	I0916 11:11:44.931380       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:11:44.937542       1 shared_informer.go:320] Caches are synced for deployment
	I0916 11:11:44.943920       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 11:11:45.325629       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:11:45.408258       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:11:45.408287       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:11:45.828842       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="111.978923ms"
	I0916 11:11:45.842449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="13.539417ms"
	I0916 11:11:45.842559       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.208µs"
	I0916 11:11:45.843676       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="53.216µs"
	I0916 11:11:46.851046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="15.841905ms"
	I0916 11:11:46.858766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.657412ms"
	I0916 11:11:46.859483       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="165.208µs"
	I0916 11:11:47.957358       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.062µs"
	I0916 11:11:47.964349       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="64.093µs"
	I0916 11:11:47.965886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.029µs"
	I0916 11:11:51.248649       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-679624"
	I0916 11:12:00.965845       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="117.386µs"
	I0916 11:12:00.983957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.090341ms"
	I0916 11:12:00.984089       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.88µs"
	
	
	==> kube-proxy [c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae] <==
	I0916 11:11:46.629316       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:11:46.830532       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.85.2"]
	E0916 11:11:46.830628       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:11:46.926994       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:11:46.927247       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:11:46.930151       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:11:46.930796       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:11:46.930829       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:11:46.932160       1 config.go:199] "Starting service config controller"
	I0916 11:11:46.932195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:11:46.932254       1 config.go:328] "Starting node config controller"
	I0916 11:11:46.932264       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:11:46.932283       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:11:46.932300       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:11:47.033501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:11:47.033621       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:11:47.033942       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10] <==
	W0916 11:11:38.120528       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:11:38.120569       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:38.120569       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 11:11:38.120569       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:11:38.120610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0916 11:11:38.120610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:38.120674       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:11:38.120697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:38.918573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:38.918616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.040886       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:11:39.040945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.113732       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:11:39.113779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.119266       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:39.119303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.126330       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:11:39.126368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.133675       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:39.133725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.158407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:11:39.158460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.324525       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:11:39.324580       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:11:41.243501       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:11:45 embed-certs-679624 kubelet[1613]: I0916 11:11:45.853339    1613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn5kr\" (UniqueName: \"kubernetes.io/projected/281fa9a8-3479-46dc-a1df-9dc1d7985344-kube-api-access-mn5kr\") pod \"coredns-7c65d6cfc9-x4f6n\" (UID: \"281fa9a8-3479-46dc-a1df-9dc1d7985344\") " pod="kube-system/coredns-7c65d6cfc9-x4f6n"
	Sep 16 11:11:45 embed-certs-679624 kubelet[1613]: I0916 11:11:45.853489    1613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lvgt\" (UniqueName: \"kubernetes.io/projected/95a9589e-1385-4fb0-8b68-fb26098daf01-kube-api-access-4lvgt\") pod \"coredns-7c65d6cfc9-dmv6t\" (UID: \"95a9589e-1385-4fb0-8b68-fb26098daf01\") " pod="kube-system/coredns-7c65d6cfc9-dmv6t"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.234554    1613 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\": failed to find network info for sandbox \"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\""
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.234645    1613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\": failed to find network info for sandbox \"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\"" pod="kube-system/coredns-7c65d6cfc9-dmv6t"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.234674    1613 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\": failed to find network info for sandbox \"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\"" pod="kube-system/coredns-7c65d6cfc9-dmv6t"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.234728    1613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dmv6t_kube-system(95a9589e-1385-4fb0-8b68-fb26098daf01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dmv6t_kube-system(95a9589e-1385-4fb0-8b68-fb26098daf01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\\\": failed to find network info for sandbox \\\"21be9c12a59f91f508573266a2a1fd40007e7eae8cfe12b3460b142c170c8245\\\"\"" pod="kube-system/coredns-7c65d6cfc9-dmv6t" podUID="95a9589e-1385-4fb0-8b68-fb26098daf01"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.243017    1613 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\": failed to find network info for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\""
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.243111    1613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\": failed to find network info for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\"" pod="kube-system/coredns-7c65d6cfc9-x4f6n"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.243138    1613 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\": failed to find network info for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\"" pod="kube-system/coredns-7c65d6cfc9-x4f6n"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.243192    1613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-x4f6n_kube-system(281fa9a8-3479-46dc-a1df-9dc1d7985344)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-x4f6n_kube-system(281fa9a8-3479-46dc-a1df-9dc1d7985344)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\\\": failed to find network info for sandbox \\\"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\\\"\"" pod="kube-system/coredns-7c65d6cfc9-x4f6n" podUID="281fa9a8-3479-46dc-a1df-9dc1d7985344"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: I0916 11:11:46.936307    1613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bt6k2" podStartSLOduration=1.9362803259999999 podStartE2EDuration="1.936280326s" podCreationTimestamp="2024-09-16 11:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:11:46.93539239 +0000 UTC m=+6.230931077" watchObservedRunningTime="2024-09-16 11:11:46.936280326 +0000 UTC m=+6.231819013"
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.042983    1613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-78kp5" podStartSLOduration=2.042955881 podStartE2EDuration="2.042955881s" podCreationTimestamp="2024-09-16 11:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:11:47.027232044 +0000 UTC m=+6.322770729" watchObservedRunningTime="2024-09-16 11:11:47.042955881 +0000 UTC m=+6.338494569"
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.128660    1613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/281fa9a8-3479-46dc-a1df-9dc1d7985344-config-volume\") pod \"281fa9a8-3479-46dc-a1df-9dc1d7985344\" (UID: \"281fa9a8-3479-46dc-a1df-9dc1d7985344\") "
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.128726    1613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn5kr\" (UniqueName: \"kubernetes.io/projected/281fa9a8-3479-46dc-a1df-9dc1d7985344-kube-api-access-mn5kr\") pod \"281fa9a8-3479-46dc-a1df-9dc1d7985344\" (UID: \"281fa9a8-3479-46dc-a1df-9dc1d7985344\") "
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.129072    1613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/281fa9a8-3479-46dc-a1df-9dc1d7985344-config-volume" (OuterVolumeSpecName: "config-volume") pod "281fa9a8-3479-46dc-a1df-9dc1d7985344" (UID: "281fa9a8-3479-46dc-a1df-9dc1d7985344"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.131020    1613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/281fa9a8-3479-46dc-a1df-9dc1d7985344-kube-api-access-mn5kr" (OuterVolumeSpecName: "kube-api-access-mn5kr") pod "281fa9a8-3479-46dc-a1df-9dc1d7985344" (UID: "281fa9a8-3479-46dc-a1df-9dc1d7985344"). InnerVolumeSpecName "kube-api-access-mn5kr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.229070    1613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhtxr\" (UniqueName: \"kubernetes.io/projected/3b5477b8-ac39-4acc-9e16-a13a7b1d3e10-kube-api-access-rhtxr\") pod \"storage-provisioner\" (UID: \"3b5477b8-ac39-4acc-9e16-a13a7b1d3e10\") " pod="kube-system/storage-provisioner"
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.229155    1613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3b5477b8-ac39-4acc-9e16-a13a7b1d3e10-tmp\") pod \"storage-provisioner\" (UID: \"3b5477b8-ac39-4acc-9e16-a13a7b1d3e10\") " pod="kube-system/storage-provisioner"
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.229198    1613 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/281fa9a8-3479-46dc-a1df-9dc1d7985344-config-volume\") on node \"embed-certs-679624\" DevicePath \"\""
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.229220    1613 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mn5kr\" (UniqueName: \"kubernetes.io/projected/281fa9a8-3479-46dc-a1df-9dc1d7985344-kube-api-access-mn5kr\") on node \"embed-certs-679624\" DevicePath \"\""
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.947516    1613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=0.947491757 podStartE2EDuration="947.491757ms" podCreationTimestamp="2024-09-16 11:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:11:47.947191961 +0000 UTC m=+7.242730646" watchObservedRunningTime="2024-09-16 11:11:47.947491757 +0000 UTC m=+7.243030463"
	Sep 16 11:11:48 embed-certs-679624 kubelet[1613]: I0916 11:11:48.838386    1613 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="281fa9a8-3479-46dc-a1df-9dc1d7985344" path="/var/lib/kubelet/pods/281fa9a8-3479-46dc-a1df-9dc1d7985344/volumes"
	Sep 16 11:11:51 embed-certs-679624 kubelet[1613]: I0916 11:11:51.238671    1613 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 11:11:51 embed-certs-679624 kubelet[1613]: I0916 11:11:51.239550    1613 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 11:12:00 embed-certs-679624 kubelet[1613]: I0916 11:12:00.977086    1613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dmv6t" podStartSLOduration=15.977061402 podStartE2EDuration="15.977061402s" podCreationTimestamp="2024-09-16 11:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:12:00.966248932 +0000 UTC m=+20.261787617" watchObservedRunningTime="2024-09-16 11:12:00.977061402 +0000 UTC m=+20.272600088"
	
	
	==> storage-provisioner [f590d121c5d6d1937cefe41166b62f9f511f34157bec2b92d38767e94a5b8ba8] <==
	I0916 11:11:47.640871       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:11:47.650046       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:11:47.650086       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:11:47.659227       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:11:47.659353       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af47b140-7661-4805-8791-5af1e81aebf7", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-679624_78a7dd88-7eba-4513-b3b6-513ea370e3ab became leader
	I0916 11:11:47.659420       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-679624_78a7dd88-7eba-4513-b3b6-513ea370e3ab!
	I0916 11:11:47.760481       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-679624_78a7dd88-7eba-4513-b3b6-513ea370e3ab!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-679624 -n embed-certs-679624
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-679624 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context embed-certs-679624 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (515.367µs)
helpers_test.go:263: kubectl --context embed-certs-679624 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (3.77s)
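Diagnostic note: the recurring "fork/exec /usr/local/bin/kubectl: exec format error" means the kernel refused to execute the kubectl binary on the test host, which points at a binary built for a different architecture (or a truncated download) rather than a cluster problem. A minimal sketch of how this could be checked on the host, assuming a Linux amd64 runner (the download URL and use of v1.31.1 below are illustrative assumptions; v1.31.1 simply matches the Kubernetes version under test):

	# Expect "ELF 64-bit LSB executable, x86-64" on this amd64 host; any other
	# architecture, or "data"/an empty file, would explain the exec format error.
	file /usr/local/bin/kubectl
	uname -m

	# Replace the binary with one matching the host architecture (assumed fix).
	curl -LO "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl"
	sudo install -m 0755 kubectl /usr/local/bin/kubectl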

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-679624 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-679624 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-679624 describe deploy/metrics-server -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (533.585µs)
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-679624 describe deploy/metrics-server -n kube-system": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
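The empty "Addon deployment info" above is a knock-on effect of the same kubectl exec format error, not evidence that the --registries override was dropped. With a working kubectl, a minimal sketch of the intended check (the jsonpath query is an assumption; the test itself inspects `kubectl describe` output):

	# Print the image of the metrics-server deployment; the test expects it
	# to contain "fake.domain/registry.k8s.io/echoserver:1.4".
	kubectl --context embed-certs-679624 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'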
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-679624
helpers_test.go:235: (dbg) docker inspect embed-certs-679624:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01",
	        "Created": "2024-09-16T11:11:24.339291508Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 289930,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:11:24.472248835Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/hostname",
	        "HostsPath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/hosts",
	        "LogPath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01-json.log",
	        "Name": "/embed-certs-679624",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-679624:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-679624",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-679624",
	                "Source": "/var/lib/docker/volumes/embed-certs-679624/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-679624",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-679624",
	                "name.minikube.sigs.k8s.io": "embed-certs-679624",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ff60825c25c0c32e46c9786671ffef996b2342a731555808d9dc885e9b8cac8e",
	            "SandboxKey": "/var/run/docker/netns/ff60825c25c0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-679624": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "5c8d67185b352feb5e2b0195e3f409fe6cf79bd750730cb6897291fef1a3c3d7",
	                    "EndpointID": "dddf70084024b7c890e66e96d6c39e3f3c7ed4ae631ca39642acb6c9b79a1c44",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-679624",
	                        "8a143ceb3281"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-679624 -n embed-certs-679624
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-679624 logs -n 25
E0916 11:12:08.256893   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-679624 logs -n 25: (1.109215426s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-917705                           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:07 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048 --force-systemd                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-846070                               | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-846070                            | force-systemd-env-846070  | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| stop    | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p cert-options-840054                                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-917705                              | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-917705                           | force-systemd-flag-917705 | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:10 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| ssh     | cert-options-840054 ssh                                | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-840054 -- sudo                         | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-840054                                 | cert-options-840054       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:09 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-349453             | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-349453                  | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-349453                                   | no-preload-349453         | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-371039        | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-371039                              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-371039             | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039    | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-021107                              | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-021107                              | cert-expiration-021107    | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	| start   | -p embed-certs-679624                                  | embed-certs-679624        | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:12 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-679624            | embed-certs-679624        | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:11:18
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:11:18.856155  288908 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:11:18.856262  288908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:11:18.856269  288908 out.go:358] Setting ErrFile to fd 2...
	I0916 11:11:18.856274  288908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:11:18.856461  288908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:11:18.857036  288908 out.go:352] Setting JSON to false
	I0916 11:11:18.858346  288908 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3223,"bootTime":1726481856,"procs":348,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:11:18.858451  288908 start.go:139] virtualization: kvm guest
	I0916 11:11:18.860470  288908 out.go:177] * [embed-certs-679624] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:11:18.862286  288908 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:11:18.862325  288908 notify.go:220] Checking for updates...
	I0916 11:11:18.864825  288908 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:11:18.865999  288908 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:11:18.867166  288908 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:11:18.868600  288908 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:11:18.870074  288908 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:11:18.871834  288908 config.go:182] Loaded profile config "kubernetes-upgrade-311911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:11:18.871944  288908 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:11:18.872024  288908 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:11:18.872127  288908 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:11:18.894405  288908 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:11:18.894515  288908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:11:18.948949  288908 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:11:18.937344705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:11:18.949132  288908 docker.go:318] overlay module found
	I0916 11:11:18.950939  288908 out.go:177] * Using the docker driver based on user configuration
	I0916 11:11:18.952281  288908 start.go:297] selected driver: docker
	I0916 11:11:18.952313  288908 start.go:901] validating driver "docker" against <nil>
	I0916 11:11:18.952331  288908 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:11:18.953507  288908 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:11:19.001625  288908 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:11:18.99185584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:11:19.001804  288908 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:11:19.002056  288908 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:11:19.003908  288908 out.go:177] * Using Docker driver with root privileges
	I0916 11:11:19.005402  288908 cni.go:84] Creating CNI manager for ""
	I0916 11:11:19.005465  288908 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:11:19.005479  288908 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:11:19.005564  288908 start.go:340] cluster config:
	{Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:11:19.007384  288908 out.go:177] * Starting "embed-certs-679624" primary control-plane node in "embed-certs-679624" cluster
	I0916 11:11:19.009150  288908 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:11:19.010840  288908 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:11:19.012215  288908 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:11:19.012278  288908 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 11:11:19.012297  288908 cache.go:56] Caching tarball of preloaded images
	I0916 11:11:19.012311  288908 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:11:19.012483  288908 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 11:11:19.012514  288908 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 11:11:19.012637  288908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/config.json ...
	I0916 11:11:19.012667  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/config.json: {Name:mk779755db7fc6d270e9404ca4b6e4963d78e149 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:11:19.033306  288908 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:11:19.033331  288908 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:11:19.033415  288908 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:11:19.033429  288908 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:11:19.033435  288908 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:11:19.033442  288908 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:11:19.033458  288908 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:11:19.086983  288908 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:11:19.087029  288908 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:11:19.087070  288908 start.go:360] acquireMachinesLock for embed-certs-679624: {Name:mk5c5a1695ab7bba9827e17eb437dd80adf4e091 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:11:19.087184  288908 start.go:364] duration metric: took 93.132µs to acquireMachinesLock for "embed-certs-679624"
	I0916 11:11:19.087215  288908 start.go:93] Provisioning new machine with config: &{Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:11:19.087341  288908 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:11:17.757111  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:20.258429  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:17.707064  254463 logs.go:123] Gathering logs for kube-apiserver [5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7] ...
	I0916 11:11:17.707097  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d9b3fbef8217a25fe641923b826cd8a9d18ee3c339b395c2d8ef55c122651e7"
	I0916 11:11:17.745431  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:17.745460  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:17.807745  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:17.807796  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:17.841462  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:17.841493  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:17.927928  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:17.927966  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:17.951261  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:17.951305  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:18.013608  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:18.013640  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:18.013660  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:20.558195  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:20.558623  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:20.558677  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:20.558734  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:20.595321  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:20.595346  254463 cri.go:89] found id: ""
	I0916 11:11:20.595355  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:20.595413  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:20.599420  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:20.599497  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:20.641184  254463 cri.go:89] found id: ""
	I0916 11:11:20.641211  254463 logs.go:276] 0 containers: []
	W0916 11:11:20.641223  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:20.641232  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:20.641292  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:20.682399  254463 cri.go:89] found id: ""
	I0916 11:11:20.682431  254463 logs.go:276] 0 containers: []
	W0916 11:11:20.682443  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:20.682451  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:20.682516  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:20.721644  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:20.721669  254463 cri.go:89] found id: ""
	I0916 11:11:20.721678  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:20.721731  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:20.725651  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:20.725724  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:20.767294  254463 cri.go:89] found id: ""
	I0916 11:11:20.767321  254463 logs.go:276] 0 containers: []
	W0916 11:11:20.767329  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:20.767335  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:20.767382  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:20.801830  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:20.801855  254463 cri.go:89] found id: ""
	I0916 11:11:20.801865  254463 logs.go:276] 1 containers: [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:20.801922  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:20.805407  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:20.805482  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:20.840869  254463 cri.go:89] found id: ""
	I0916 11:11:20.840900  254463 logs.go:276] 0 containers: []
	W0916 11:11:20.840912  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:20.840919  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:20.840979  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:20.878195  254463 cri.go:89] found id: ""
	I0916 11:11:20.878221  254463 logs.go:276] 0 containers: []
	W0916 11:11:20.878229  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:20.878237  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:20.878248  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:20.925361  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:20.925388  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:21.019564  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:21.019600  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:21.048676  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:21.048723  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:21.112999  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:21.113033  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:21.113051  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:21.154086  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:21.154114  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:21.235856  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:21.235897  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:21.278612  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:21.278650  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:19.238965  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:21.239025  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:23.738819  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:19.090071  288908 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:11:19.090308  288908 start.go:159] libmachine.API.Create for "embed-certs-679624" (driver="docker")
	I0916 11:11:19.090338  288908 client.go:168] LocalClient.Create starting
	I0916 11:11:19.090401  288908 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 11:11:19.090431  288908 main.go:141] libmachine: Decoding PEM data...
	I0916 11:11:19.090448  288908 main.go:141] libmachine: Parsing certificate...
	I0916 11:11:19.090505  288908 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 11:11:19.090523  288908 main.go:141] libmachine: Decoding PEM data...
	I0916 11:11:19.090534  288908 main.go:141] libmachine: Parsing certificate...
	I0916 11:11:19.090850  288908 cli_runner.go:164] Run: docker network inspect embed-certs-679624 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:11:19.107706  288908 cli_runner.go:211] docker network inspect embed-certs-679624 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:11:19.107836  288908 network_create.go:284] running [docker network inspect embed-certs-679624] to gather additional debugging logs...
	I0916 11:11:19.107862  288908 cli_runner.go:164] Run: docker network inspect embed-certs-679624
	W0916 11:11:19.124412  288908 cli_runner.go:211] docker network inspect embed-certs-679624 returned with exit code 1
	I0916 11:11:19.124439  288908 network_create.go:287] error running [docker network inspect embed-certs-679624]: docker network inspect embed-certs-679624: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-679624 not found
	I0916 11:11:19.124466  288908 network_create.go:289] output of [docker network inspect embed-certs-679624]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-679624 not found
	
	** /stderr **
	I0916 11:11:19.124580  288908 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:11:19.142536  288908 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c95c64bb41bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bd:76:ab:c4} reservation:<nil>}
	I0916 11:11:19.143504  288908 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fad43aa9929b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:3f:61:fd} reservation:<nil>}
	I0916 11:11:19.144458  288908 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-49585fce923a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ad:45:94:54} reservation:<nil>}
	I0916 11:11:19.145163  288908 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-45dc384def28 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:95:3e:48:c3} reservation:<nil>}
	I0916 11:11:19.146136  288908 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cbec20}
	I0916 11:11:19.146158  288908 network_create.go:124] attempt to create docker network embed-certs-679624 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0916 11:11:19.146211  288908 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-679624 embed-certs-679624
	I0916 11:11:19.210275  288908 network_create.go:108] docker network embed-certs-679624 192.168.85.0/24 created
	I0916 11:11:19.210306  288908 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-679624" container
	I0916 11:11:19.210356  288908 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:11:19.227600  288908 cli_runner.go:164] Run: docker volume create embed-certs-679624 --label name.minikube.sigs.k8s.io=embed-certs-679624 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:11:19.245579  288908 oci.go:103] Successfully created a docker volume embed-certs-679624
	I0916 11:11:19.245640  288908 cli_runner.go:164] Run: docker run --rm --name embed-certs-679624-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-679624 --entrypoint /usr/bin/test -v embed-certs-679624:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:11:19.757598  288908 oci.go:107] Successfully prepared a docker volume embed-certs-679624
	I0916 11:11:19.757638  288908 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:11:19.757655  288908 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:11:19.757735  288908 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-679624:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 11:11:22.757918  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:24.758241  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:23.825300  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:23.825689  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:23.825738  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:23.825786  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:23.859216  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:23.859235  254463 cri.go:89] found id: ""
	I0916 11:11:23.859242  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:23.859286  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:23.862764  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:23.862821  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:23.895042  254463 cri.go:89] found id: ""
	I0916 11:11:23.895069  254463 logs.go:276] 0 containers: []
	W0916 11:11:23.895078  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:23.895084  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:23.895139  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:23.926804  254463 cri.go:89] found id: ""
	I0916 11:11:23.926829  254463 logs.go:276] 0 containers: []
	W0916 11:11:23.926842  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:23.926850  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:23.926897  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:23.961138  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:23.961159  254463 cri.go:89] found id: ""
	I0916 11:11:23.961166  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:23.961218  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:23.964777  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:23.964842  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:24.007913  254463 cri.go:89] found id: ""
	I0916 11:11:24.007939  254463 logs.go:276] 0 containers: []
	W0916 11:11:24.007951  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:24.007959  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:24.008029  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:24.049372  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:24.049444  254463 cri.go:89] found id: ""
	I0916 11:11:24.049460  254463 logs.go:276] 1 containers: [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:24.049523  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:24.054045  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:24.054127  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:24.093835  254463 cri.go:89] found id: ""
	I0916 11:11:24.093864  254463 logs.go:276] 0 containers: []
	W0916 11:11:24.093875  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:24.093883  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:24.093939  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:24.129861  254463 cri.go:89] found id: ""
	I0916 11:11:24.129888  254463 logs.go:276] 0 containers: []
	W0916 11:11:24.129896  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:24.129904  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:24.129916  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:24.179039  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:24.179086  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:24.218126  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:24.218159  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:24.318420  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:24.318456  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:24.349622  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:24.349663  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:24.429380  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:24.429415  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:24.429433  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:24.468570  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:24.468615  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:24.557739  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:24.557776  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:27.098528  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:27.098979  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:27.099032  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:27.099086  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:27.135416  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:27.135437  254463 cri.go:89] found id: ""
	I0916 11:11:27.135444  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:27.135489  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:27.138909  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:27.138973  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:27.177050  254463 cri.go:89] found id: ""
	I0916 11:11:27.177080  254463 logs.go:276] 0 containers: []
	W0916 11:11:27.177091  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:27.177099  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:27.177160  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:27.212036  254463 cri.go:89] found id: ""
	I0916 11:11:27.212061  254463 logs.go:276] 0 containers: []
	W0916 11:11:27.212073  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:27.212081  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:27.212136  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:27.251569  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:27.251590  254463 cri.go:89] found id: ""
	I0916 11:11:27.251598  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:27.251651  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:27.258394  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:27.258463  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:27.296919  254463 cri.go:89] found id: ""
	I0916 11:11:27.296950  254463 logs.go:276] 0 containers: []
	W0916 11:11:27.296960  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:27.296965  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:27.297023  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:27.335315  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:27.335334  254463 cri.go:89] found id: ""
	I0916 11:11:27.335342  254463 logs.go:276] 1 containers: [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:27.335384  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:27.338919  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:27.338984  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:27.375852  254463 cri.go:89] found id: ""
	I0916 11:11:27.375877  254463 logs.go:276] 0 containers: []
	W0916 11:11:27.375890  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:27.375905  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:27.375963  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:27.413862  254463 cri.go:89] found id: ""
	I0916 11:11:27.413883  254463 logs.go:276] 0 containers: []
	W0916 11:11:27.413891  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:27.413899  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:27.413909  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:27.526092  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:27.526127  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:27.550647  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:27.550682  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:27.620133  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:27.620156  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:27.620170  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:27.665894  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:27.665929  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:25.739512  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:28.239069  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:24.264807  288908 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-679624:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.506989871s)
	I0916 11:11:24.264850  288908 kic.go:203] duration metric: took 4.507189916s to extract preloaded images to volume ...
	W0916 11:11:24.265015  288908 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:11:24.265175  288908 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:11:24.316681  288908 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-679624 --name embed-certs-679624 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-679624 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-679624 --network embed-certs-679624 --ip 192.168.85.2 --volume embed-certs-679624:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:11:24.669712  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Running}}
	I0916 11:11:24.689977  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:24.710159  288908 cli_runner.go:164] Run: docker exec embed-certs-679624 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:11:24.751713  288908 oci.go:144] the created container "embed-certs-679624" has a running status.
	I0916 11:11:24.751782  288908 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa...
	I0916 11:11:24.870719  288908 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:11:24.897688  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:24.915975  288908 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:11:24.915999  288908 kic_runner.go:114] Args: [docker exec --privileged embed-certs-679624 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:11:24.973386  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:24.992710  288908 machine.go:93] provisionDockerMachine start ...
	I0916 11:11:24.992788  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:25.013373  288908 main.go:141] libmachine: Using SSH client type: native
	I0916 11:11:25.013666  288908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0916 11:11:25.013688  288908 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:11:25.014308  288908 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45610->127.0.0.1:33078: read: connection reset by peer
	I0916 11:11:28.148063  288908 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-679624
	
	I0916 11:11:28.148089  288908 ubuntu.go:169] provisioning hostname "embed-certs-679624"
	I0916 11:11:28.148161  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:28.169027  288908 main.go:141] libmachine: Using SSH client type: native
	I0916 11:11:28.169265  288908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0916 11:11:28.169282  288908 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-679624 && echo "embed-certs-679624" | sudo tee /etc/hostname
	I0916 11:11:28.355513  288908 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-679624
	
	I0916 11:11:28.355629  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:28.374039  288908 main.go:141] libmachine: Using SSH client type: native
	I0916 11:11:28.374264  288908 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I0916 11:11:28.374294  288908 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-679624' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-679624/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-679624' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:11:28.508073  288908 main.go:141] libmachine: SSH cmd err, output: <nil>: 
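	(The embedded script above makes /etc/hosts map 127.0.1.1 to the new hostname, replacing an existing 127.0.1.1 entry rather than appending a duplicate; the empty SSH output above means it succeeded. A quick manual check, hostname taken from the log:
	grep '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 embed-certs-679624
	hostname                       # expect: embed-certs-679624
	)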
	I0916 11:11:28.508100  288908 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:11:28.508138  288908 ubuntu.go:177] setting up certificates
	I0916 11:11:28.508156  288908 provision.go:84] configureAuth start
	I0916 11:11:28.508223  288908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-679624
	I0916 11:11:28.529363  288908 provision.go:143] copyHostCerts
	I0916 11:11:28.529425  288908 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:11:28.529444  288908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:11:28.529506  288908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:11:28.529605  288908 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:11:28.529616  288908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:11:28.529646  288908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:11:28.529753  288908 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:11:28.529767  288908 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:11:28.529800  288908 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:11:28.529884  288908 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.embed-certs-679624 san=[127.0.0.1 192.168.85.2 embed-certs-679624 localhost minikube]
	I0916 11:11:28.660139  288908 provision.go:177] copyRemoteCerts
	I0916 11:11:28.660207  288908 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:11:28.660257  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:28.686030  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:28.781031  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:11:28.805291  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0916 11:11:28.828019  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:11:28.852211  288908 provision.go:87] duration metric: took 344.043242ms to configureAuth
	I0916 11:11:28.852237  288908 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:11:28.852389  288908 config.go:182] Loaded profile config "embed-certs-679624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:11:28.852399  288908 machine.go:96] duration metric: took 3.859669611s to provisionDockerMachine
	I0916 11:11:28.852422  288908 client.go:171] duration metric: took 9.762061004s to LocalClient.Create
	I0916 11:11:28.852442  288908 start.go:167] duration metric: took 9.762135091s to libmachine.API.Create "embed-certs-679624"
	I0916 11:11:28.852450  288908 start.go:293] postStartSetup for "embed-certs-679624" (driver="docker")
	I0916 11:11:28.852458  288908 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:11:28.852498  288908 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:11:28.852531  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:28.870309  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:28.965110  288908 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:11:28.968523  288908 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:11:28.968563  288908 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:11:28.968575  288908 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:11:28.968583  288908 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:11:28.968596  288908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:11:28.968713  288908 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:11:28.968785  288908 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:11:28.968871  288908 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:11:28.977835  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:11:29.001876  288908 start.go:296] duration metric: took 149.414216ms for postStartSetup
	I0916 11:11:29.002250  288908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-679624
	I0916 11:11:29.019869  288908 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/config.json ...
	I0916 11:11:29.020153  288908 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:11:29.020205  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:29.038049  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:29.128967  288908 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:11:29.133547  288908 start.go:128] duration metric: took 10.046188671s to createHost
	I0916 11:11:29.133576  288908 start.go:83] releasing machines lock for "embed-certs-679624", held for 10.046377271s
	I0916 11:11:29.133643  288908 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-679624
	I0916 11:11:29.152662  288908 ssh_runner.go:195] Run: cat /version.json
	I0916 11:11:29.152692  288908 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:11:29.152722  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:29.152762  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:29.171183  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:29.171187  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:29.263485  288908 ssh_runner.go:195] Run: systemctl --version
	I0916 11:11:29.342939  288908 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:11:29.347342  288908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:11:29.371959  288908 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:11:29.372033  288908 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:11:29.398988  288908 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
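	(The two find passes above first patch any loopback CNI config — injecting a "name" field and pinning cniVersion to 1.0.0 — then sideline bridge and podman configs by renaming them with a .mk_disabled suffix, as the "disabled [...] bridge cni config(s)" line reports. A quick confirmation sketch:
	ls /etc/cni/net.d/                                   # disabled configs now end in .mk_disabled
	grep -E '"name"|"cniVersion"' /etc/cni/net.d/*loopback.conf*
	)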
	I0916 11:11:29.399013  288908 start.go:495] detecting cgroup driver to use...
	I0916 11:11:29.399046  288908 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:11:29.399095  288908 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:11:29.410609  288908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:11:29.422113  288908 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:11:29.422178  288908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:11:29.436056  288908 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:11:29.449916  288908 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:11:29.528110  288908 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:11:29.607390  288908 docker.go:233] disabling docker service ...
	I0916 11:11:29.607457  288908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:11:29.625383  288908 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:11:29.637734  288908 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:11:29.715467  288908 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:11:29.796841  288908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
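	(Stopping, disabling, and masking both docker.socket and docker.service — after doing the same to the cri-docker pair — leaves containerd as the only runtime the kubelet can reach; the final is-active probe above confirms docker is down. Assuming a systemd host, the same state can be checked by hand:
	systemctl is-active docker.service     # expect: inactive
	systemctl is-enabled docker.socket     # expect: masked
	)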
	I0916 11:11:29.807894  288908 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:11:29.824334  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:11:29.834092  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:11:29.845179  288908 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:11:29.845243  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:11:29.854840  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:11:29.864202  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:11:29.873608  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:11:29.883253  288908 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:11:29.892391  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:11:29.901723  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:11:29.910902  288908 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 11:11:29.920511  288908 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:11:29.928496  288908 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:11:29.937029  288908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:11:30.021638  288908 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 11:11:30.130291  288908 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:11:30.130362  288908 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:11:30.134196  288908 start.go:563] Will wait 60s for crictl version
	I0916 11:11:30.134260  288908 ssh_runner.go:195] Run: which crictl
	I0916 11:11:30.137609  288908 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:11:30.170590  288908 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:11:30.170645  288908 ssh_runner.go:195] Run: containerd --version
	I0916 11:11:30.192976  288908 ssh_runner.go:195] Run: containerd --version
	I0916 11:11:30.217368  288908 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
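	(The sed passes at 11:11:29.8xx rewrote /etc/containerd/config.toml in place: the sandbox image pinned to registry.k8s.io/pause:3.10, SystemdCgroup forced to false to match the detected cgroupfs driver, legacy io.containerd.runc.v1/io.containerd.runtime.v1.linux names mapped to io.containerd.runc.v2, and enable_unprivileged_ports injected under the CRI plugin. A spot-check after the restart, as a sketch:
	grep -E 'sandbox_image|SystemdCgroup|enable_unprivileged_ports' /etc/containerd/config.toml
	sudo crictl version    # should report containerd 1.7.22, as the log shows
	)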
	I0916 11:11:27.257831  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:29.759232  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:30.218805  288908 cli_runner.go:164] Run: docker network inspect embed-certs-679624 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:11:30.236609  288908 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0916 11:11:30.240710  288908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:11:30.251608  288908 kubeadm.go:883] updating cluster {Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:11:30.251732  288908 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:11:30.251856  288908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:11:30.289360  288908 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:11:30.289390  288908 containerd.go:534] Images already preloaded, skipping extraction
	I0916 11:11:30.289443  288908 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:11:30.322306  288908 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:11:30.322325  288908 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:11:30.322332  288908 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I0916 11:11:30.322410  288908 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-679624 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
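	(This drop-in empties the packaged ExecStart and substitutes the versioned kubelet binary with the node-specific flags shown; it lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf via the scp a few lines below. Once installed, the merged unit can be inspected — a sketch on the node:
	systemctl cat kubelet                              # base unit plus the 10-kubeadm.conf drop-in
	sudo systemctl daemon-reload && sudo systemctl start kubelet
	)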
	I0916 11:11:30.322458  288908 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:11:30.357287  288908 cni.go:84] Creating CNI manager for ""
	I0916 11:11:30.357313  288908 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:11:30.357328  288908 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:11:30.357356  288908 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-679624 NodeName:embed-certs-679624 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:11:30.357533  288908 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-679624"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:11:30.357614  288908 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:11:30.366434  288908 binaries.go:44] Found k8s binaries, skipping transfer
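	(The four stacked documents printed above — InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration — are what gets written to /var/tmp/minikube/kubeadm.yaml.new just below. To sanity-check such a file by hand, kubeadm (v1.26 and later) ships a validator; a hedged sketch against the path used here:
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
	)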
	I0916 11:11:30.366500  288908 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:11:30.375187  288908 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0916 11:11:30.392300  288908 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:11:30.410224  288908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0916 11:11:30.430159  288908 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:11:30.433926  288908 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:11:30.444984  288908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:11:30.528873  288908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:11:30.543894  288908 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624 for IP: 192.168.85.2
	I0916 11:11:30.543916  288908 certs.go:194] generating shared ca certs ...
	I0916 11:11:30.543936  288908 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:30.544125  288908 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:11:30.544187  288908 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:11:30.544201  288908 certs.go:256] generating profile certs ...
	I0916 11:11:30.544273  288908 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.key
	I0916 11:11:30.544301  288908 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.crt with IP's: []
	I0916 11:11:30.788131  288908 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.crt ...
	I0916 11:11:30.788166  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.crt: {Name:mk02095d3afb4fad8c6d28e1f88b13ba36a9f6cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:30.788368  288908 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.key ...
	I0916 11:11:30.788382  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/client.key: {Name:mk6908273136c2132f294f84c2cf9245d566117f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:30.788485  288908 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key.67e55e90
	I0916 11:11:30.788507  288908 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt.67e55e90 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0916 11:11:30.999277  288908 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt.67e55e90 ...
	I0916 11:11:30.999316  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt.67e55e90: {Name:mk955ebd562252fd3d65acb6c2e198ab5e903fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:30.999516  288908 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key.67e55e90 ...
	I0916 11:11:30.999535  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key.67e55e90: {Name:mkc82f26c1c509a023699ea12765ff496bced47f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:30.999625  288908 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt.67e55e90 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt
	I0916 11:11:30.999750  288908 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key.67e55e90 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key
	I0916 11:11:30.999843  288908 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.key
	I0916 11:11:30.999865  288908 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.crt with IP's: []
	I0916 11:11:31.288838  288908 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.crt ...
	I0916 11:11:31.288945  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.crt: {Name:mk8bd14445a9da8b563b4c4456dcb6ef5aa0023c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:31.289235  288908 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.key ...
	I0916 11:11:31.289294  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.key: {Name:mk97c2379e3649b3d274265134c4b6a81c84d628 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:31.289625  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:11:31.289722  288908 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:11:31.289752  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:11:31.289809  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:11:31.289858  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:11:31.289915  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:11:31.289997  288908 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:11:31.290950  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:11:31.317053  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:11:31.344299  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:11:31.373008  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:11:31.399445  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0916 11:11:31.425552  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:11:31.452299  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:11:31.480024  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/embed-certs-679624/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:11:31.507034  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:11:31.533755  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:11:31.560944  288908 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:11:31.588146  288908 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
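	(With the profile certs generated and copied into /var/lib/minikube/certs, their contents can be inspected with openssl; the apiserver cert should carry the SANs requested at generation time — 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2 per the log. A sketch, run on the node:
	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'
	)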
	I0916 11:11:31.607340  288908 ssh_runner.go:195] Run: openssl version
	I0916 11:11:31.613749  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:11:31.623827  288908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:11:31.628105  288908 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:11:31.628170  288908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:11:31.636053  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:11:31.646541  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:11:31.657059  288908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:11:31.661092  288908 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:11:31.661152  288908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:11:31.668468  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:11:31.678986  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:11:31.688721  288908 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:11:31.692740  288908 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:11:31.692806  288908 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:11:31.700158  288908 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
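	(The `openssl x509 -hash` calls explain the link names above: OpenSSL looks up CA certs by subject hash, so each PEM in /usr/share/ca-certificates gets an /etc/ssl/certs/<hash>.0 symlink — b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the two test certs. The mapping can be reproduced directly:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem
	)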
	I0916 11:11:31.710466  288908 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:11:31.714043  288908 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:11:31.714124  288908 kubeadm.go:392] StartCluster: {Name:embed-certs-679624 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-679624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:11:31.714222  288908 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:11:31.714261  288908 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:11:31.756398  288908 cri.go:89] found id: ""
	I0916 11:11:31.756465  288908 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:11:31.766605  288908 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:11:31.777090  288908 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:11:31.777143  288908 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:11:31.787168  288908 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:11:31.787188  288908 kubeadm.go:157] found existing configuration files:
	
	I0916 11:11:31.787251  288908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:11:31.796664  288908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:11:31.796730  288908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:11:31.806726  288908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:11:31.816111  288908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:11:31.816165  288908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:11:31.825102  288908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:11:31.834700  288908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:11:31.834757  288908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:11:31.845052  288908 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:11:31.854270  288908 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:11:31.854344  288908 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
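	(Each of the four kubeconfigs is grepped for the expected control-plane endpoint and removed when the check fails, so kubeadm init regenerates them from scratch; on this fresh node every grep exits 2 (file missing) and the rm calls are no-ops. The logged sequence condenses to this sketch:
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' \
	    /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	done
	)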
	I0916 11:11:31.864084  288908 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:11:31.911207  288908 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:11:31.911280  288908 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:11:31.929566  288908 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:11:31.929629  288908 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:11:31.929721  288908 kubeadm.go:310] OS: Linux
	I0916 11:11:31.929795  288908 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:11:31.929868  288908 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:11:31.929930  288908 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:11:31.929999  288908 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:11:31.930043  288908 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:11:31.930089  288908 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:11:31.930127  288908 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:11:31.930168  288908 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:11:31.930207  288908 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:11:32.003661  288908 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:11:32.003913  288908 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:11:32.004027  288908 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:11:32.009787  288908 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:11:27.745904  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:27.745938  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:27.786487  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:27.786512  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:27.843816  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:27.843853  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:30.387079  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:30.387476  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:30.387543  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:30.387611  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:30.423116  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:30.423146  254463 cri.go:89] found id: ""
	I0916 11:11:30.423157  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:30.423209  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:30.427346  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:30.427415  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:30.464033  254463 cri.go:89] found id: ""
	I0916 11:11:30.464064  254463 logs.go:276] 0 containers: []
	W0916 11:11:30.464076  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:30.464084  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:30.464149  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:30.506628  254463 cri.go:89] found id: ""
	I0916 11:11:30.506660  254463 logs.go:276] 0 containers: []
	W0916 11:11:30.506673  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:30.506682  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:30.506741  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:30.541832  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:30.541860  254463 cri.go:89] found id: ""
	I0916 11:11:30.541874  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:30.541932  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:30.546020  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:30.546090  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:30.586076  254463 cri.go:89] found id: ""
	I0916 11:11:30.586101  254463 logs.go:276] 0 containers: []
	W0916 11:11:30.586111  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:30.586118  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:30.586175  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:30.627319  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:30.627343  254463 cri.go:89] found id: ""
	I0916 11:11:30.627352  254463 logs.go:276] 1 containers: [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:30.627404  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:30.630804  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:30.630871  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:30.672322  254463 cri.go:89] found id: ""
	I0916 11:11:30.672349  254463 logs.go:276] 0 containers: []
	W0916 11:11:30.672360  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:30.672368  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:30.672427  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:30.711423  254463 cri.go:89] found id: ""
	I0916 11:11:30.711445  254463 logs.go:276] 0 containers: []
	W0916 11:11:30.711453  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:30.711461  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:30.711473  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:30.787457  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:30.787499  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:30.825566  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:30.825596  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:30.873424  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:30.873458  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:30.912596  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:30.912622  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:31.041509  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:31.041554  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:31.069628  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:31.069671  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:31.147283  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:31.147317  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:31.147333  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
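
The cycle above repeats one pattern: for each control-plane component, minikube lists matching containers with "sudo crictl ps -a --quiet --name=<component>" and treats an empty result as "0 containers". A minimal Go sketch of that invocation follows; the helper name and error handling are illustrative, not minikube's actual code, and running it requires a host with a CRI runtime.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the crictl call repeated in the log above:
// --quiet prints one container ID per line, so an empty output means
// "0 containers" for that component. Sketch only.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listContainers("kube-apiserver")
	fmt.Printf("%d containers: %v (err=%v)\n", len(ids), ids, err)
}
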
	I0916 11:11:30.239104  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:32.739847  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
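
The pod_ready.go lines above poll a pod's Ready condition until it flips to True or the wait times out. A rough client-go sketch of one such check, using the pod name and kubeconfig path that appear in the log; the function name and wiring are assumptions for illustration, not minikube's implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True,
// the condition the "has status \"Ready\":\"False\"" lines are polling.
func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := podReady(cs, "kube-system", "metrics-server-6867b74b74-zw8sx")
	fmt.Println(ok, err)
}
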
	I0916 11:11:32.012718  288908 out.go:235]   - Generating certificates and keys ...
	I0916 11:11:32.012811  288908 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:11:32.012866  288908 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:11:32.274323  288908 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:11:32.645738  288908 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:11:32.802923  288908 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:11:32.869257  288908 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:11:33.074216  288908 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:11:33.074453  288908 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-679624 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0916 11:11:33.198709  288908 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:11:33.198917  288908 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-679624 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0916 11:11:33.288526  288908 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:11:33.371633  288908 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:11:33.467662  288908 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:11:33.467854  288908 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:11:33.610889  288908 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:11:33.928327  288908 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:11:34.209629  288908 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:11:34.318731  288908 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:11:34.497638  288908 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:11:34.498358  288908 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:11:34.501042  288908 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
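
The [certs] lines above report each serving certificate being issued with SANs covering the node name, localhost, and the node plus loopback IPs. A self-signed crypto/x509 sketch of a certificate with exactly the SANs logged for etcd/server; kubeadm actually signs with the etcd CA, so this is an illustration of the SAN layout only.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "etcd-server"},
		// SANs as logged: [embed-certs-679624 localhost] and
		// IPs [192.168.85.2 127.0.0.1 ::1].
		DNSNames: []string{"embed-certs-679624", "localhost"},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.85.2"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("::1"),
		},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here for brevity (template doubles as parent).
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	fmt.Println(len(der), err)
}
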
	I0916 11:11:32.258180  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:34.258663  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:36.258970  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:33.692712  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:33.693191  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:33.693260  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:33.693318  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:33.729008  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:33.729033  254463 cri.go:89] found id: ""
	I0916 11:11:33.729043  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:33.729109  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:33.733530  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:33.733664  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:33.781978  254463 cri.go:89] found id: ""
	I0916 11:11:33.782012  254463 logs.go:276] 0 containers: []
	W0916 11:11:33.782023  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:33.782031  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:33.782097  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:33.834507  254463 cri.go:89] found id: ""
	I0916 11:11:33.834606  254463 logs.go:276] 0 containers: []
	W0916 11:11:33.834635  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:33.834670  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:33.834747  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:33.871434  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:33.871453  254463 cri.go:89] found id: ""
	I0916 11:11:33.871460  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:33.871506  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:33.876069  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:33.876139  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:33.939474  254463 cri.go:89] found id: ""
	I0916 11:11:33.939507  254463 logs.go:276] 0 containers: []
	W0916 11:11:33.939518  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:33.939525  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:33.939579  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:33.980476  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:33.980501  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:33.980507  254463 cri.go:89] found id: ""
	I0916 11:11:33.980514  254463 logs.go:276] 2 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:33.980577  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:33.984110  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:33.987346  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:33.987409  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:34.040605  254463 cri.go:89] found id: ""
	I0916 11:11:34.040633  254463 logs.go:276] 0 containers: []
	W0916 11:11:34.040644  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:34.040655  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:34.040719  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:34.077726  254463 cri.go:89] found id: ""
	I0916 11:11:34.077754  254463 logs.go:276] 0 containers: []
	W0916 11:11:34.077765  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:34.077783  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:34.077799  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:34.170123  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:34.170148  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:34.170162  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:34.230253  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:34.230291  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:34.271506  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:34.271533  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:34.327836  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:34.327865  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:34.448242  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:34.448278  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:34.471341  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:34.471385  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:34.521420  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:34.521454  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:34.601090  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:34.601130  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:37.138930  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:37.139314  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:37.139360  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:37.139403  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:37.180304  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:37.180327  254463 cri.go:89] found id: ""
	I0916 11:11:37.180335  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:37.180393  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:37.184635  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:37.184700  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:37.218889  254463 cri.go:89] found id: ""
	I0916 11:11:37.218917  254463 logs.go:276] 0 containers: []
	W0916 11:11:37.218928  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:37.218936  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:37.218992  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:37.256844  254463 cri.go:89] found id: ""
	I0916 11:11:37.256871  254463 logs.go:276] 0 containers: []
	W0916 11:11:37.256881  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:37.256888  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:37.256946  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:37.297431  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:37.297456  254463 cri.go:89] found id: ""
	I0916 11:11:37.297466  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:37.297526  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:37.301491  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:37.301548  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:37.337632  254463 cri.go:89] found id: ""
	I0916 11:11:37.337660  254463 logs.go:276] 0 containers: []
	W0916 11:11:37.337671  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:37.337682  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:37.337738  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:37.376904  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:37.376933  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:37.376939  254463 cri.go:89] found id: ""
	I0916 11:11:37.376950  254463 logs.go:276] 2 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:37.377006  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:37.380947  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:37.384225  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:37.384278  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:37.419944  254463 cri.go:89] found id: ""
	I0916 11:11:37.419974  254463 logs.go:276] 0 containers: []
	W0916 11:11:37.419985  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:37.419994  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:37.420047  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:37.454586  254463 cri.go:89] found id: ""
	I0916 11:11:37.454615  254463 logs.go:276] 0 containers: []
	W0916 11:11:37.454635  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:37.454651  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:37.454670  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:37.501786  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:37.501815  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:37.611024  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:37.611066  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:37.675810  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:37.675834  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:37.675858  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
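
Each retry in this loop starts with a probe of https://192.168.76.2:8443/healthz, and the "connect: connection refused" result is what triggers another round of log gathering. A minimal sketch of such a probe, under the assumption of a plain HTTP GET with a short timeout; TLS verification is skipped here purely for brevity, whereas minikube validates against the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	for i := 0; i < 5; i++ {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			// e.g. dial tcp 192.168.76.2:8443: connect: connection refused
			fmt.Println("stopped:", err)
			time.Sleep(3 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
}
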
	I0916 11:11:35.238935  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:37.737929  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:34.503090  288908 out.go:235]   - Booting up control plane ...
	I0916 11:11:34.503204  288908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:11:34.503307  288908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:11:34.503428  288908 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:11:34.512767  288908 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:11:34.518364  288908 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:11:34.518434  288908 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:11:34.609756  288908 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:11:34.609882  288908 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:11:35.111264  288908 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.674049ms
	I0916 11:11:35.111379  288908 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:11:40.113566  288908 kubeadm.go:310] [api-check] The API server is healthy after 5.002308876s
	I0916 11:11:40.124445  288908 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:11:40.136433  288908 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:11:40.158632  288908 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:11:40.158882  288908 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-679624 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:11:40.166356  288908 kubeadm.go:310] [bootstrap-token] Using token: 84spig.4y8nxn4hci96swit
	I0916 11:11:40.168019  288908 out.go:235]   - Configuring RBAC rules ...
	I0916 11:11:40.168133  288908 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:11:40.171476  288908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:11:40.177632  288908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:11:40.180530  288908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:11:40.183240  288908 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:11:40.187632  288908 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:11:40.520291  288908 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:11:40.953108  288908 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:11:41.520171  288908 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:11:41.520855  288908 kubeadm.go:310] 
	I0916 11:11:41.520935  288908 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:11:41.520944  288908 kubeadm.go:310] 
	I0916 11:11:41.521009  288908 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:11:41.521016  288908 kubeadm.go:310] 
	I0916 11:11:41.521036  288908 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:11:41.521083  288908 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:11:41.521124  288908 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:11:41.521130  288908 kubeadm.go:310] 
	I0916 11:11:41.521171  288908 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:11:41.521176  288908 kubeadm.go:310] 
	I0916 11:11:41.521214  288908 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:11:41.521219  288908 kubeadm.go:310] 
	I0916 11:11:41.521259  288908 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:11:41.521324  288908 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:11:41.521379  288908 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:11:41.521386  288908 kubeadm.go:310] 
	I0916 11:11:41.521450  288908 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:11:41.521511  288908 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:11:41.521517  288908 kubeadm.go:310] 
	I0916 11:11:41.521582  288908 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 84spig.4y8nxn4hci96swit \
	I0916 11:11:41.521679  288908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:11:41.521701  288908 kubeadm.go:310] 	--control-plane 
	I0916 11:11:41.521705  288908 kubeadm.go:310] 
	I0916 11:11:41.521785  288908 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:11:41.521793  288908 kubeadm.go:310] 
	I0916 11:11:41.521875  288908 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 84spig.4y8nxn4hci96swit \
	I0916 11:11:41.521955  288908 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 11:11:41.524979  288908 kubeadm.go:310] W0916 11:11:31.907821    1135 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:11:41.525354  288908 kubeadm.go:310] W0916 11:11:31.908743    1135 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:11:41.525562  288908 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:11:41.525672  288908 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:11:41.525704  288908 cni.go:84] Creating CNI manager for ""
	I0916 11:11:41.525719  288908 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:11:41.527698  288908 out.go:177] * Configuring CNI (Container Networking Interface) ...
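
The cni.go:143 line records the rule that picked the CNI here: docker driver plus containerd runtime means kindnet is recommended. A toy paraphrase of that decision, not minikube's actual code; other driver/runtime combinations are resolved by logic not shown in this log.

package main

import "fmt"

// chooseCNI paraphrases the decision logged at cni.go:143 above.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime == "containerd" {
		return "kindnet"
	}
	return "auto" // placeholder for the cases this log does not exercise
}

func main() {
	fmt.Println(chooseCNI("docker", "containerd")) // kindnet
}
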
	I0916 11:11:38.757671  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:40.761143  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:37.758822  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:37.758871  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:37.797236  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:37.797263  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:37.842272  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:37.842314  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:37.892228  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:37.892268  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:37.913264  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:37.913303  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:40.469419  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:40.469842  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:40.469897  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:40.469972  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:40.504839  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:40.504859  254463 cri.go:89] found id: ""
	I0916 11:11:40.504867  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:40.504910  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:40.509056  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:40.509144  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:40.544727  254463 cri.go:89] found id: ""
	I0916 11:11:40.544754  254463 logs.go:276] 0 containers: []
	W0916 11:11:40.544764  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:40.544769  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:40.544824  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:40.585143  254463 cri.go:89] found id: ""
	I0916 11:11:40.585177  254463 logs.go:276] 0 containers: []
	W0916 11:11:40.585188  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:40.585197  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:40.585253  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:40.618406  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:40.618433  254463 cri.go:89] found id: ""
	I0916 11:11:40.618442  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:40.618497  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:40.622183  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:40.622241  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:40.654226  254463 cri.go:89] found id: ""
	I0916 11:11:40.654257  254463 logs.go:276] 0 containers: []
	W0916 11:11:40.654270  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:40.654278  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:40.654338  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:40.704703  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:40.704731  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:40.704737  254463 cri.go:89] found id: ""
	I0916 11:11:40.704747  254463 logs.go:276] 2 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:40.704804  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:40.709695  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:40.714182  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:40.714283  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:40.769401  254463 cri.go:89] found id: ""
	I0916 11:11:40.769432  254463 logs.go:276] 0 containers: []
	W0916 11:11:40.769443  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:40.769450  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:40.769508  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:40.814114  254463 cri.go:89] found id: ""
	I0916 11:11:40.814180  254463 logs.go:276] 0 containers: []
	W0916 11:11:40.814203  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:40.814224  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:40.814242  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:40.923888  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:40.923942  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:40.954712  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:40.954756  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:41.019515  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:41.019535  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:41.019547  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:41.091866  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:41.091908  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:41.126670  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:41.126702  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:41.165890  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:41.165924  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:41.203538  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:41.203568  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:41.241297  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:41.241325  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:39.738817  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:41.739547  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:41.528971  288908 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:11:41.532973  288908 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:11:41.532990  288908 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:11:41.550641  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:11:41.759420  288908 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:11:41.759500  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:41.759538  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-679624 minikube.k8s.io/updated_at=2024_09_16T11_11_41_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=embed-certs-679624 minikube.k8s.io/primary=true
	I0916 11:11:41.843186  288908 ops.go:34] apiserver oom_adj: -16
	I0916 11:11:41.843192  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:42.344100  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:42.843846  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:43.343804  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:43.843597  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:44.344103  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:44.843919  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:45.344112  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:45.843558  288908 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:11:45.931329  288908 kubeadm.go:1113] duration metric: took 4.171896183s to wait for elevateKubeSystemPrivileges
	I0916 11:11:45.931371  288908 kubeadm.go:394] duration metric: took 14.217250544s to StartCluster
	I0916 11:11:45.931395  288908 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:45.931468  288908 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:11:45.933917  288908 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:11:45.934189  288908 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:11:45.934349  288908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:11:45.934378  288908 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:11:45.934476  288908 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-679624"
	I0916 11:11:45.934514  288908 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-679624"
	I0916 11:11:45.934555  288908 host.go:66] Checking if "embed-certs-679624" exists ...
	I0916 11:11:45.934544  288908 addons.go:69] Setting default-storageclass=true in profile "embed-certs-679624"
	I0916 11:11:45.934561  288908 config.go:182] Loaded profile config "embed-certs-679624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:11:45.934573  288908 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-679624"
	I0916 11:11:45.935002  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:45.935187  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:45.936273  288908 out.go:177] * Verifying Kubernetes components...
	I0916 11:11:45.937809  288908 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:11:45.969287  288908 addons.go:234] Setting addon default-storageclass=true in "embed-certs-679624"
	I0916 11:11:45.969351  288908 host.go:66] Checking if "embed-certs-679624" exists ...
	I0916 11:11:45.969852  288908 cli_runner.go:164] Run: docker container inspect embed-certs-679624 --format={{.State.Status}}
	I0916 11:11:45.974500  288908 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:11:43.257133  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:45.258494  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:45.975949  288908 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:11:45.975972  288908 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:11:45.976045  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:45.990299  288908 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:11:45.990325  288908 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:11:45.990383  288908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-679624
	I0916 11:11:45.994530  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:46.007683  288908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/embed-certs-679624/id_rsa Username:docker}
	I0916 11:11:46.233531  288908 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:11:46.234917  288908 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:11:46.241775  288908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:11:46.249620  288908 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:11:46.762554  288908 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0916 11:11:46.764311  288908 node_ready.go:35] waiting up to 6m0s for node "embed-certs-679624" to be "Ready" ...
	I0916 11:11:46.821592  288908 node_ready.go:49] node "embed-certs-679624" has status "Ready":"True"
	I0916 11:11:46.821625  288908 node_ready.go:38] duration metric: took 57.288494ms for node "embed-certs-679624" to be "Ready" ...
	I0916 11:11:46.821637  288908 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:11:46.831195  288908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace to be "Ready" ...
	I0916 11:11:47.181058  288908 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 11:11:43.787247  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:43.787686  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:43.787788  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:43.787845  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:43.820358  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:43.820379  254463 cri.go:89] found id: ""
	I0916 11:11:43.820386  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:43.820429  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:43.823977  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:43.824036  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:43.858303  254463 cri.go:89] found id: ""
	I0916 11:11:43.858331  254463 logs.go:276] 0 containers: []
	W0916 11:11:43.858342  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:43.858350  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:43.858410  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:43.896708  254463 cri.go:89] found id: ""
	I0916 11:11:43.896738  254463 logs.go:276] 0 containers: []
	W0916 11:11:43.896750  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:43.896758  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:43.896818  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:43.930745  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:43.930785  254463 cri.go:89] found id: ""
	I0916 11:11:43.930794  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:43.930857  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:43.934261  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:43.934324  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:43.967505  254463 cri.go:89] found id: ""
	I0916 11:11:43.967532  254463 logs.go:276] 0 containers: []
	W0916 11:11:43.967542  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:43.967549  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:43.967609  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:44.001802  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:44.001822  254463 cri.go:89] found id: "5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:44.001826  254463 cri.go:89] found id: ""
	I0916 11:11:44.001833  254463 logs.go:276] 2 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0]
	I0916 11:11:44.001877  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:44.005500  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:44.008954  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:44.009028  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:44.042735  254463 cri.go:89] found id: ""
	I0916 11:11:44.042758  254463 logs.go:276] 0 containers: []
	W0916 11:11:44.042766  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:44.042771  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:44.042825  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:44.076718  254463 cri.go:89] found id: ""
	I0916 11:11:44.076741  254463 logs.go:276] 0 containers: []
	W0916 11:11:44.076749  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:44.076760  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:44.076770  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:44.124987  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:44.125027  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:44.197752  254463 logs.go:123] Gathering logs for kube-controller-manager [5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0] ...
	I0916 11:11:44.197791  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a1bd1f31f805aca1761f6d2456fb0e2b861dd1afe8436c26061e0b00e90dbb0"
	I0916 11:11:44.231307  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:44.231335  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:44.292499  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:44.292527  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:44.292542  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:44.328765  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:44.328796  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:44.366047  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:44.366073  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:44.403288  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:44.403313  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:44.498895  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:44.498933  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:47.021378  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:47.021855  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:47.021915  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:47.021977  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:47.074174  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:47.074248  254463 cri.go:89] found id: ""
	I0916 11:11:47.074262  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:47.074560  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:47.078609  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:47.078682  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:47.111355  254463 cri.go:89] found id: ""
	I0916 11:11:47.111380  254463 logs.go:276] 0 containers: []
	W0916 11:11:47.111388  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:47.111396  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:47.111446  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:47.154273  254463 cri.go:89] found id: ""
	I0916 11:11:47.154301  254463 logs.go:276] 0 containers: []
	W0916 11:11:47.154313  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:47.154321  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:47.154380  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:47.196698  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:47.196719  254463 cri.go:89] found id: ""
	I0916 11:11:47.196728  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:47.196793  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:47.200205  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:47.200282  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:47.239306  254463 cri.go:89] found id: ""
	I0916 11:11:47.239328  254463 logs.go:276] 0 containers: []
	W0916 11:11:47.239336  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:47.239341  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:47.239388  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:47.275473  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:47.275494  254463 cri.go:89] found id: ""
	I0916 11:11:47.275501  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:11:47.275547  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:47.279217  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:47.279271  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:47.312601  254463 cri.go:89] found id: ""
	I0916 11:11:47.312630  254463 logs.go:276] 0 containers: []
	W0916 11:11:47.312643  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:47.312651  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:47.312703  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:47.351786  254463 cri.go:89] found id: ""
	I0916 11:11:47.351818  254463 logs.go:276] 0 containers: []
	W0916 11:11:47.351830  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:47.351841  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:47.351856  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:47.388358  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:47.388390  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:47.458891  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:47.458925  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:47.495067  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:47.495095  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:47.556395  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:47.556436  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:47.606059  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:47.606089  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
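
The block above is one full diagnostics pass: with the apiserver unreachable, minikube enumerates each control-plane container by name over SSH and then tails whatever logs it can find. A minimal Go sketch of the enumeration step follows; the crictl flags are taken verbatim from the log, but the program itself is illustrative only (it assumes crictl is installed on the host where it runs) and is not minikube's actual implementation in cri.go.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// One "crictl ps -a --quiet --name=<component>" call per component,
	// matching the cycle in the log above.
	for _, name := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	} {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		// --quiet prints one container ID per line; an empty result is
		// the "found id: \"\" / 0 containers" case seen in the log.
		ids := strings.Fields(string(out))
		fmt.Printf("%-24s %d container(s): %v\n", name, len(ids), ids)
	}
}
```
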
	I0916 11:11:44.237845  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:46.240764  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:48.737615  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:47.182277  288908 addons.go:510] duration metric: took 1.24791353s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:11:47.267907  288908 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-679624" context rescaled to 1 replicas
	I0916 11:11:48.836602  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:47.757335  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:49.757395  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:47.703200  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:47.703236  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:47.724642  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:47.724684  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:47.783498  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
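
The describe-nodes failure above and the healthz probes that follow share one root cause: nothing is accepting connections on port 8443 yet. The retry loop can be approximated with the short Go sketch below; the URL and the connection-refused behavior mirror the log, while the client setup (including skipping TLS verification, since the apiserver serves minikube-generated certificates) is an assumption for illustration, not minikube's api_server.go.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Illustrative only: skip cert verification because the
			// bootstrap apiserver uses a self-signed minikube CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.76.2:8443/healthz"
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			// Corresponds to the "stopped: ... connection refused"
			// lines: the container exists but is not serving yet.
			fmt.Printf("stopped: %s: %v\n", url, err)
			time.Sleep(3 * time.Second)
			continue
		}
		fmt.Printf("%s returned %d\n", url, resp.StatusCode)
		resp.Body.Close()
		return
	}
}
```
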
	I0916 11:11:50.283928  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:50.284374  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:50.284423  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:50.284474  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:50.316834  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:50.316862  254463 cri.go:89] found id: ""
	I0916 11:11:50.316873  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:50.316935  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:50.320355  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:50.320432  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:50.352376  254463 cri.go:89] found id: ""
	I0916 11:11:50.352396  254463 logs.go:276] 0 containers: []
	W0916 11:11:50.352405  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:50.352412  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:50.352472  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:50.387428  254463 cri.go:89] found id: ""
	I0916 11:11:50.387468  254463 logs.go:276] 0 containers: []
	W0916 11:11:50.387479  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:50.387487  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:50.387537  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:50.420454  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:50.420473  254463 cri.go:89] found id: ""
	I0916 11:11:50.420479  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:50.420521  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:50.423917  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:50.423975  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:50.458163  254463 cri.go:89] found id: ""
	I0916 11:11:50.458184  254463 logs.go:276] 0 containers: []
	W0916 11:11:50.458192  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:50.458199  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:50.458251  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:50.490942  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:50.490970  254463 cri.go:89] found id: ""
	I0916 11:11:50.490980  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:11:50.491034  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:50.494494  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:50.494557  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:50.525559  254463 cri.go:89] found id: ""
	I0916 11:11:50.525586  254463 logs.go:276] 0 containers: []
	W0916 11:11:50.525597  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:50.525605  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:50.525669  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:50.557477  254463 cri.go:89] found id: ""
	I0916 11:11:50.557499  254463 logs.go:276] 0 containers: []
	W0916 11:11:50.557507  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:50.557522  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:50.557534  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:50.604317  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:50.604355  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:50.641507  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:50.641536  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:50.730228  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:50.730266  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:50.756357  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:50.756403  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:50.815959  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:50.815992  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:50.816005  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:50.853332  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:50.853362  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:50.922239  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:50.922282  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:50.739091  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:53.238404  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:50.837082  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:53.337228  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:52.257690  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:54.758372  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:53.459773  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:53.460269  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:53.460322  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:53.460371  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:53.495261  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:53.495288  254463 cri.go:89] found id: ""
	I0916 11:11:53.495298  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:53.495359  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:53.499351  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:53.499415  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:53.532686  254463 cri.go:89] found id: ""
	I0916 11:11:53.532716  254463 logs.go:276] 0 containers: []
	W0916 11:11:53.532728  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:53.532736  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:53.532788  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:53.568013  254463 cri.go:89] found id: ""
	I0916 11:11:53.568043  254463 logs.go:276] 0 containers: []
	W0916 11:11:53.568054  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:53.568062  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:53.568117  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:53.601908  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:53.601931  254463 cri.go:89] found id: ""
	I0916 11:11:53.601938  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:53.601983  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:53.605669  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:53.605742  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:53.638394  254463 cri.go:89] found id: ""
	I0916 11:11:53.638420  254463 logs.go:276] 0 containers: []
	W0916 11:11:53.638428  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:53.638441  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:53.638484  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:53.670648  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:53.670669  254463 cri.go:89] found id: ""
	I0916 11:11:53.670678  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:11:53.670736  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:53.674142  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:53.674193  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:53.707669  254463 cri.go:89] found id: ""
	I0916 11:11:53.707698  254463 logs.go:276] 0 containers: []
	W0916 11:11:53.707708  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:53.707714  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:53.707825  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:53.742075  254463 cri.go:89] found id: ""
	I0916 11:11:53.742102  254463 logs.go:276] 0 containers: []
	W0916 11:11:53.742113  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:53.742125  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:53.742140  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:53.811381  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:53.811415  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:53.846858  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:53.846888  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:53.891595  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:53.891630  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:53.925443  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:53.925468  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:54.015424  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:54.015460  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:54.036290  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:54.036325  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:54.096466  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:54.096489  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:54.096503  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:56.631912  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:56.632364  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:56.632424  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:56.632484  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:56.665467  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:56.665487  254463 cri.go:89] found id: ""
	I0916 11:11:56.665494  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:56.665540  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:56.669053  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:56.669132  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:56.701684  254463 cri.go:89] found id: ""
	I0916 11:11:56.701710  254463 logs.go:276] 0 containers: []
	W0916 11:11:56.701721  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:56.701728  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:56.701790  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:56.737251  254463 cri.go:89] found id: ""
	I0916 11:11:56.737289  254463 logs.go:276] 0 containers: []
	W0916 11:11:56.737300  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:56.737309  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:56.737369  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:56.771303  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:56.771332  254463 cri.go:89] found id: ""
	I0916 11:11:56.771340  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:56.771382  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:56.774735  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:56.774801  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:56.807663  254463 cri.go:89] found id: ""
	I0916 11:11:56.807685  254463 logs.go:276] 0 containers: []
	W0916 11:11:56.807693  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:56.807698  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:56.807788  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:11:56.841120  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:56.841142  254463 cri.go:89] found id: ""
	I0916 11:11:56.841156  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:11:56.841200  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:56.844692  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:11:56.844748  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:11:56.877007  254463 cri.go:89] found id: ""
	I0916 11:11:56.877028  254463 logs.go:276] 0 containers: []
	W0916 11:11:56.877036  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:11:56.877041  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:11:56.877088  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:11:56.909108  254463 cri.go:89] found id: ""
	I0916 11:11:56.909136  254463 logs.go:276] 0 containers: []
	W0916 11:11:56.909147  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:11:56.909157  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:11:56.909168  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:11:56.955888  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:11:56.955935  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:11:56.993135  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:11:56.993180  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:11:57.082361  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:11:57.082402  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:11:57.103865  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:11:57.103902  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:11:57.164129  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:11:57.164146  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:11:57.164158  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:57.200538  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:11:57.200568  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:57.273343  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:11:57.273378  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:11:55.738690  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:58.238544  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:55.337472  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:57.838821  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:57.257474  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:11:59.756980  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:01.757290  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
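
Interleaved with the 254463 retry loop, three other test processes (274695, 283294, 288908) are polling pod readiness. Below is a hedged sketch of that poll, shelling out to kubectl rather than using minikube's client-go based pod_ready.go; the pod name is taken from the log and the 6-minute budget matches the "waiting up to 6m0s" lines, but everything else is an assumption for illustration.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	pod, ns := "metrics-server-6867b74b74-zw8sx", "kube-system"
	// Extract the pod's Ready condition via kubectl's jsonpath output.
	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "-n", ns, "get", "pod", pod,
			"-o", "jsonpath="+jsonpath).Output()
		status := strings.TrimSpace(string(out))
		if err == nil && status == "True" {
			fmt.Printf("pod %q in %q namespace is Ready\n", pod, ns)
			return
		}
		// Matches the recurring `has status "Ready":"False"` log lines.
		fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", pod, ns, status)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```
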
	I0916 11:11:59.806641  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:11:59.807071  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:11:59.807129  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:11:59.807189  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:11:59.841203  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:11:59.841230  254463 cri.go:89] found id: ""
	I0916 11:11:59.841242  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:11:59.841300  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:59.845256  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:11:59.845334  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:11:59.883444  254463 cri.go:89] found id: ""
	I0916 11:11:59.883480  254463 logs.go:276] 0 containers: []
	W0916 11:11:59.883489  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:11:59.883495  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:11:59.883555  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:11:59.917754  254463 cri.go:89] found id: ""
	I0916 11:11:59.917777  254463 logs.go:276] 0 containers: []
	W0916 11:11:59.917788  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:11:59.917795  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:11:59.917863  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:11:59.956094  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:11:59.956119  254463 cri.go:89] found id: ""
	I0916 11:11:59.956133  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:11:59.956190  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:11:59.959827  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:11:59.959913  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:11:59.999060  254463 cri.go:89] found id: ""
	I0916 11:11:59.999087  254463 logs.go:276] 0 containers: []
	W0916 11:11:59.999097  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:11:59.999105  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:11:59.999173  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:12:00.034193  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:12:00.034214  254463 cri.go:89] found id: ""
	I0916 11:12:00.034223  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:12:00.034285  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:12:00.037736  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:12:00.037798  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:12:00.070142  254463 cri.go:89] found id: ""
	I0916 11:12:00.070169  254463 logs.go:276] 0 containers: []
	W0916 11:12:00.070177  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:12:00.070183  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:12:00.070231  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:12:00.103691  254463 cri.go:89] found id: ""
	I0916 11:12:00.103716  254463 logs.go:276] 0 containers: []
	W0916 11:12:00.103724  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:12:00.103773  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:12:00.103790  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:12:00.137085  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:12:00.137111  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:12:00.185521  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:12:00.185555  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:12:00.221687  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:12:00.221717  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:12:00.313223  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:12:00.313269  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:12:00.337700  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:12:00.337742  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:12:00.396098  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:12:00.396119  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:12:00.396130  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:12:00.433027  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:12:00.433077  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:12:00.337500  288908 pod_ready.go:103] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:01.337371  288908 pod_ready.go:93] pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.337393  288908 pod_ready.go:82] duration metric: took 14.506166654s for pod "coredns-7c65d6cfc9-dmv6t" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.337404  288908 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-x4f6n" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.339056  288908 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-x4f6n" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-x4f6n" not found
	I0916 11:12:01.339081  288908 pod_ready.go:82] duration metric: took 1.668579ms for pod "coredns-7c65d6cfc9-x4f6n" in "kube-system" namespace to be "Ready" ...
	E0916 11:12:01.339093  288908 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-x4f6n" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-x4f6n" not found
	I0916 11:12:01.339102  288908 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.342921  288908 pod_ready.go:93] pod "etcd-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.342938  288908 pod_ready.go:82] duration metric: took 3.82908ms for pod "etcd-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.342949  288908 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.346533  288908 pod_ready.go:93] pod "kube-apiserver-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.346552  288908 pod_ready.go:82] duration metric: took 3.596798ms for pod "kube-apiserver-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.346560  288908 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.350192  288908 pod_ready.go:93] pod "kube-controller-manager-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.350208  288908 pod_ready.go:82] duration metric: took 3.643463ms for pod "kube-controller-manager-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.350217  288908 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bt6k2" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.535253  288908 pod_ready.go:93] pod "kube-proxy-bt6k2" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.535276  288908 pod_ready.go:82] duration metric: took 185.05015ms for pod "kube-proxy-bt6k2" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.535286  288908 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.935780  288908 pod_ready.go:93] pod "kube-scheduler-embed-certs-679624" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:01.935805  288908 pod_ready.go:82] duration metric: took 400.511614ms for pod "kube-scheduler-embed-certs-679624" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:01.935814  288908 pod_ready.go:39] duration metric: took 15.114148588s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:12:01.935828  288908 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:12:01.935879  288908 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:12:01.948406  288908 api_server.go:72] duration metric: took 16.014183768s to wait for apiserver process to appear ...
	I0916 11:12:01.948432  288908 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:12:01.948456  288908 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0916 11:12:01.952961  288908 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0916 11:12:01.954088  288908 api_server.go:141] control plane version: v1.31.1
	I0916 11:12:01.954120  288908 api_server.go:131] duration metric: took 5.681186ms to wait for apiserver health ...
	I0916 11:12:01.954129  288908 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:12:02.138246  288908 system_pods.go:59] 8 kube-system pods found
	I0916 11:12:02.138282  288908 system_pods.go:61] "coredns-7c65d6cfc9-dmv6t" [95a9589e-1385-4fb0-8b68-fb26098daf01] Running
	I0916 11:12:02.138288  288908 system_pods.go:61] "etcd-embed-certs-679624" [b351fd38-6c30-4fa6-b5da-159580232c88] Running
	I0916 11:12:02.138294  288908 system_pods.go:61] "kindnet-78kp5" [795aad3a-f96f-4477-9bd5-49a233890f1e] Running
	I0916 11:12:02.138303  288908 system_pods.go:61] "kube-apiserver-embed-certs-679624" [858cb7c8-8e66-4952-bf89-228525cffafb] Running
	I0916 11:12:02.138309  288908 system_pods.go:61] "kube-controller-manager-embed-certs-679624" [f6aaf262-2f80-4875-bbbf-1b5918a23787] Running
	I0916 11:12:02.138314  288908 system_pods.go:61] "kube-proxy-bt6k2" [cae0a1e2-c041-4c6c-8772-978a2c544879] Running
	I0916 11:12:02.138320  288908 system_pods.go:61] "kube-scheduler-embed-certs-679624" [3eb8a130-09a9-4ae6-b12a-6dff3309cbae] Running
	I0916 11:12:02.138328  288908 system_pods.go:61] "storage-provisioner" [3b5477b8-ac39-4acc-9e16-a13a7b1d3e10] Running
	I0916 11:12:02.138334  288908 system_pods.go:74] duration metric: took 184.199914ms to wait for pod list to return data ...
	I0916 11:12:02.138346  288908 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:12:02.335554  288908 default_sa.go:45] found service account: "default"
	I0916 11:12:02.335581  288908 default_sa.go:55] duration metric: took 197.225628ms for default service account to be created ...
	I0916 11:12:02.335592  288908 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:12:02.537944  288908 system_pods.go:86] 8 kube-system pods found
	I0916 11:12:02.537972  288908 system_pods.go:89] "coredns-7c65d6cfc9-dmv6t" [95a9589e-1385-4fb0-8b68-fb26098daf01] Running
	I0916 11:12:02.537977  288908 system_pods.go:89] "etcd-embed-certs-679624" [b351fd38-6c30-4fa6-b5da-159580232c88] Running
	I0916 11:12:02.537981  288908 system_pods.go:89] "kindnet-78kp5" [795aad3a-f96f-4477-9bd5-49a233890f1e] Running
	I0916 11:12:02.537985  288908 system_pods.go:89] "kube-apiserver-embed-certs-679624" [858cb7c8-8e66-4952-bf89-228525cffafb] Running
	I0916 11:12:02.537989  288908 system_pods.go:89] "kube-controller-manager-embed-certs-679624" [f6aaf262-2f80-4875-bbbf-1b5918a23787] Running
	I0916 11:12:02.537992  288908 system_pods.go:89] "kube-proxy-bt6k2" [cae0a1e2-c041-4c6c-8772-978a2c544879] Running
	I0916 11:12:02.537995  288908 system_pods.go:89] "kube-scheduler-embed-certs-679624" [3eb8a130-09a9-4ae6-b12a-6dff3309cbae] Running
	I0916 11:12:02.538000  288908 system_pods.go:89] "storage-provisioner" [3b5477b8-ac39-4acc-9e16-a13a7b1d3e10] Running
	I0916 11:12:02.538009  288908 system_pods.go:126] duration metric: took 202.410695ms to wait for k8s-apps to be running ...
	I0916 11:12:02.538017  288908 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:12:02.538066  288908 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:12:02.549283  288908 system_svc.go:56] duration metric: took 11.252338ms WaitForService to wait for kubelet
	I0916 11:12:02.549315  288908 kubeadm.go:582] duration metric: took 16.615095592s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:12:02.549372  288908 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:12:02.736116  288908 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:12:02.736146  288908 node_conditions.go:123] node cpu capacity is 8
	I0916 11:12:02.736168  288908 node_conditions.go:105] duration metric: took 186.790688ms to run NodePressure ...
	I0916 11:12:02.736182  288908 start.go:241] waiting for startup goroutines ...
	I0916 11:12:02.736190  288908 start.go:246] waiting for cluster config update ...
	I0916 11:12:02.736206  288908 start.go:255] writing updated cluster config ...
	I0916 11:12:02.736490  288908 ssh_runner.go:195] Run: rm -f paused
	I0916 11:12:02.743407  288908 out.go:177] * Done! kubectl is now configured to use "embed-certs-679624" cluster and "default" namespace by default
	E0916 11:12:02.744289  288908 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
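
The "exec format error" above is worth flagging: it normally means the binary at /usr/local/bin/kubectl does not match the host architecture (for example, an arm64 kubectl on an amd64 runner), so the post-start kubectl info check fails even though the cluster itself came up. A small diagnostic sketch, not part of minikube, that reads the ELF header of a suspect binary and compares it with the host:

```go
package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
)

func main() {
	// Default path taken from the error message above; any binary can
	// be passed as the first argument instead.
	path := "/usr/local/bin/kubectl"
	if len(os.Args) > 1 {
		path = os.Args[1]
	}
	f, err := elf.Open(path)
	if err != nil {
		fmt.Printf("%s is not a readable ELF binary: %v\n", path, err)
		return
	}
	defer f.Close()
	// A mismatch between the ELF machine type and the host GOARCH is
	// the classic cause of "exec format error".
	fmt.Printf("%s machine=%s class=%s, host GOARCH=%s\n",
		path, f.Machine, f.Class, runtime.GOARCH)
}
```
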
	I0916 11:12:00.240179  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:02.738229  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:03.757341  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:05.757854  283294 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:03.001710  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:12:03.002233  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:12:03.002302  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:12:03.002362  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:12:03.042639  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:12:03.042663  254463 cri.go:89] found id: ""
	I0916 11:12:03.042671  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:12:03.042724  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:12:03.046119  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:12:03.046184  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:12:03.082990  254463 cri.go:89] found id: ""
	I0916 11:12:03.083019  254463 logs.go:276] 0 containers: []
	W0916 11:12:03.083029  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:12:03.083036  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:12:03.083093  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:12:03.122289  254463 cri.go:89] found id: ""
	I0916 11:12:03.122320  254463 logs.go:276] 0 containers: []
	W0916 11:12:03.122332  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:12:03.122341  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:12:03.122404  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:12:03.158843  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:12:03.158871  254463 cri.go:89] found id: ""
	I0916 11:12:03.158880  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:12:03.158938  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:12:03.162826  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:12:03.162897  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:12:03.202157  254463 cri.go:89] found id: ""
	I0916 11:12:03.202178  254463 logs.go:276] 0 containers: []
	W0916 11:12:03.202188  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:12:03.202195  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:12:03.202257  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:12:03.242880  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:12:03.242901  254463 cri.go:89] found id: ""
	I0916 11:12:03.242910  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:12:03.242966  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:12:03.247156  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:12:03.247232  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:12:03.288527  254463 cri.go:89] found id: ""
	I0916 11:12:03.288550  254463 logs.go:276] 0 containers: []
	W0916 11:12:03.288558  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:12:03.288563  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:12:03.288605  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:12:03.324425  254463 cri.go:89] found id: ""
	I0916 11:12:03.324459  254463 logs.go:276] 0 containers: []
	W0916 11:12:03.324471  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:12:03.324481  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:12:03.324496  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:12:03.370142  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:12:03.370188  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:12:03.479094  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:12:03.479132  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:12:03.505999  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:12:03.506032  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:12:03.593223  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:12:03.593251  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:12:03.593266  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:12:03.637905  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:12:03.637932  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:12:03.714153  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:12:03.714187  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:12:03.763936  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:12:03.763971  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:12:06.311929  254463 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0916 11:12:06.312336  254463 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0916 11:12:06.312398  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:12:06.312453  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:12:06.347022  254463 cri.go:89] found id: "f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:12:06.347043  254463 cri.go:89] found id: ""
	I0916 11:12:06.347050  254463 logs.go:276] 1 containers: [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836]
	I0916 11:12:06.347101  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:12:06.350688  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:12:06.350755  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:12:06.385442  254463 cri.go:89] found id: ""
	I0916 11:12:06.385468  254463 logs.go:276] 0 containers: []
	W0916 11:12:06.385477  254463 logs.go:278] No container was found matching "etcd"
	I0916 11:12:06.385485  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:12:06.385538  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:12:06.421630  254463 cri.go:89] found id: ""
	I0916 11:12:06.421655  254463 logs.go:276] 0 containers: []
	W0916 11:12:06.421666  254463 logs.go:278] No container was found matching "coredns"
	I0916 11:12:06.421674  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:12:06.421726  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:12:06.458472  254463 cri.go:89] found id: "c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:12:06.458499  254463 cri.go:89] found id: ""
	I0916 11:12:06.458508  254463 logs.go:276] 1 containers: [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68]
	I0916 11:12:06.458571  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:12:06.462339  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:12:06.462416  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:12:06.502319  254463 cri.go:89] found id: ""
	I0916 11:12:06.502352  254463 logs.go:276] 0 containers: []
	W0916 11:12:06.502364  254463 logs.go:278] No container was found matching "kube-proxy"
	I0916 11:12:06.502372  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:12:06.502423  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:12:06.541298  254463 cri.go:89] found id: "1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:12:06.541324  254463 cri.go:89] found id: ""
	I0916 11:12:06.541334  254463 logs.go:276] 1 containers: [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff]
	I0916 11:12:06.541382  254463 ssh_runner.go:195] Run: which crictl
	I0916 11:12:06.545141  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:12:06.545204  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:12:06.582720  254463 cri.go:89] found id: ""
	I0916 11:12:06.582751  254463 logs.go:276] 0 containers: []
	W0916 11:12:06.582762  254463 logs.go:278] No container was found matching "kindnet"
	I0916 11:12:06.582771  254463 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:12:06.582830  254463 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:12:06.620616  254463 cri.go:89] found id: ""
	I0916 11:12:06.620644  254463 logs.go:276] 0 containers: []
	W0916 11:12:06.620657  254463 logs.go:278] No container was found matching "storage-provisioner"
	I0916 11:12:06.620668  254463 logs.go:123] Gathering logs for dmesg ...
	I0916 11:12:06.620684  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:12:06.642176  254463 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:12:06.642208  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0916 11:12:06.701485  254463 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0916 11:12:06.701508  254463 logs.go:123] Gathering logs for kube-apiserver [f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836] ...
	I0916 11:12:06.701524  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f301e284d038ef336f00a8eceb1f9bbdd51a693deddf56d1eeee4bdb2aab7836"
	I0916 11:12:06.742106  254463 logs.go:123] Gathering logs for kube-scheduler [c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68] ...
	I0916 11:12:06.742141  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c74e25c8311e61750d6cbe9f1f995ca86e16172310b49c2c288a610706f6bd68"
	I0916 11:12:06.816975  254463 logs.go:123] Gathering logs for kube-controller-manager [1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff] ...
	I0916 11:12:06.817018  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1900ff0ede4d8f7e6924cb3306c8f26c6ac6ec0cd8a69acae49fee187bba6bff"
	I0916 11:12:06.855707  254463 logs.go:123] Gathering logs for containerd ...
	I0916 11:12:06.855783  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:12:06.900704  254463 logs.go:123] Gathering logs for container status ...
	I0916 11:12:06.900751  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:12:06.937869  254463 logs.go:123] Gathering logs for kubelet ...
	I0916 11:12:06.937904  254463 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3dab298bfe5b5       c69fa2e9cbf5f       8 seconds ago       Running             coredns                   0                   c9b661400e384       coredns-7c65d6cfc9-dmv6t
	f590d121c5d6d       6e38f40d628db       20 seconds ago      Running             storage-provisioner       0                   985f1b4472131       storage-provisioner
	2dbb170a519e8       12968670680f4       21 seconds ago      Running             kindnet-cni               0                   06e595c1fc81f       kindnet-78kp5
	c182b9d7c07df       60c005f310ff3       22 seconds ago      Running             kube-proxy                0                   d47fcd0c3fa57       kube-proxy-bt6k2
	debbdc082cc9c       6bab7719df100       32 seconds ago      Running             kube-apiserver            0                   9df038a9105dc       kube-apiserver-embed-certs-679624
	7637dc0ee3d4d       9aa1fad941575       32 seconds ago      Running             kube-scheduler            0                   ba28ed2ba4c4a       kube-scheduler-embed-certs-679624
	98ba0135cf4f3       175ffd71cce3d       32 seconds ago      Running             kube-controller-manager   0                   ab668cab99a4f       kube-controller-manager-embed-certs-679624
	e7db7be77ed78       2e96e5913fc06       32 seconds ago      Running             etcd                      0                   c206875f93f94       etcd-embed-certs-679624
	
	
	==> containerd <==
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.566480877Z" level=info msg="CreateContainer within sandbox \"985f1b4472131607d8a9ef9844a23f17639f03d828fdfa8fe9bcdbdf16a1125e\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.580004902Z" level=info msg="CreateContainer within sandbox \"985f1b4472131607d8a9ef9844a23f17639f03d828fdfa8fe9bcdbdf16a1125e\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f590d121c5d6d1937cefe41166b62f9f511f34157bec2b92d38767e94a5b8ba8\""
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.580646352Z" level=info msg="StartContainer for \"f590d121c5d6d1937cefe41166b62f9f511f34157bec2b92d38767e94a5b8ba8\""
	Sep 16 11:11:47 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:47.633530791Z" level=info msg="StartContainer for \"f590d121c5d6d1937cefe41166b62f9f511f34157bec2b92d38767e94a5b8ba8\" returns successfully"
	Sep 16 11:11:51 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:51.239254534Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.836571108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dmv6t,Uid:95a9589e-1385-4fb0-8b68-fb26098daf01,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.877183985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.877991138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.878020603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.878153724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.928098331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dmv6t,Uid:95a9589e-1385-4fb0-8b68-fb26098daf01,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9b661400e3843a52856b80e1ea45e1fd13416f005a9e149a0718e77a406f2e0\""
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.931020108Z" level=info msg="CreateContainer within sandbox \"c9b661400e3843a52856b80e1ea45e1fd13416f005a9e149a0718e77a406f2e0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.946892222Z" level=info msg="CreateContainer within sandbox \"c9b661400e3843a52856b80e1ea45e1fd13416f005a9e149a0718e77a406f2e0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d\""
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.947484287Z" level=info msg="StartContainer for \"3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d\""
	Sep 16 11:11:59 embed-certs-679624 containerd[865]: time="2024-09-16T11:11:59.995349695Z" level=info msg="StartContainer for \"3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d\" returns successfully"
	Sep 16 11:12:07 embed-certs-679624 containerd[865]: time="2024-09-16T11:12:07.540720530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-6867b74b74-qgvl9,Uid:b0d684f3-ff91-4996-8d9d-23936b12c814,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:12:07 embed-certs-679624 containerd[865]: time="2024-09-16T11:12:07.578141966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:12:07 embed-certs-679624 containerd[865]: time="2024-09-16T11:12:07.578209020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:12:07 embed-certs-679624 containerd[865]: time="2024-09-16T11:12:07.578219981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:12:07 embed-certs-679624 containerd[865]: time="2024-09-16T11:12:07.578325364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:12:07 embed-certs-679624 containerd[865]: time="2024-09-16T11:12:07.627576240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-6867b74b74-qgvl9,Uid:b0d684f3-ff91-4996-8d9d-23936b12c814,Namespace:kube-system,Attempt:0,} returns sandbox id \"284d3c4c2319cd2683af6528ee7d265bb44bfbf11c934414d0c2d6228a151035\""
	Sep 16 11:12:07 embed-certs-679624 containerd[865]: time="2024-09-16T11:12:07.629813604Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:12:07 embed-certs-679624 containerd[865]: time="2024-09-16T11:12:07.651892820Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 16 11:12:07 embed-certs-679624 containerd[865]: time="2024-09-16T11:12:07.653451548Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 16 11:12:07 embed-certs-679624 containerd[865]: time="2024-09-16T11:12:07.653527469Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
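
The fake.domain pull failure above is almost certainly self-inflicted rather than a containerd problem: an image path like fake.domain/registry.k8s.io/echoserver:1.4 only arises when the metrics-server addon is enabled with a registry override, which these StartStop tests appear to use precisely so the pod never pulls a real image. A minimal sketch of such an override (the --images/--registries flags exist on minikube addons enable; the exact invocation used by this suite is an assumption):

    # Hypothetical re-creation of the override seen in the log; fake.domain is
    # intentionally unresolvable, so the pod parks in ImagePullBackOff.
    out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-679624 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain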
	
	
	==> coredns [3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55078 - 62834 "HINFO IN 5079472268666806265.2239314299196871410. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008456339s
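
The long random-label HINFO query above is CoreDNS's loop plugin probing itself at startup; an NXDOMAIN answer means no forwarding loop was detected, so this CoreDNS instance is healthy. A quick in-cluster sanity check if DNS were in doubt (standard kubectl; the busybox tag is an arbitrary assumption):

    kubectl -n kube-system get pods -l k8s-app=kube-dns
    kubectl run dns-probe --rm -i --image=busybox:1.36 --restart=Never -- \
        nslookup kubernetes.default.svc.cluster.local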
	
	
	==> describe nodes <==
	Name:               embed-certs-679624
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-679624
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=embed-certs-679624
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_11_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:11:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-679624
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:12:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:11:51 +0000   Mon, 16 Sep 2024 11:11:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:11:51 +0000   Mon, 16 Sep 2024 11:11:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:11:51 +0000   Mon, 16 Sep 2024 11:11:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:11:51 +0000   Mon, 16 Sep 2024 11:11:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-679624
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 68cf6edacc48492dad36911d3d7a1ae0
	  System UUID:                cc7366e5-b963-44cb-99a5-daef6ab18709
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-dmv6t                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-679624                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-78kp5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-embed-certs-679624             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-679624    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-bt6k2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-embed-certs-679624             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 metrics-server-6867b74b74-qgvl9               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         1s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21s                kube-proxy       
	  Normal   NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node embed-certs-679624 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s (x7 over 34s)  kubelet          Node embed-certs-679624 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s (x7 over 34s)  kubelet          Node embed-certs-679624 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 28s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 28s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  28s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  27s                kubelet          Node embed-certs-679624 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27s                kubelet          Node embed-certs-679624 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27s                kubelet          Node embed-certs-679624 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           24s                node-controller  Node embed-certs-679624 event: Registered Node embed-certs-679624 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +1.003295] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000012] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003959] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +2.011810] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000005] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000001] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +4.063628] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000008] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000030] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000007] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003992] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +8.187268] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.000063] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000005] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
	[  +0.003939] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-2cc59d4eff80
	[  +0.000006] ll header: 00000000: 02 42 1a e2 22 6c 02 42 c0 a8 5e 02 08 00
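
The martian-source spam above is the host kernel flagging packets from a pod address (10.244.0.2) to the service ClusterIP (10.96.0.1) that arrive on a Docker bridge it does not expect them on; with log_martians enabled this is chatty but usually harmless in Docker-in-Docker CI. The relevant knobs can be inspected on the host (plain sysctl, not part of the test run; the bridge name is taken from the log):

    sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians
    sysctl net.ipv4.conf.br-2cc59d4eff80.rp_filter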
	
	
	==> etcd [e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0] <==
	{"level":"info","ts":"2024-09-16T11:11:35.660657Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:11:35.660927Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:11:35.660956Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:11:35.661023Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-09-16T11:11:35.661042Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-09-16T11:11:36.545011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:11:36.545063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:11:36.545117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2024-09-16T11:11:36.545137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.545150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.545165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.545180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.546198Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.546663Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:11:36.546683Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:11:36.546665Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:embed-certs-679624 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:11:36.546933Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:11:36.546964Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:11:36.547066Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.547154Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.547183Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.548000Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:11:36.548092Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:11:36.548901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2024-09-16T11:11:36.549253Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:12:08 up 54 min,  0 users,  load average: 2.59, 3.17, 2.23
	Linux embed-certs-679624 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6] <==
	I0916 11:11:47.021998       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:11:47.023989       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0916 11:11:47.024566       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:11:47.025534       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:11:47.025627       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:11:47.420585       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:11:47.421021       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:11:47.421117       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:11:47.627002       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:11:47.627034       1 metrics.go:61] Registering metrics
	I0916 11:11:47.627087       1 controller.go:374] Syncing nftables rules
	I0916 11:11:57.424285       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:11:57.424361       1 main.go:299] handling current node
	I0916 11:12:07.420876       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:12:07.420920       1 main.go:299] handling current node
	
	
	==> kube-apiserver [debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a] <==
	E0916 11:12:07.209407       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 11:12:07.210538       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0916 11:12:07.270691       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.98.201.195"}
	W0916 11:12:07.320987       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:12:07.321061       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:12:07.327950       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:12:07.328018       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:12:08.207093       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 11:12:08.207093       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:12:08.207163       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 11:12:08.207195       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 11:12:08.208211       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:12:08.208230       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
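
The repeated 503s are the aggregation layer failing to fetch the OpenAPI spec from v1beta1.metrics.k8s.io, which is expected while the metrics-server pod is stuck on the unresolvable fake.domain image. The APIService registration state could be confirmed directly (standard kubectl, usable only once a working kubectl is in place, which the post-mortem below shows was not the case here):

    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl -n kube-system get endpoints metrics-server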
	
	
	==> kube-controller-manager [98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32] <==
	I0916 11:11:44.931380       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:11:44.937542       1 shared_informer.go:320] Caches are synced for deployment
	I0916 11:11:44.943920       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 11:11:45.325629       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:11:45.408258       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:11:45.408287       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:11:45.828842       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="111.978923ms"
	I0916 11:11:45.842449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="13.539417ms"
	I0916 11:11:45.842559       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.208µs"
	I0916 11:11:45.843676       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="53.216µs"
	I0916 11:11:46.851046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="15.841905ms"
	I0916 11:11:46.858766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.657412ms"
	I0916 11:11:46.859483       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="165.208µs"
	I0916 11:11:47.957358       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.062µs"
	I0916 11:11:47.964349       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="64.093µs"
	I0916 11:11:47.965886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.029µs"
	I0916 11:11:51.248649       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-679624"
	I0916 11:12:00.965845       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="117.386µs"
	I0916 11:12:00.983957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.090341ms"
	I0916 11:12:00.984089       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.88µs"
	I0916 11:12:07.235725       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="15.628399ms"
	I0916 11:12:07.245675       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="9.846104ms"
	I0916 11:12:07.245772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="46.356µs"
	I0916 11:12:07.253072       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="49.079µs"
	I0916 11:12:07.979844       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="134.858µs"
	
	
	==> kube-proxy [c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae] <==
	I0916 11:11:46.629316       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:11:46.830532       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.85.2"]
	E0916 11:11:46.830628       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:11:46.926994       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:11:46.927247       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:11:46.930151       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:11:46.930796       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:11:46.930829       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:11:46.932160       1 config.go:199] "Starting service config controller"
	I0916 11:11:46.932195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:11:46.932254       1 config.go:328] "Starting node config controller"
	I0916 11:11:46.932264       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:11:46.932283       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:11:46.932300       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:11:47.033501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:11:47.033621       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:11:47.033942       1 shared_informer.go:320] Caches are synced for node config
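
The lone error in this section is advisory: with nodePortAddresses unset, NodePort services accept connections on every local IP, and kube-proxy itself suggests --nodeport-addresses primary as the narrower setting. Where NodePorts actually land can be checked on the node (iptables mode per the log; KUBE-NODEPORTS is the standard chain name):

    sudo iptables -t nat -L KUBE-NODEPORTS -n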
	
	
	==> kube-scheduler [7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10] <==
	W0916 11:11:38.120528       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:11:38.120569       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:38.120569       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 11:11:38.120569       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:11:38.120610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0916 11:11:38.120610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:38.120674       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:11:38.120697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:38.918573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:38.918616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.040886       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:11:39.040945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.113732       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:11:39.113779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.119266       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:39.119303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.126330       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:11:39.126368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.133675       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:39.133725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.158407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:11:39.158460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.324525       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:11:39.324580       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:11:41.243501       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
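
All of the forbidden errors above predate RBAC propagation during the 11:11:38-41 startup window and resolve on their own; the closing "Caches are synced" line is the signal that matters. After the fact, the scheduler's permissions can be spot-checked with impersonation (standard kubectl; the user name is the Kubernetes default):

    kubectl auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler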
	
	
	==> kubelet <==
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.243111    1613 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\": failed to find network info for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\"" pod="kube-system/coredns-7c65d6cfc9-x4f6n"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.243138    1613 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\": failed to find network info for sandbox \"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\"" pod="kube-system/coredns-7c65d6cfc9-x4f6n"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: E0916 11:11:46.243192    1613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-x4f6n_kube-system(281fa9a8-3479-46dc-a1df-9dc1d7985344)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-x4f6n_kube-system(281fa9a8-3479-46dc-a1df-9dc1d7985344)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\\\": failed to find network info for sandbox \\\"3250d92c6cbeccfb0153a110aede6c0d9e789dabaa3cf4ac553050e5d10b4d04\\\"\"" pod="kube-system/coredns-7c65d6cfc9-x4f6n" podUID="281fa9a8-3479-46dc-a1df-9dc1d7985344"
	Sep 16 11:11:46 embed-certs-679624 kubelet[1613]: I0916 11:11:46.936307    1613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bt6k2" podStartSLOduration=1.9362803259999999 podStartE2EDuration="1.936280326s" podCreationTimestamp="2024-09-16 11:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:11:46.93539239 +0000 UTC m=+6.230931077" watchObservedRunningTime="2024-09-16 11:11:46.936280326 +0000 UTC m=+6.231819013"
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.042983    1613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-78kp5" podStartSLOduration=2.042955881 podStartE2EDuration="2.042955881s" podCreationTimestamp="2024-09-16 11:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:11:47.027232044 +0000 UTC m=+6.322770729" watchObservedRunningTime="2024-09-16 11:11:47.042955881 +0000 UTC m=+6.338494569"
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.128660    1613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/281fa9a8-3479-46dc-a1df-9dc1d7985344-config-volume\") pod \"281fa9a8-3479-46dc-a1df-9dc1d7985344\" (UID: \"281fa9a8-3479-46dc-a1df-9dc1d7985344\") "
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.128726    1613 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn5kr\" (UniqueName: \"kubernetes.io/projected/281fa9a8-3479-46dc-a1df-9dc1d7985344-kube-api-access-mn5kr\") pod \"281fa9a8-3479-46dc-a1df-9dc1d7985344\" (UID: \"281fa9a8-3479-46dc-a1df-9dc1d7985344\") "
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.129072    1613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/281fa9a8-3479-46dc-a1df-9dc1d7985344-config-volume" (OuterVolumeSpecName: "config-volume") pod "281fa9a8-3479-46dc-a1df-9dc1d7985344" (UID: "281fa9a8-3479-46dc-a1df-9dc1d7985344"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.131020    1613 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/281fa9a8-3479-46dc-a1df-9dc1d7985344-kube-api-access-mn5kr" (OuterVolumeSpecName: "kube-api-access-mn5kr") pod "281fa9a8-3479-46dc-a1df-9dc1d7985344" (UID: "281fa9a8-3479-46dc-a1df-9dc1d7985344"). InnerVolumeSpecName "kube-api-access-mn5kr". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.229070    1613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhtxr\" (UniqueName: \"kubernetes.io/projected/3b5477b8-ac39-4acc-9e16-a13a7b1d3e10-kube-api-access-rhtxr\") pod \"storage-provisioner\" (UID: \"3b5477b8-ac39-4acc-9e16-a13a7b1d3e10\") " pod="kube-system/storage-provisioner"
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.229155    1613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3b5477b8-ac39-4acc-9e16-a13a7b1d3e10-tmp\") pod \"storage-provisioner\" (UID: \"3b5477b8-ac39-4acc-9e16-a13a7b1d3e10\") " pod="kube-system/storage-provisioner"
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.229198    1613 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/281fa9a8-3479-46dc-a1df-9dc1d7985344-config-volume\") on node \"embed-certs-679624\" DevicePath \"\""
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.229220    1613 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mn5kr\" (UniqueName: \"kubernetes.io/projected/281fa9a8-3479-46dc-a1df-9dc1d7985344-kube-api-access-mn5kr\") on node \"embed-certs-679624\" DevicePath \"\""
	Sep 16 11:11:47 embed-certs-679624 kubelet[1613]: I0916 11:11:47.947516    1613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=0.947491757 podStartE2EDuration="947.491757ms" podCreationTimestamp="2024-09-16 11:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:11:47.947191961 +0000 UTC m=+7.242730646" watchObservedRunningTime="2024-09-16 11:11:47.947491757 +0000 UTC m=+7.243030463"
	Sep 16 11:11:48 embed-certs-679624 kubelet[1613]: I0916 11:11:48.838386    1613 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="281fa9a8-3479-46dc-a1df-9dc1d7985344" path="/var/lib/kubelet/pods/281fa9a8-3479-46dc-a1df-9dc1d7985344/volumes"
	Sep 16 11:11:51 embed-certs-679624 kubelet[1613]: I0916 11:11:51.238671    1613 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 11:11:51 embed-certs-679624 kubelet[1613]: I0916 11:11:51.239550    1613 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 11:12:00 embed-certs-679624 kubelet[1613]: I0916 11:12:00.977086    1613 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dmv6t" podStartSLOduration=15.977061402 podStartE2EDuration="15.977061402s" podCreationTimestamp="2024-09-16 11:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:12:00.966248932 +0000 UTC m=+20.261787617" watchObservedRunningTime="2024-09-16 11:12:00.977061402 +0000 UTC m=+20.272600088"
	Sep 16 11:12:07 embed-certs-679624 kubelet[1613]: I0916 11:12:07.341769    1613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs42k\" (UniqueName: \"kubernetes.io/projected/b0d684f3-ff91-4996-8d9d-23936b12c814-kube-api-access-zs42k\") pod \"metrics-server-6867b74b74-qgvl9\" (UID: \"b0d684f3-ff91-4996-8d9d-23936b12c814\") " pod="kube-system/metrics-server-6867b74b74-qgvl9"
	Sep 16 11:12:07 embed-certs-679624 kubelet[1613]: I0916 11:12:07.341818    1613 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b0d684f3-ff91-4996-8d9d-23936b12c814-tmp-dir\") pod \"metrics-server-6867b74b74-qgvl9\" (UID: \"b0d684f3-ff91-4996-8d9d-23936b12c814\") " pod="kube-system/metrics-server-6867b74b74-qgvl9"
	Sep 16 11:12:07 embed-certs-679624 kubelet[1613]: E0916 11:12:07.653732    1613 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 16 11:12:07 embed-certs-679624 kubelet[1613]: E0916 11:12:07.653811    1613 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 16 11:12:07 embed-certs-679624 kubelet[1613]: E0916 11:12:07.654026    1613 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zs42k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-qgvl9_kube-system(b0d684f3-ff91-4996-8d9d-23936b12c814): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" logger="UnhandledError"
	Sep 16 11:12:07 embed-certs-679624 kubelet[1613]: E0916 11:12:07.655212    1613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-6867b74b74-qgvl9" podUID="b0d684f3-ff91-4996-8d9d-23936b12c814"
	Sep 16 11:12:07 embed-certs-679624 kubelet[1613]: E0916 11:12:07.970179    1613 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qgvl9" podUID="b0d684f3-ff91-4996-8d9d-23936b12c814"
	
	
	==> storage-provisioner [f590d121c5d6d1937cefe41166b62f9f511f34157bec2b92d38767e94a5b8ba8] <==
	I0916 11:11:47.640871       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:11:47.650046       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:11:47.650086       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:11:47.659227       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:11:47.659353       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af47b140-7661-4805-8791-5af1e81aebf7", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-679624_78a7dd88-7eba-4513-b3b6-513ea370e3ab became leader
	I0916 11:11:47.659420       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-679624_78a7dd88-7eba-4513-b3b6-513ea370e3ab!
	I0916 11:11:47.760481       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-679624_78a7dd88-7eba-4513-b3b6-513ea370e3ab!
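
The provisioner came up cleanly and took its leader lease on the k8s.io-minikube-hostpath Endpoints object named in the event above; that object can be inspected directly (standard kubectl; the name is taken from the log):

    kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml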
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-679624 -n embed-certs-679624
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-679624 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context embed-certs-679624 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (560.313µs)
helpers_test.go:263: kubectl --context embed-certs-679624 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.60s)
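
Note that the real failure in this post-mortem is not in the cluster at all: every kubectl call dies with fork/exec /usr/local/bin/kubectl: exec format error, meaning the kernel refused to execute the binary (wrong architecture, truncated download, or a non-binary file such as an HTML error page saved in its place). A hypothetical host-side triage, using only standard tools (none of this is from the test run):

    file /usr/local/bin/kubectl     # expect: ELF 64-bit LSB executable, x86-64
    uname -m                        # host architecture; must match the binary
    ls -l /usr/local/bin/kubectl    # a zero-byte file fails the same way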

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (3.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-006978 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-006978 create -f testdata/busybox.yaml: fork/exec /usr/local/bin/kubectl: exec format error (616.802µs)
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-006978 create -f testdata/busybox.yaml failed: fork/exec /usr/local/bin/kubectl: exec format error
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-006978
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-006978:

-- stdout --
	[
	    {
	        "Id": "92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751",
	        "Created": "2024-09-16T11:12:40.853683512Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 303954,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:12:40.986877852Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/hostname",
	        "HostsPath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/hosts",
	        "LogPath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751-json.log",
	        "Name": "/default-k8s-diff-port-006978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-006978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-006978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-006978",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-006978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-006978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-006978",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-006978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "962cb6cc39d91026d44c9cf4daa9dd57b47deeb7041f7aa51db91e46b312ce38",
	            "SandboxKey": "/var/run/docker/netns/962cb6cc39d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-006978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "77357235afcef96415382e78c67fcc53123318fac9325f81acae0f265d8eb86e",
	                    "EndpointID": "b4935c247e07031b1781430c46c0d3d9e9e0bcc8919b1161f872a08294783641",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-006978",
	                        "92220cda3aab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
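
Later in this run minikube reads the published SSH port back out of the same structure with a Go template (the docker container inspect -f invocation over NetworkSettings.Ports, visible in the Last Start log below). The equivalent extraction in plain Go, as a sketch that decodes `docker inspect <name>` output piped to stdin:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Only the fields needed from the `docker inspect` JSON array shown above.
    type container struct {
        Name            string
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIp   string
                HostPort string
            }
        }
    }

    func main() {
        var out []container // docker inspect always returns a JSON array
        if err := json.NewDecoder(os.Stdin).Decode(&out); err != nil {
            fmt.Fprintln(os.Stderr, "decode:", err)
            os.Exit(1)
        }
        if len(out) == 0 {
            fmt.Fprintln(os.Stderr, "no such container")
            os.Exit(1)
        }
        for _, b := range out[0].NetworkSettings.Ports["22/tcp"] {
            fmt.Printf("%s ssh -> %s:%s\n", out[0].Name, b.HostIp, b.HostPort)
        }
    }

Piping `docker inspect default-k8s-diff-port-006978` through this would print the 127.0.0.1:33088 binding seen in the inspect output above.
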
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-006978 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-006978 logs -n 25: (1.098439305s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | force-systemd-flag-917705                              | force-systemd-flag-917705    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                                |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| delete  | -p force-systemd-flag-917705                           | force-systemd-flag-917705    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| ssh     | cert-options-840054 ssh                                | cert-options-840054          | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-840054 -- sudo                         | cert-options-840054          | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-840054                                 | cert-options-840054          | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-349453             | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-349453                  | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-371039        | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-371039                              | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-371039             | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-021107                              | cert-expiration-021107       | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-021107                              | cert-expiration-021107       | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	| start   | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-679624            | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-679624                 | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911    | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	| delete  | -p                                                     | disable-driver-mounts-852440 | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | disable-driver-mounts-852440                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:13 UTC |
	|         | default-k8s-diff-port-006978                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:12:33
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:12:33.188304  303072 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:12:33.188581  303072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:12:33.188593  303072 out.go:358] Setting ErrFile to fd 2...
	I0916 11:12:33.188598  303072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:12:33.188783  303072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:12:33.189413  303072 out.go:352] Setting JSON to false
	I0916 11:12:33.190969  303072 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3297,"bootTime":1726481856,"procs":383,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:12:33.191086  303072 start.go:139] virtualization: kvm guest
	I0916 11:12:33.193702  303072 out.go:177] * [default-k8s-diff-port-006978] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:12:33.195341  303072 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:12:33.195410  303072 notify.go:220] Checking for updates...
	I0916 11:12:33.198592  303072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:12:33.199962  303072 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:12:33.201287  303072 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:12:33.202689  303072 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:12:33.204109  303072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:12:33.206233  303072 config.go:182] Loaded profile config "embed-certs-679624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:33.206402  303072 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:33.206535  303072 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:12:33.206656  303072 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:12:33.233320  303072 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:12:33.233448  303072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:12:33.298402  303072 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:12:33.288078298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:12:33.298570  303072 docker.go:318] overlay module found
	I0916 11:12:33.301953  303072 out.go:177] * Using the docker driver based on user configuration
	I0916 11:12:33.303332  303072 start.go:297] selected driver: docker
	I0916 11:12:33.303349  303072 start.go:901] validating driver "docker" against <nil>
	I0916 11:12:33.303362  303072 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:12:33.304321  303072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:12:33.369824  303072 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:12:33.356912728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:12:33.370078  303072 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:12:33.370327  303072 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:12:33.372698  303072 out.go:177] * Using Docker driver with root privileges
	I0916 11:12:33.374242  303072 cni.go:84] Creating CNI manager for ""
	I0916 11:12:33.374302  303072 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:12:33.374313  303072 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:12:33.374391  303072 start.go:340] cluster config:
	{Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:12:33.375879  303072 out.go:177] * Starting "default-k8s-diff-port-006978" primary control-plane node in "default-k8s-diff-port-006978" cluster
	I0916 11:12:33.377330  303072 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:12:33.378788  303072 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:12:33.380265  303072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:12:33.380313  303072 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 11:12:33.380331  303072 cache.go:56] Caching tarball of preloaded images
	I0916 11:12:33.380387  303072 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:12:33.380431  303072 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 11:12:33.380447  303072 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 11:12:33.380593  303072 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/config.json ...
	I0916 11:12:33.380632  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/config.json: {Name:mk8dc034cf5d1663f163d44cacb1db0a697f761d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:12:33.405013  303072 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:12:33.405039  303072 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:12:33.405136  303072 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:12:33.405159  303072 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:12:33.405165  303072 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:12:33.405174  303072 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:12:33.405185  303072 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:12:33.466107  303072 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:12:33.466152  303072 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:12:33.466198  303072 start.go:360] acquireMachinesLock for default-k8s-diff-port-006978: {Name:mke54f99fcd9e320f7c2bc8102220e65af70efd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:12:33.466306  303072 start.go:364] duration metric: took 80.59µs to acquireMachinesLock for "default-k8s-diff-port-006978"
	I0916 11:12:33.466338  303072 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:12:33.466439  303072 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:12:30.238757  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:32.239283  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:30.649566  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:33.149628  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:33.273175  283294 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:35.769915  283294 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:33.469116  303072 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:12:33.469464  303072 start.go:159] libmachine.API.Create for "default-k8s-diff-port-006978" (driver="docker")
	I0916 11:12:33.469509  303072 client.go:168] LocalClient.Create starting
	I0916 11:12:33.469613  303072 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 11:12:33.469669  303072 main.go:141] libmachine: Decoding PEM data...
	I0916 11:12:33.469693  303072 main.go:141] libmachine: Parsing certificate...
	I0916 11:12:33.469766  303072 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 11:12:33.469792  303072 main.go:141] libmachine: Decoding PEM data...
	I0916 11:12:33.469803  303072 main.go:141] libmachine: Parsing certificate...
	I0916 11:12:33.470217  303072 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-006978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:12:33.489304  303072 cli_runner.go:211] docker network inspect default-k8s-diff-port-006978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:12:33.489379  303072 network_create.go:284] running [docker network inspect default-k8s-diff-port-006978] to gather additional debugging logs...
	I0916 11:12:33.489410  303072 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-006978
	W0916 11:12:33.511223  303072 cli_runner.go:211] docker network inspect default-k8s-diff-port-006978 returned with exit code 1
	I0916 11:12:33.511273  303072 network_create.go:287] error running [docker network inspect default-k8s-diff-port-006978]: docker network inspect default-k8s-diff-port-006978: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-006978 not found
	I0916 11:12:33.511289  303072 network_create.go:289] output of [docker network inspect default-k8s-diff-port-006978]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-006978 not found
	
	** /stderr **
	I0916 11:12:33.511384  303072 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:12:33.531001  303072 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c95c64bb41bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bd:76:ab:c4} reservation:<nil>}
	I0916 11:12:33.532177  303072 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fad43aa9929b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:3f:61:fd} reservation:<nil>}
	I0916 11:12:33.533347  303072 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-49585fce923a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ad:45:94:54} reservation:<nil>}
	I0916 11:12:33.534648  303072 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bd36e0}
	I0916 11:12:33.534681  303072 network_create.go:124] attempt to create docker network default-k8s-diff-port-006978 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0916 11:12:33.534740  303072 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-006978 default-k8s-diff-port-006978
	I0916 11:12:33.610090  303072 network_create.go:108] docker network default-k8s-diff-port-006978 192.168.76.0/24 created
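
The three "skipping subnet ... that is taken" lines above show minikube's free-subnet scan: it starts at 192.168.49.0/24 and, per the sequence in this log, steps by 9 in the third octet until it finds a /24 that is not in use. A simplified sketch of that scan, assuming only local interface addresses need to be consulted (the real network package also checks Docker networks and holds reservations):

    package main

    import (
        "fmt"
        "net"
    )

    // firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... (the
    // step-9 sequence seen in the log) and returns the first /24 whose
    // gateway address is not covered by an existing local interface.
    func firstFreeSubnet() (string, error) {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return "", err
        }
        for third := 49; third <= 247; third += 9 {
            gw := net.IPv4(192, 168, byte(third), 1)
            taken := false
            for _, a := range addrs {
                if ipn, ok := a.(*net.IPNet); ok && ipn.Contains(gw) {
                    taken = true
                    break
                }
            }
            if !taken {
                return fmt.Sprintf("192.168.%d.0/24", third), nil
            }
        }
        return "", fmt.Errorf("no free 192.168.x.0/24 subnet")
    }

    func main() {
        fmt.Println(firstFreeSubnet())
    }

On this agent, with 49, 58, and 67 already claimed by the other profiles, the scan lands on 192.168.76.0/24, matching the log.
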
	I0916 11:12:33.610127  303072 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-006978" container
	I0916 11:12:33.610214  303072 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:12:33.632805  303072 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-006978 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-006978 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:12:33.655257  303072 oci.go:103] Successfully created a docker volume default-k8s-diff-port-006978
	I0916 11:12:33.655345  303072 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-006978-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-006978 --entrypoint /usr/bin/test -v default-k8s-diff-port-006978:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:12:34.731781  303072 cli_runner.go:217] Completed: docker run --rm --name default-k8s-diff-port-006978-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-006978 --entrypoint /usr/bin/test -v default-k8s-diff-port-006978:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (1.076336693s)
	I0916 11:12:34.731816  303072 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-006978
	I0916 11:12:34.731846  303072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:12:34.731872  303072 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:12:34.731946  303072 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-006978:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 11:12:34.739277  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:37.239310  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:35.149722  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:37.650375  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:37.770648  283294 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:37.770673  283294 pod_ready.go:82] duration metric: took 11.007535908s for pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:37.770686  283294 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:39.777546  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:40.784380  303072 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-006978:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (6.052385534s)
	I0916 11:12:40.784418  303072 kic.go:203] duration metric: took 6.052542506s to extract preloaded images to volume ...
	W0916 11:12:40.784564  303072 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:12:40.784661  303072 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:12:40.837569  303072 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-006978 --name default-k8s-diff-port-006978 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-006978 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-006978 --network default-k8s-diff-port-006978 --ip 192.168.76.2 --volume default-k8s-diff-port-006978:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:12:41.151535  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Running}}
	I0916 11:12:41.171308  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:41.190871  303072 cli_runner.go:164] Run: docker exec default-k8s-diff-port-006978 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:12:41.233480  303072 oci.go:144] the created container "default-k8s-diff-port-006978" has a running status.
	I0916 11:12:41.233522  303072 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa...
	I0916 11:12:41.414049  303072 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:12:41.436388  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:41.455455  303072 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:12:41.455481  303072 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-006978 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:12:41.511490  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:41.540142  303072 machine.go:93] provisionDockerMachine start ...
	I0916 11:12:41.540258  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:41.563377  303072 main.go:141] libmachine: Using SSH client type: native
	I0916 11:12:41.563597  303072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:12:41.563607  303072 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:12:41.821654  303072 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-006978
	
	I0916 11:12:41.821689  303072 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-006978"
	I0916 11:12:41.821753  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:41.840337  303072 main.go:141] libmachine: Using SSH client type: native
	I0916 11:12:41.840544  303072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:12:41.840564  303072 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-006978 && echo "default-k8s-diff-port-006978" | sudo tee /etc/hostname
	I0916 11:12:41.992032  303072 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-006978
	
	I0916 11:12:41.992120  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.009447  303072 main.go:141] libmachine: Using SSH client type: native
	I0916 11:12:42.009695  303072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:12:42.009733  303072 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-006978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-006978/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-006978' | sudo tee -a /etc/hosts; 
				fi
			fi
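
The shell fragment above is an idempotent /etc/hosts patch: it rewrites an existing 127.0.1.1 entry or appends one, so rerunning it never duplicates lines. Whether it took effect can be checked with (hostname taken from the log; getent consults /etc/hosts via nsswitch):

	grep -n '127.0.1.1' /etc/hosts
	getent hosts default-k8s-diff-port-006978
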
	I0916 11:12:42.148459  303072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:12:42.148487  303072 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:12:42.148510  303072 ubuntu.go:177] setting up certificates
	I0916 11:12:42.148538  303072 provision.go:84] configureAuth start
	I0916 11:12:42.148598  303072 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-006978
	I0916 11:12:42.166372  303072 provision.go:143] copyHostCerts
	I0916 11:12:42.166428  303072 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:12:42.166436  303072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:12:42.166501  303072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:12:42.166586  303072 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:12:42.166595  303072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:12:42.166621  303072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:12:42.166674  303072 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:12:42.166682  303072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:12:42.166703  303072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:12:42.166753  303072 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-006978 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-006978 localhost minikube]
	I0916 11:12:42.306401  303072 provision.go:177] copyRemoteCerts
	I0916 11:12:42.306461  303072 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:12:42.306495  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.323490  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:42.420814  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:12:42.443662  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0916 11:12:42.466807  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 11:12:42.490564  303072 provision.go:87] duration metric: took 342.007302ms to configureAuth
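
configureAuth generated a server certificate whose SANs must cover every address used to reach the daemon (the san=[...] list logged above). If a TLS handshake failure is suspected, the SANs on the generated cert can be inspected with openssl, using the server.pem path from the log:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
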
	I0916 11:12:42.490593  303072 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:12:42.490820  303072 config.go:182] Loaded profile config "default-k8s-diff-port-006978": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:42.490837  303072 machine.go:96] duration metric: took 950.665124ms to provisionDockerMachine
	I0916 11:12:42.490846  303072 client.go:171] duration metric: took 9.021328095s to LocalClient.Create
	I0916 11:12:42.490871  303072 start.go:167] duration metric: took 9.02141907s to libmachine.API.Create "default-k8s-diff-port-006978"
	I0916 11:12:42.490884  303072 start.go:293] postStartSetup for "default-k8s-diff-port-006978" (driver="docker")
	I0916 11:12:42.490896  303072 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:12:42.490957  303072 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:12:42.491009  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.508314  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:42.606294  303072 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:12:42.609598  303072 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:12:42.609636  303072 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:12:42.609645  303072 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:12:42.609651  303072 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:12:42.609662  303072 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:12:42.609720  303072 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:12:42.609807  303072 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:12:42.609896  303072 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:12:42.618062  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:12:42.641241  303072 start.go:296] duration metric: took 150.341833ms for postStartSetup
	I0916 11:12:42.641601  303072 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-006978
	I0916 11:12:42.660638  303072 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/config.json ...
	I0916 11:12:42.660910  303072 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:12:42.660959  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.681352  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:42.773016  303072 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:12:42.777639  303072 start.go:128] duration metric: took 9.311183024s to createHost
	I0916 11:12:42.777671  303072 start.go:83] releasing machines lock for "default-k8s-diff-port-006978", held for 9.311348572s
	I0916 11:12:42.777730  303072 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-006978
	I0916 11:12:42.794671  303072 ssh_runner.go:195] Run: cat /version.json
	I0916 11:12:42.794729  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.794734  303072 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:12:42.794809  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.812760  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:42.812961  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:42.983965  303072 ssh_runner.go:195] Run: systemctl --version
	I0916 11:12:42.988168  303072 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:12:42.992468  303072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:12:43.016957  303072 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:12:43.017041  303072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:12:43.045266  303072 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 11:12:43.045297  303072 start.go:495] detecting cgroup driver to use...
	I0916 11:12:43.045326  303072 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:12:43.045377  303072 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:12:43.057420  303072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:12:43.068346  303072 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:12:43.068404  303072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:12:43.081261  303072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:12:43.094734  303072 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:12:43.175775  303072 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:12:39.737995  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:41.742098  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:43.261975  303072 docker.go:233] disabling docker service ...
	I0916 11:12:43.262038  303072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:12:43.282995  303072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:12:43.295522  303072 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:12:43.379559  303072 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:12:43.459884  303072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:12:43.472400  303072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:12:43.487862  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:12:43.497717  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:12:43.507197  303072 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:12:43.507271  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:12:43.516769  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:12:43.526040  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:12:43.535489  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:12:43.545566  303072 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:12:43.554727  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:12:43.564652  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:12:43.574313  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 11:12:43.584261  303072 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:12:43.592172  303072 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:12:43.600336  303072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:43.675960  303072 ssh_runner.go:195] Run: sudo systemctl restart containerd
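
The sed sequence above rewrites /etc/containerd/config.toml in place: the pause image is pinned to registry.k8s.io/pause:3.10, SystemdCgroup is forced to false (matching the cgroupfs driver detected on the host), legacy runc v1 runtimes are mapped to io.containerd.runc.v2, and unprivileged ports are enabled. After the restart, the effective values can be spot-checked inside the node with:

	# values written by the sed edits above
	grep -En 'sandbox_image|SystemdCgroup|enable_unprivileged_ports' /etc/containerd/config.toml
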
	I0916 11:12:43.777200  303072 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:12:43.777376  303072 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:12:43.781384  303072 start.go:563] Will wait 60s for crictl version
	I0916 11:12:43.781440  303072 ssh_runner.go:195] Run: which crictl
	I0916 11:12:43.784718  303072 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:12:43.817809  303072 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:12:43.817866  303072 ssh_runner.go:195] Run: containerd --version
	I0916 11:12:43.839994  303072 ssh_runner.go:195] Run: containerd --version
	I0916 11:12:43.868789  303072 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:12:40.149138  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:42.149777  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:44.150077  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:43.870264  303072 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-006978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:12:43.887693  303072 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0916 11:12:43.891552  303072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:12:43.902196  303072 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:12:43.902316  303072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:12:43.902363  303072 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:12:43.933503  303072 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:12:43.933524  303072 containerd.go:534] Images already preloaded, skipping extraction
	I0916 11:12:43.933574  303072 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:12:43.966712  303072 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:12:43.966739  303072 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:12:43.966750  303072 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.31.1 containerd true true} ...
	I0916 11:12:43.966868  303072 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-006978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:12:43.966924  303072 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:12:44.000346  303072 cni.go:84] Creating CNI manager for ""
	I0916 11:12:44.000368  303072 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:12:44.000378  303072 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:12:44.000397  303072 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-006978 NodeName:default-k8s-diff-port-006978 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:12:44.000529  303072 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-006978"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:12:44.000585  303072 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:12:44.009158  303072 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:12:44.009228  303072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:12:44.017370  303072 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0916 11:12:44.034166  303072 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:12:44.050711  303072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2182 bytes)
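
At this point the generated kubeadm config (the 2182-byte kubeadm.yaml.new above) is staged on the node. Recent kubeadm releases ship a built-in validator for such files; a minimal sketch against the staged file, assuming the "kubeadm config validate" subcommand is available in this kubeadm build:

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
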
	I0916 11:12:44.068227  303072 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:12:44.071437  303072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:12:44.081858  303072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:44.151949  303072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:12:44.165524  303072 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978 for IP: 192.168.76.2
	I0916 11:12:44.165552  303072 certs.go:194] generating shared ca certs ...
	I0916 11:12:44.165574  303072 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:44.165741  303072 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:12:44.165796  303072 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:12:44.165809  303072 certs.go:256] generating profile certs ...
	I0916 11:12:44.165876  303072 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.key
	I0916 11:12:44.165895  303072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt with IP's: []
	I0916 11:12:44.646752  303072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt ...
	I0916 11:12:44.646790  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: {Name:mk5fe57391c71a635bc2664646b46ecf8e7b30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:44.646990  303072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.key ...
	I0916 11:12:44.647008  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.key: {Name:mk8729c4f50d0dff1d65e22e9e0317a12cedc4f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:44.647122  303072 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key.9826bbf6
	I0916 11:12:44.647147  303072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt.9826bbf6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0916 11:12:44.927657  303072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt.9826bbf6 ...
	I0916 11:12:44.927689  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt.9826bbf6: {Name:mk507ad25443e7441acfbd74f84b0e53e00a318e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:44.927907  303072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key.9826bbf6 ...
	I0916 11:12:44.927928  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key.9826bbf6: {Name:mk9141b6cbd16a3cbc7444d9c738b092ec418bec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:44.928023  303072 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt.9826bbf6 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt
	I0916 11:12:44.928103  303072 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key.9826bbf6 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key
	I0916 11:12:44.928163  303072 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.key
	I0916 11:12:44.928181  303072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.crt with IP's: []
	I0916 11:12:45.016612  303072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.crt ...
	I0916 11:12:45.016645  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.crt: {Name:mkc1eab85fbe839e33e53386182e4a6afedec155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:45.016821  303072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.key ...
	I0916 11:12:45.016835  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.key: {Name:mk2b6e4ebf261029f43772640bda54fcc5f4921e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:45.017014  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:12:45.017057  303072 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:12:45.017069  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:12:45.017097  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:12:45.017125  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:12:45.017150  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:12:45.017223  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:12:45.018092  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:12:45.044144  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:12:45.069666  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:12:45.093991  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:12:45.118079  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 11:12:45.141384  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:12:45.165248  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:12:45.188695  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:12:45.211474  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:12:45.234691  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:12:45.260049  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:12:45.285743  303072 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:12:45.305003  303072 ssh_runner.go:195] Run: openssl version
	I0916 11:12:45.310350  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:12:45.319899  303072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:12:45.323233  303072 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:12:45.323295  303072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:12:45.330231  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:12:45.339365  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:12:45.348101  303072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:45.351431  303072 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:45.351477  303072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:45.358772  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:12:45.368221  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:12:45.377642  303072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:12:45.381484  303072 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:12:45.381557  303072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:12:45.388377  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
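
The "openssl x509 -hash" / "ln -fs" pairs above implement OpenSSL's hashed-directory lookup: each CA in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0 (b5213941.0 for minikubeCA here). The same symlink can be built generically for any cert; a sketch using the minikubeCA path from the log:

	cert=/usr/share/ca-certificates/minikubeCA.pem
	# link name is the subject hash computed by openssl, suffix .0
	sudo ln -fs "$cert" /etc/ssl/certs/$(openssl x509 -hash -noout -in "$cert").0
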
	I0916 11:12:45.397630  303072 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:12:45.400839  303072 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:12:45.400901  303072 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:12:45.400985  303072 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:12:45.401042  303072 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:12:45.435932  303072 cri.go:89] found id: ""
	I0916 11:12:45.435998  303072 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:12:45.444621  303072 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:12:45.453465  303072 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:12:45.453540  303072 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:12:45.461982  303072 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:12:45.462009  303072 kubeadm.go:157] found existing configuration files:
	
	I0916 11:12:45.462058  303072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0916 11:12:45.471201  303072 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:12:45.471266  303072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:12:45.479819  303072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0916 11:12:45.488490  303072 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:12:45.488561  303072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:12:45.497296  303072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0916 11:12:45.505691  303072 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:12:45.505752  303072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:12:45.514164  303072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0916 11:12:45.522589  303072 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:12:45.522664  303072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:12:45.531758  303072 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
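
Because the node is itself a Docker container, the init above skips SystemVerification and several file, port, swap, and sysctl preflight checks (the --ignore-preflight-errors list). The same checks can be exercised in isolation, without bootstrapping anything, via kubeadm's preflight phase; a sketch under the same paths as the log:

	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml
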
	I0916 11:12:45.570283  303072 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:12:45.570392  303072 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:12:45.587093  303072 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:12:45.587194  303072 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:12:45.587259  303072 kubeadm.go:310] OS: Linux
	I0916 11:12:45.587364  303072 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:12:45.587437  303072 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:12:45.587506  303072 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:12:45.587575  303072 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:12:45.587661  303072 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:12:45.587775  303072 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:12:45.587850  303072 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:12:45.587917  303072 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:12:45.587985  303072 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:12:45.641793  303072 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:12:45.641930  303072 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:12:45.642053  303072 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:12:45.649963  303072 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:12:42.276706  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:44.277484  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:46.777255  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:45.652421  303072 out.go:235]   - Generating certificates and keys ...
	I0916 11:12:45.652552  303072 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:12:45.652614  303072 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:12:45.754915  303072 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:12:45.828601  303072 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:12:46.002610  303072 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:12:46.107849  303072 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:12:46.248800  303072 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:12:46.248969  303072 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-006978 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0916 11:12:46.391232  303072 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:12:46.391389  303072 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-006978 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0916 11:12:46.580271  303072 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:12:46.848235  303072 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:12:46.941777  303072 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:12:46.942069  303072 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:12:47.120540  303072 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:12:47.246837  303072 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:12:47.425713  303072 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:12:47.548056  303072 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:12:47.729107  303072 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:12:47.729658  303072 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:12:47.732291  303072 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:12:47.734653  303072 out.go:235]   - Booting up control plane ...
	I0916 11:12:47.734798  303072 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:12:47.734923  303072 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:12:47.735780  303072 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:12:47.746631  303072 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:12:47.752853  303072 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:12:47.752933  303072 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:12:47.837084  303072 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:12:47.837210  303072 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:12:44.237974  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:46.238843  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:48.738940  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:46.649823  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:48.650010  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:48.785616  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:51.277044  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:48.338749  303072 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.690295ms
	I0916 11:12:48.338844  303072 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:12:53.340729  303072 kubeadm.go:310] [api-check] The API server is healthy after 5.001910312s
	I0916 11:12:53.352535  303072 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:12:53.365374  303072 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:12:53.386596  303072 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:12:53.386790  303072 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-006978 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:12:53.394294  303072 kubeadm.go:310] [bootstrap-token] Using token: 21xlxs.cbzjnrzj5tox0go3
	I0916 11:12:51.240955  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:53.739148  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:53.395665  303072 out.go:235]   - Configuring RBAC rules ...
	I0916 11:12:53.395858  303072 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:12:53.400957  303072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:12:53.407354  303072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:12:53.410317  303072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:12:53.413111  303072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:12:53.415993  303072 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:12:53.748017  303072 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:12:54.173321  303072 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:12:54.747458  303072 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:12:54.748595  303072 kubeadm.go:310] 
	I0916 11:12:54.748708  303072 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:12:54.748726  303072 kubeadm.go:310] 
	I0916 11:12:54.748792  303072 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:12:54.748815  303072 kubeadm.go:310] 
	I0916 11:12:54.748848  303072 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:12:54.748905  303072 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:12:54.748949  303072 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:12:54.748956  303072 kubeadm.go:310] 
	I0916 11:12:54.749000  303072 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:12:54.749007  303072 kubeadm.go:310] 
	I0916 11:12:54.749052  303072 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:12:54.749058  303072 kubeadm.go:310] 
	I0916 11:12:54.749133  303072 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:12:54.749244  303072 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:12:54.749349  303072 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:12:54.749358  303072 kubeadm.go:310] 
	I0916 11:12:54.749466  303072 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:12:54.749577  303072 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:12:54.749587  303072 kubeadm.go:310] 
	I0916 11:12:54.749715  303072 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 21xlxs.cbzjnrzj5tox0go3 \
	I0916 11:12:54.749881  303072 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:12:54.749917  303072 kubeadm.go:310] 	--control-plane 
	I0916 11:12:54.749927  303072 kubeadm.go:310] 
	I0916 11:12:54.750057  303072 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:12:54.750065  303072 kubeadm.go:310] 
	I0916 11:12:54.750183  303072 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 21xlxs.cbzjnrzj5tox0go3 \
	I0916 11:12:54.750332  303072 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 11:12:54.753671  303072 kubeadm.go:310] W0916 11:12:45.567417    1138 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:12:54.753962  303072 kubeadm.go:310] W0916 11:12:45.568121    1138 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:12:54.754163  303072 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:12:54.754263  303072 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:12:54.754300  303072 cni.go:84] Creating CNI manager for ""
	I0916 11:12:54.754311  303072 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:12:54.756405  303072 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:12:50.650425  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:53.149646  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:53.776361  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:55.777283  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:54.757636  303072 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:12:54.761995  303072 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:12:54.762021  303072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:12:54.781299  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:12:54.997779  303072 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:12:54.997865  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:54.997925  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-006978 minikube.k8s.io/updated_at=2024_09_16T11_12_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=default-k8s-diff-port-006978 minikube.k8s.io/primary=true
	I0916 11:12:55.122417  303072 ops.go:34] apiserver oom_adj: -16
	I0916 11:12:55.122453  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:55.623042  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:56.123379  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:56.622627  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:57.122828  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:57.623249  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:58.122814  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:56.237961  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:58.238231  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:58.623435  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:59.122743  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:59.192533  303072 kubeadm.go:1113] duration metric: took 4.194732378s to wait for elevateKubeSystemPrivileges
	I0916 11:12:59.192570  303072 kubeadm.go:394] duration metric: took 13.791671494s to StartCluster
	I0916 11:12:59.192623  303072 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:59.192717  303072 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:12:59.194519  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:59.194804  303072 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:12:59.194808  303072 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:12:59.194880  303072 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:12:59.194998  303072 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-006978"
	I0916 11:12:59.195022  303072 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-006978"
	I0916 11:12:59.195023  303072 config.go:182] Loaded profile config "default-k8s-diff-port-006978": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:59.195039  303072 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-006978"
	I0916 11:12:59.195062  303072 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-006978"
	I0916 11:12:59.195069  303072 host.go:66] Checking if "default-k8s-diff-port-006978" exists ...
	I0916 11:12:59.195452  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:59.195657  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:59.196648  303072 out.go:177] * Verifying Kubernetes components...
	I0916 11:12:59.198253  303072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:59.227528  303072 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:12:55.150355  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:57.649659  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:59.650582  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:59.229043  303072 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:12:59.229072  303072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:12:59.229154  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:59.231097  303072 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-006978"
	I0916 11:12:59.231131  303072 host.go:66] Checking if "default-k8s-diff-port-006978" exists ...
	I0916 11:12:59.231438  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:59.260139  303072 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:12:59.260162  303072 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:12:59.260235  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:59.261484  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:59.286575  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:59.436277  303072 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:12:59.436326  303072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:12:59.548794  303072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:12:59.549612  303072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:12:59.965906  303072 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0916 11:12:59.967917  303072 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-006978" to be "Ready" ...
	I0916 11:13:00.025739  303072 node_ready.go:49] node "default-k8s-diff-port-006978" has status "Ready":"True"
	I0916 11:13:00.025770  303072 node_ready.go:38] duration metric: took 57.824532ms for node "default-k8s-diff-port-006978" to be "Ready" ...
	I0916 11:13:00.025783  303072 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:13:00.036420  303072 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-827n4" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:00.378574  303072 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 11:12:58.276870  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:00.277679  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:00.379868  303072 addons.go:510] duration metric: took 1.184988241s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:13:00.470687  303072 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-006978" context rescaled to 1 replicas
	I0916 11:13:01.540194  303072 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-827n4" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-827n4" not found
	I0916 11:13:01.540224  303072 pod_ready.go:82] duration metric: took 1.503769597s for pod "coredns-7c65d6cfc9-827n4" in "kube-system" namespace to be "Ready" ...
	E0916 11:13:01.540237  303072 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-827n4" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-827n4" not found
	I0916 11:13:01.540246  303072 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:00.239474  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:02.738165  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:02.148445  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:04.149068  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:02.776501  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:05.277166  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:03.545597  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:05.545656  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:08.045988  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:04.738774  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:07.238765  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:06.150520  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:08.649261  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:07.775965  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:09.776234  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:11.777149  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:10.545650  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:12.545787  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:09.738393  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:12.240211  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:11.148506  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:13.149270  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:14.276783  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:16.776281  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:14.545899  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:16.546214  303072 pod_ready.go:93] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.546237  303072 pod_ready.go:82] duration metric: took 15.005983715s for pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.546248  303072 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.550669  303072 pod_ready.go:93] pod "etcd-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.550693  303072 pod_ready.go:82] duration metric: took 4.439531ms for pod "etcd-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.550708  303072 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.554968  303072 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.554985  303072 pod_ready.go:82] duration metric: took 4.271061ms for pod "kube-apiserver-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.554994  303072 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.558971  303072 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.558989  303072 pod_ready.go:82] duration metric: took 3.989284ms for pod "kube-controller-manager-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.558999  303072 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2mcbv" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.562698  303072 pod_ready.go:93] pod "kube-proxy-2mcbv" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.562717  303072 pod_ready.go:82] duration metric: took 3.713096ms for pod "kube-proxy-2mcbv" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.562725  303072 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.944094  303072 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.944124  303072 pod_ready.go:82] duration metric: took 381.391034ms for pod "kube-scheduler-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.944135  303072 pod_ready.go:39] duration metric: took 16.918337249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:13:16.944166  303072 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:13:16.944236  303072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:13:16.955576  303072 api_server.go:72] duration metric: took 17.760737057s to wait for apiserver process to appear ...
	I0916 11:13:16.955602  303072 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:13:16.955640  303072 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0916 11:13:16.959362  303072 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0916 11:13:16.960401  303072 api_server.go:141] control plane version: v1.31.1
	I0916 11:13:16.960425  303072 api_server.go:131] duration metric: took 4.816984ms to wait for apiserver health ...
	I0916 11:13:16.960434  303072 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:13:17.147884  303072 system_pods.go:59] 8 kube-system pods found
	I0916 11:13:17.147928  303072 system_pods.go:61] "coredns-7c65d6cfc9-sc74v" [5655635d-c5e6-4043-b178-77f3df972e86] Running
	I0916 11:13:17.147935  303072 system_pods.go:61] "etcd-default-k8s-diff-port-006978" [d2e16749-ded9-4a12-9454-f66910bb9e5e] Running
	I0916 11:13:17.147948  303072 system_pods.go:61] "kindnet-njckk" [666b88e3-d80f-4bd7-b0a9-2cec72a365f0] Running
	I0916 11:13:17.147953  303072 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-006978" [81a8cd00-67af-4a7b-adcb-26da8bd1403a] Running
	I0916 11:13:17.147959  303072 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-006978" [76606226-2ee9-4661-a691-29edce3f8d0e] Running
	I0916 11:13:17.147964  303072 system_pods.go:61] "kube-proxy-2mcbv" [8fae6563-2965-49e7-96b5-b9813dc369e1] Running
	I0916 11:13:17.147969  303072 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-006978" [a69636f9-c121-4c9e-856e-600dc2fea787] Running
	I0916 11:13:17.147977  303072 system_pods.go:61] "storage-provisioner" [08708819-cf0d-4505-a1f0-5563be02bd8c] Running
	I0916 11:13:17.147985  303072 system_pods.go:74] duration metric: took 187.54485ms to wait for pod list to return data ...
	I0916 11:13:17.147996  303072 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:13:17.344472  303072 default_sa.go:45] found service account: "default"
	I0916 11:13:17.344500  303072 default_sa.go:55] duration metric: took 196.497574ms for default service account to be created ...
	I0916 11:13:17.344510  303072 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:13:17.546956  303072 system_pods.go:86] 8 kube-system pods found
	I0916 11:13:17.546985  303072 system_pods.go:89] "coredns-7c65d6cfc9-sc74v" [5655635d-c5e6-4043-b178-77f3df972e86] Running
	I0916 11:13:17.546990  303072 system_pods.go:89] "etcd-default-k8s-diff-port-006978" [d2e16749-ded9-4a12-9454-f66910bb9e5e] Running
	I0916 11:13:17.546995  303072 system_pods.go:89] "kindnet-njckk" [666b88e3-d80f-4bd7-b0a9-2cec72a365f0] Running
	I0916 11:13:17.546999  303072 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-006978" [81a8cd00-67af-4a7b-adcb-26da8bd1403a] Running
	I0916 11:13:17.547003  303072 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-006978" [76606226-2ee9-4661-a691-29edce3f8d0e] Running
	I0916 11:13:17.547006  303072 system_pods.go:89] "kube-proxy-2mcbv" [8fae6563-2965-49e7-96b5-b9813dc369e1] Running
	I0916 11:13:17.547009  303072 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-006978" [a69636f9-c121-4c9e-856e-600dc2fea787] Running
	I0916 11:13:17.547013  303072 system_pods.go:89] "storage-provisioner" [08708819-cf0d-4505-a1f0-5563be02bd8c] Running
	I0916 11:13:17.547018  303072 system_pods.go:126] duration metric: took 202.504183ms to wait for k8s-apps to be running ...
	I0916 11:13:17.547033  303072 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:13:17.547078  303072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:13:17.559046  303072 system_svc.go:56] duration metric: took 12.001345ms WaitForService to wait for kubelet
	I0916 11:13:17.559077  303072 kubeadm.go:582] duration metric: took 18.364243961s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:13:17.559097  303072 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:13:17.744389  303072 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:13:17.744416  303072 node_conditions.go:123] node cpu capacity is 8
	I0916 11:13:17.744431  303072 node_conditions.go:105] duration metric: took 185.330735ms to run NodePressure ...
	I0916 11:13:17.744442  303072 start.go:241] waiting for startup goroutines ...
	I0916 11:13:17.744448  303072 start.go:246] waiting for cluster config update ...
	I0916 11:13:17.744458  303072 start.go:255] writing updated cluster config ...
	I0916 11:13:17.744735  303072 ssh_runner.go:195] Run: rm -f paused
	I0916 11:13:17.750858  303072 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-006978" cluster and "default" namespace by default
	E0916 11:13:17.752103  303072 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
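
Note on the run log above: four concurrent test profiles (process IDs 274695, 298514, 283294 and 303072) write to this log at once, which is why timestamps jump backwards between lines. The repeated pod_ready.go:103 lines show the metrics-server pods polling as "Ready":"False" for the whole window, consistent with the metrics-server-related failures in the summary table, while process 303072 brings up the "default-k8s-diff-port-006978" profile successfully. For orientation, here is a minimal client-go sketch of this kind of readiness poll; the pod name, namespace, kubeconfig path, timeout and poll interval are illustrative values taken from the log, not minikube's actual pod_ready.go implementation:

    // Sketch only: poll a pod's Ready condition with client-go, in the
    // spirit of the pod_ready.go wait loop above. All names and timings
    // are placeholders.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        deadline := time.Now().Add(6 * time.Minute) // the waits above are 6m0s
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").
                Get(context.TODO(), "metrics-server-6867b74b74-zw8sx", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // the log polls on a roughly 2s cadence
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }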
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	308f1d6d730a2       c69fa2e9cbf5f       3 seconds ago       Running             coredns                   0                   a427aaf4dc7bf       coredns-7c65d6cfc9-sc74v
	6f355202fdbbe       6e38f40d628db       17 seconds ago      Running             storage-provisioner       0                   7bb5692bf820a       storage-provisioner
	3d2d679d3f920       12968670680f4       18 seconds ago      Running             kindnet-cni               0                   c9b1db2846501       kindnet-njckk
	947c3b3b00e44       60c005f310ff3       18 seconds ago      Running             kube-proxy                0                   d0095dc7cbd78       kube-proxy-2mcbv
	06406ac4e01c0       2e96e5913fc06       29 seconds ago      Running             etcd                      0                   6908ea2d82b0c       etcd-default-k8s-diff-port-006978
	3b1640b111894       9aa1fad941575       29 seconds ago      Running             kube-scheduler            0                   75eb18111b77e       kube-scheduler-default-k8s-diff-port-006978
	a085c20f4e6d1       175ffd71cce3d       29 seconds ago      Running             kube-controller-manager   0                   4e59876f0bb83       kube-controller-manager-default-k8s-diff-port-006978
	bdf3aa888730f       6bab7719df100       29 seconds ago      Running             kube-apiserver            0                   8f6d53f6f0c9d       kube-apiserver-default-k8s-diff-port-006978
	
	
	==> containerd <==
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.242014427Z" level=info msg="CreateContainer within sandbox \"c9b1db28465017b75476004f06e26da2011ee07762a7e0f1f1fbdb717fa9c9f3\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.256539912Z" level=info msg="CreateContainer within sandbox \"c9b1db28465017b75476004f06e26da2011ee07762a7e0f1f1fbdb717fa9c9f3\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.257579971Z" level=info msg="StartContainer for \"3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.438814926Z" level=info msg="StartContainer for \"3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3\" returns successfully"
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.682014335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:08708819-cf0d-4505-a1f0-5563be02bd8c,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.705460896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.705574112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.705591855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.705767960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.763243127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:08708819-cf0d-4505-a1f0-5563be02bd8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bb5692bf820af9b4baaf102e9bcee78709750dc811c3040ccf45c8afde3f945\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.766422496Z" level=info msg="CreateContainer within sandbox \"7bb5692bf820af9b4baaf102e9bcee78709750dc811c3040ccf45c8afde3f945\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.779827089Z" level=info msg="CreateContainer within sandbox \"7bb5692bf820af9b4baaf102e9bcee78709750dc811c3040ccf45c8afde3f945\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"6f355202fdbbeadd28797b1e19193703a5ee560921eef405227ef9296cb15a44\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.780483761Z" level=info msg="StartContainer for \"6f355202fdbbeadd28797b1e19193703a5ee560921eef405227ef9296cb15a44\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.834312428Z" level=info msg="StartContainer for \"6f355202fdbbeadd28797b1e19193703a5ee560921eef405227ef9296cb15a44\" returns successfully"
	Sep 16 11:13:04 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:04.282404342Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.036034724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sc74v,Uid:5655635d-c5e6-4043-b178-77f3df972e86,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.071775195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.071867312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.071884979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.071998114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.118805252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sc74v,Uid:5655635d-c5e6-4043-b178-77f3df972e86,Namespace:kube-system,Attempt:0,} returns sandbox id \"a427aaf4dc7bf56af8e59e4086f4715ac46e1c9db615ec7c033d1c861cc37441\""
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.121605030Z" level=info msg="CreateContainer within sandbox \"a427aaf4dc7bf56af8e59e4086f4715ac46e1c9db615ec7c033d1c861cc37441\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.134951310Z" level=info msg="CreateContainer within sandbox \"a427aaf4dc7bf56af8e59e4086f4715ac46e1c9db615ec7c033d1c861cc37441\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da\""
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.135602631Z" level=info msg="StartContainer for \"308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da\""
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.180235908Z" level=info msg="StartContainer for \"308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da\" returns successfully"
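
Cross-reference: the CreateContainer/StartContainer pairs in this containerd log produce exactly the containers listed in the "container status" table above; the truncated IDs in that table are 13-character prefixes of the full 64-hex IDs logged here (3d2d679d3f920... is kindnet-cni, 6f355202fdbbe... is storage-provisioner, 308f1d6d730a2... is coredns).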
	
	
	==> coredns [308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41169 - 50662 "HINFO IN 4844345484503832019.4449023886173755708. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011300932s
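
The Corefile behind this CoreDNS instance already includes the rewrite applied at 11:12:59 in the run log above, where the coredns ConfigMap is piped through sed before being replaced. Reconstructed from that sed expression, the injected stanza (which makes host.minikube.internal resolve to the host gateway, alongside a log directive added under errors) is:

        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }

The "host record injected into CoreDNS's ConfigMap" line at 11:12:59.965906 confirms the edit took effect. The random-label HINFO query answered NXDOMAIN above is CoreDNS's loop plugin probing for a forwarding loop on startup; NXDOMAIN means none was detected.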
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-006978
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-006978
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=default-k8s-diff-port-006978
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_12_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:12:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-006978
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:13:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:13:04 +0000   Mon, 16 Sep 2024 11:12:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:13:04 +0000   Mon, 16 Sep 2024 11:12:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:13:04 +0000   Mon, 16 Sep 2024 11:12:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:13:04 +0000   Mon, 16 Sep 2024 11:12:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-006978
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f862fabd65249baa1ce0a392f842af0
	  System UUID:                15408216-8343-44b6-bf08-785f58970e8a
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-sc74v                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19s
	  kube-system                 etcd-default-k8s-diff-port-006978                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         24s
	  kube-system                 kindnet-njckk                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-006978             250m (3%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-006978    200m (2%)     0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-proxy-2mcbv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-scheduler-default-k8s-diff-port-006978             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 18s                kube-proxy       
	  Normal   NodeHasSufficientMemory  30s (x8 over 30s)  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    30s (x7 over 30s)  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     30s (x7 over 30s)  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  30s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 25s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 25s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  24s                kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24s                kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     24s                kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           20s                node-controller  Node default-k8s-diff-port-006978 event: Registered Node default-k8s-diff-port-006978 in Controller
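
The "Allocated resources" figures follow directly from the pod table above: CPU requests are 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the node's 8000m capacity is about 10.6%, printed as 10%. The only CPU limit is kindnet's 100m (1%). Memory requests are 70Mi + 100Mi + 50Mi = 220Mi, roughly 0.7% of the 32859316Ki capacity, which rounds down to the 0% shown.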
	
	
	==> dmesg <==
	[  +0.000003] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +1.007060] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000007] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +2.015770] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000006] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +4.191585] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000008] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000002] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +8.191312] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000007] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000002] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000002] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
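
The "martian source" entries are the kernel (with the log_martians sysctl enabled, i.e. net.ipv4.conf.*.log_martians, as appears to be the case on these CI hosts) reporting packets sourced from 10.96.0.1, the kubernetes Service ClusterIP allocated at 11:12:52 in the kube-apiserver log below, arriving on the Docker bridge br-5c8d67185b35 where that address is not routable. The link-layer header even identifies the sender: Docker encodes a container's IPv4 address into its MAC after an 02:42 prefix, so the source MAC 02 42 c0 a8 55 02 decodes to 192.168.85.2, a container on a different profile's network. Such lines are routine noise for Kubernetes-in-Docker runs rather than a failure by themselves.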
	
	
	==> etcd [06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09] <==
	{"level":"info","ts":"2024-09-16T11:12:49.134376Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:12:49.134546Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-09-16T11:12:49.134585Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-09-16T11:12:49.135386Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:12:49.135426Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:12:49.463407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:12:49.463466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:12:49.463496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2024-09-16T11:12:49.463511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.463520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.463536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.463550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.464413Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:49.465019Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-006978 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:12:49.465072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:49.465051Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:49.465386Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:49.465431Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:49.466272Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:49.467109Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-09-16T11:12:49.467934Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:49.468738Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:12:49.470945Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:49.471063Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:49.471097Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 11:13:18 up 55 min,  0 users,  load average: 2.05, 2.94, 2.23
	Linux default-k8s-diff-port-006978 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3] <==
	I0916 11:13:00.621671       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:13:00.621894       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0916 11:13:00.622047       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:13:00.622069       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:13:00.622081       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:13:00.948822       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:13:00.948853       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:13:00.948861       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:13:01.249787       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:13:01.249852       1 metrics.go:61] Registering metrics
	I0916 11:13:01.249923       1 controller.go:374] Syncing nftables rules
	I0916 11:13:10.948665       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:13:10.948741       1 main.go:299] handling current node
	
	
	==> kube-apiserver [bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862] <==
	I0916 11:12:51.521678       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:12:51.521686       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:12:51.521692       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:12:51.521698       1 cache.go:39] Caches are synced for autoregister controller
	E0916 11:12:51.525231       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0916 11:12:51.549075       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 11:12:51.549115       1 policy_source.go:224] refreshing policies
	E0916 11:12:51.574330       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0916 11:12:51.622819       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 11:12:51.728150       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:12:52.373099       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:12:52.376786       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:12:52.376801       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:12:52.843917       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:12:52.881093       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:12:52.928507       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:12:52.934298       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0916 11:12:52.935302       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:12:52.939383       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:12:53.449184       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:12:54.158602       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:12:54.171876       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:12:54.180069       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:12:58.404034       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 11:12:59.262881       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e] <==
	I0916 11:12:58.390353       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 11:12:58.399656       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0916 11:12:58.400114       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-006978"
	I0916 11:12:58.401355       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 11:12:58.405515       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:12:58.409038       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:12:58.817951       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:12:58.898115       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:12:58.898137       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:12:59.210806       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-006978"
	I0916 11:12:59.426362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="1.017098138s"
	I0916 11:12:59.433522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.101603ms"
	I0916 11:12:59.433635       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="61.021µs"
	I0916 11:12:59.520137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="123.944µs"
	I0916 11:12:59.539861       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="150.746µs"
	I0916 11:13:00.058053       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="15.011457ms"
	I0916 11:13:00.126185       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.061499ms"
	I0916 11:13:00.126320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="81.97µs"
	I0916 11:13:01.081758       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.254µs"
	I0916 11:13:01.086415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="103.992µs"
	I0916 11:13:01.089510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="69.774µs"
	I0916 11:13:04.291467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-006978"
	I0916 11:13:16.102024       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="80.781µs"
	I0916 11:13:16.119318       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.696792ms"
	I0916 11:13:16.119456       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="86.29µs"
	
	
	==> kube-proxy [947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce] <==
	I0916 11:13:00.253825       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:13:00.407401       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0916 11:13:00.407487       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:13:00.429078       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:13:00.429182       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:13:00.432606       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:13:00.434317       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:13:00.434355       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:13:00.436922       1 config.go:199] "Starting service config controller"
	I0916 11:13:00.436961       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:13:00.436992       1 config.go:328] "Starting node config controller"
	I0916 11:13:00.436998       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:13:00.437231       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:13:00.437259       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:13:00.537105       1 shared_informer.go:320] Caches are synced for node config
	I0916 11:13:00.537113       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:13:00.538249       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5] <==
	W0916 11:12:51.533950       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:12:51.533967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534038       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:51.534052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534137       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:51.534151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:51.534236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:12:51.534355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:12:51.534374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534388       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:12:51.534396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.339002       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:12:52.339046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.406598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:52.406652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.413957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:52.413997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.416027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:12:52.416071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.594671       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:12:52.594714       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:12:54.631845       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:12:59.521986    1604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5655635d-c5e6-4043-b178-77f3df972e86-config-volume\") pod \"coredns-7c65d6cfc9-sc74v\" (UID: \"5655635d-c5e6-4043-b178-77f3df972e86\") " pod="kube-system/coredns-7c65d6cfc9-sc74v"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:12:59.531287    1604 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.832137    1604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\": failed to find network info for sandbox \"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\""
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.832228    1604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\": failed to find network info for sandbox \"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\"" pod="kube-system/coredns-7c65d6cfc9-827n4"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.832258    1604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\": failed to find network info for sandbox \"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\"" pod="kube-system/coredns-7c65d6cfc9-827n4"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.832331    1604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-827n4_kube-system(950246f0-ecc4-4b7c-b89b-09a027a772d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-827n4_kube-system(950246f0-ecc4-4b7c-b89b-09a027a772d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\\\": failed to find network info for sandbox \\\"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\\\"\"" pod="kube-system/coredns-7c65d6cfc9-827n4" podUID="950246f0-ecc4-4b7c-b89b-09a027a772d0"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.853684    1604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\": failed to find network info for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\""
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.853777    1604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\": failed to find network info for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\"" pod="kube-system/coredns-7c65d6cfc9-sc74v"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.853808    1604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\": failed to find network info for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\"" pod="kube-system/coredns-7c65d6cfc9-sc74v"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.853861    1604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-sc74v_kube-system(5655635d-c5e6-4043-b178-77f3df972e86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-sc74v_kube-system(5655635d-c5e6-4043-b178-77f3df972e86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\\\": failed to find network info for sandbox \\\"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\\\"\"" pod="kube-system/coredns-7c65d6cfc9-sc74v" podUID="5655635d-c5e6-4043-b178-77f3df972e86"
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.126731    1604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/950246f0-ecc4-4b7c-b89b-09a027a772d0-config-volume\") pod \"950246f0-ecc4-4b7c-b89b-09a027a772d0\" (UID: \"950246f0-ecc4-4b7c-b89b-09a027a772d0\") "
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.126802    1604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f8xl\" (UniqueName: \"kubernetes.io/projected/950246f0-ecc4-4b7c-b89b-09a027a772d0-kube-api-access-6f8xl\") pod \"950246f0-ecc4-4b7c-b89b-09a027a772d0\" (UID: \"950246f0-ecc4-4b7c-b89b-09a027a772d0\") "
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.127337    1604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/950246f0-ecc4-4b7c-b89b-09a027a772d0-config-volume" (OuterVolumeSpecName: "config-volume") pod "950246f0-ecc4-4b7c-b89b-09a027a772d0" (UID: "950246f0-ecc4-4b7c-b89b-09a027a772d0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.131029    1604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/950246f0-ecc4-4b7c-b89b-09a027a772d0-kube-api-access-6f8xl" (OuterVolumeSpecName: "kube-api-access-6f8xl") pod "950246f0-ecc4-4b7c-b89b-09a027a772d0" (UID: "950246f0-ecc4-4b7c-b89b-09a027a772d0"). InnerVolumeSpecName "kube-api-access-6f8xl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.227037    1604 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6f8xl\" (UniqueName: \"kubernetes.io/projected/950246f0-ecc4-4b7c-b89b-09a027a772d0-kube-api-access-6f8xl\") on node \"default-k8s-diff-port-006978\" DevicePath \"\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.227076    1604 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/950246f0-ecc4-4b7c-b89b-09a027a772d0-config-volume\") on node \"default-k8s-diff-port-006978\" DevicePath \"\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.428505    1604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/08708819-cf0d-4505-a1f0-5563be02bd8c-tmp\") pod \"storage-provisioner\" (UID: \"08708819-cf0d-4505-a1f0-5563be02bd8c\") " pod="kube-system/storage-provisioner"
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.428562    1604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4hzl\" (UniqueName: \"kubernetes.io/projected/08708819-cf0d-4505-a1f0-5563be02bd8c-kube-api-access-h4hzl\") pod \"storage-provisioner\" (UID: \"08708819-cf0d-4505-a1f0-5563be02bd8c\") " pod="kube-system/storage-provisioner"
	Sep 16 11:13:01 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:01.069502    1604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2mcbv" podStartSLOduration=2.069481573 podStartE2EDuration="2.069481573s" podCreationTimestamp="2024-09-16 11:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:13:01.069398727 +0000 UTC m=+7.138484871" watchObservedRunningTime="2024-09-16 11:13:01.069481573 +0000 UTC m=+7.138567716"
	Sep 16 11:13:01 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:01.098813    1604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.098790323 podStartE2EDuration="1.098790323s" podCreationTimestamp="2024-09-16 11:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:13:01.09871627 +0000 UTC m=+7.167802413" watchObservedRunningTime="2024-09-16 11:13:01.098790323 +0000 UTC m=+7.167876468"
	Sep 16 11:13:01 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:01.108472    1604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-njckk" podStartSLOduration=2.108444312 podStartE2EDuration="2.108444312s" podCreationTimestamp="2024-09-16 11:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:13:01.108144012 +0000 UTC m=+7.177230167" watchObservedRunningTime="2024-09-16 11:13:01.108444312 +0000 UTC m=+7.177530457"
	Sep 16 11:13:02 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:02.037418    1604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="950246f0-ecc4-4b7c-b89b-09a027a772d0" path="/var/lib/kubelet/pods/950246f0-ecc4-4b7c-b89b-09a027a772d0/volumes"
	Sep 16 11:13:04 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:04.281799    1604 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 11:13:04 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:04.282674    1604 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 11:13:16 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:16.113460    1604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sc74v" podStartSLOduration=17.113430858 podStartE2EDuration="17.113430858s" podCreationTimestamp="2024-09-16 11:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:13:16.102461271 +0000 UTC m=+22.171547414" watchObservedRunningTime="2024-09-16 11:13:16.113430858 +0000 UTC m=+22.182517002"
	
	
	==> storage-provisioner [6f355202fdbbeadd28797b1e19193703a5ee560921eef405227ef9296cb15a44] <==
	I0916 11:13:00.842140       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:13:00.849696       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:13:00.849745       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:13:00.858720       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:13:00.858874       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-006978_6e2d25a8-69d9-42c8-bc96-a993683990b2!
	I0916 11:13:00.860283       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48271e48-bb5a-477f-91cc-b9e1963cd811", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-006978_6e2d25a8-69d9-42c8-bc96-a993683990b2 became leader
	I0916 11:13:00.959514       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-006978_6e2d25a8-69d9-42c8-bc96-a993683990b2!
	

-- /stdout --
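The kubelet block above records transient CreatePodSandbox failures for both coredns replicas ("failed to find network info for sandbox") while the CNI network was still being configured: the pod CIDR is only pushed at 11:13:04 and coredns reaches Running by 11:13:16, so these errors are start-up noise rather than this test's failure. A hedged sketch of how one might confirm the CNI config landed inside the node (the profile name is from this run; /etc/cni/net.d is the conventional CNI config directory, assumed rather than shown in this log):

	out/minikube-linux-amd64 -p default-k8s-diff-port-006978 ssh -- ls -l /etc/cni/net.d
	out/minikube-linux-amd64 -p default-k8s-diff-port-006978 ssh -- sudo crictl pods --state NotReady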
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-006978 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-006978 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (531.676µs)
helpers_test.go:263: kubectl --context default-k8s-diff-port-006978 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
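The root cause surfaces here rather than in the cluster logs: "fork/exec /usr/local/bin/kubectl: exec format error" (ENOEXEC) means the kernel refused to execute the kubectl binary at all, which typically indicates an architecture mismatch or a truncated/empty binary, so every kubectl-based assertion in this post-mortem fails the same way. A hedged diagnostic sketch (assumes the standard file/xxd utilities on the agent; the kubectl path is taken from the error above):

	file /usr/local/bin/kubectl              # expect: ELF 64-bit LSB executable, x86-64 ...
	uname -m                                 # host architecture; this agent reports x86_64 (kvm/amd64)
	head -c 4 /usr/local/bin/kubectl | xxd   # a valid ELF binary begins with 7f 45 4c 46 ("\x7fELF")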
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-006978
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-006978:

-- stdout --
	[
	    {
	        "Id": "92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751",
	        "Created": "2024-09-16T11:12:40.853683512Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 303954,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:12:40.986877852Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/hostname",
	        "HostsPath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/hosts",
	        "LogPath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751-json.log",
	        "Name": "/default-k8s-diff-port-006978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-006978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-006978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-006978",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-006978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-006978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-006978",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-006978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "962cb6cc39d91026d44c9cf4daa9dd57b47deeb7041f7aa51db91e46b312ce38",
	            "SandboxKey": "/var/run/docker/netns/962cb6cc39d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-006978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "77357235afcef96415382e78c67fcc53123318fac9325f81acae0f265d8eb86e",
	                    "EndpointID": "b4935c247e07031b1781430c46c0d3d9e9e0bcc8919b1161f872a08294783641",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-006978",
	                        "92220cda3aab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
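Individual fields can be pulled from this JSON without scanning it by eye; a minimal sketch using docker inspect's Go-template --format flag (the container name and the 8444/tcp API-server port are taken from the output above):

	docker inspect default-k8s-diff-port-006978 \
	  --format '{{ (index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort }}'
	# prints 33091, the 127.0.0.1 port backing --apiserver-port=8444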
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-006978 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-006978 logs -n 25: (1.086593735s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | force-systemd-flag-917705                              | force-systemd-flag-917705    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | ssh cat                                                |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| delete  | -p force-systemd-flag-917705                           | force-systemd-flag-917705    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| ssh     | cert-options-840054 ssh                                | cert-options-840054          | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-840054 -- sudo                         | cert-options-840054          | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-840054                                 | cert-options-840054          | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-349453             | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-349453                  | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-371039        | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-371039                              | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-371039             | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-021107                              | cert-expiration-021107       | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-021107                              | cert-expiration-021107       | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	| start   | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-679624            | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-679624                 | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911    | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	| delete  | -p                                                     | disable-driver-mounts-852440 | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | disable-driver-mounts-852440                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:13 UTC |
	|         | default-k8s-diff-port-006978                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:12:33
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:12:33.188304  303072 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:12:33.188581  303072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:12:33.188593  303072 out.go:358] Setting ErrFile to fd 2...
	I0916 11:12:33.188598  303072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:12:33.188783  303072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:12:33.189413  303072 out.go:352] Setting JSON to false
	I0916 11:12:33.190969  303072 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3297,"bootTime":1726481856,"procs":383,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:12:33.191086  303072 start.go:139] virtualization: kvm guest
	I0916 11:12:33.193702  303072 out.go:177] * [default-k8s-diff-port-006978] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:12:33.195341  303072 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:12:33.195410  303072 notify.go:220] Checking for updates...
	I0916 11:12:33.198592  303072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:12:33.199962  303072 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:12:33.201287  303072 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:12:33.202689  303072 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:12:33.204109  303072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:12:33.206233  303072 config.go:182] Loaded profile config "embed-certs-679624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:33.206402  303072 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:33.206535  303072 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:12:33.206656  303072 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:12:33.233320  303072 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:12:33.233448  303072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:12:33.298402  303072 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:12:33.288078298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:12:33.298570  303072 docker.go:318] overlay module found
	I0916 11:12:33.301953  303072 out.go:177] * Using the docker driver based on user configuration
	I0916 11:12:33.303332  303072 start.go:297] selected driver: docker
	I0916 11:12:33.303349  303072 start.go:901] validating driver "docker" against <nil>
	I0916 11:12:33.303362  303072 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:12:33.304321  303072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:12:33.369824  303072 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:12:33.356912728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:12:33.370078  303072 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:12:33.370327  303072 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:12:33.372698  303072 out.go:177] * Using Docker driver with root privileges
	I0916 11:12:33.374242  303072 cni.go:84] Creating CNI manager for ""
	I0916 11:12:33.374302  303072 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:12:33.374313  303072 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:12:33.374391  303072 start.go:340] cluster config:
	{Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:12:33.375879  303072 out.go:177] * Starting "default-k8s-diff-port-006978" primary control-plane node in "default-k8s-diff-port-006978" cluster
	I0916 11:12:33.377330  303072 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:12:33.378788  303072 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:12:33.380265  303072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:12:33.380313  303072 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 11:12:33.380331  303072 cache.go:56] Caching tarball of preloaded images
	I0916 11:12:33.380387  303072 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:12:33.380431  303072 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 11:12:33.380447  303072 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 11:12:33.380593  303072 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/config.json ...
	I0916 11:12:33.380632  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/config.json: {Name:mk8dc034cf5d1663f163d44cacb1db0a697f761d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:12:33.405013  303072 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:12:33.405039  303072 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:12:33.405136  303072 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:12:33.405159  303072 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:12:33.405165  303072 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:12:33.405174  303072 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:12:33.405185  303072 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:12:33.466107  303072 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:12:33.466152  303072 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:12:33.466198  303072 start.go:360] acquireMachinesLock for default-k8s-diff-port-006978: {Name:mke54f99fcd9e320f7c2bc8102220e65af70efd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:12:33.466306  303072 start.go:364] duration metric: took 80.59µs to acquireMachinesLock for "default-k8s-diff-port-006978"
	I0916 11:12:33.466338  303072 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:12:33.466439  303072 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:12:30.238757  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:32.239283  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:30.649566  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:33.149628  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:33.273175  283294 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:35.769915  283294 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
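The pod_ready.go lines interleaved above come from three other test profiles (PIDs 274695, 298514, 283294) running in parallel and polling pod conditions until they flip to "Ready" or a deadline expires. A minimal sketch of that poll-until-ready pattern in Go; this is not minikube's actual code, and the probe below is a hypothetical stand-in for a real status check:

// poll_ready.go: the deadline-bounded polling loop behind the
// "waiting up to 6m0s for pod ..." log lines (illustrative sketch).
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil calls check every interval until it reports true or the
// timeout elapses.
func pollUntil(interval, timeout time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	// Hypothetical probe: pretend the pod turns Ready after ~3 seconds.
	err := pollUntil(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		return time.Since(start) > 3*time.Second, nil
	})
	fmt.Println("ready:", err == nil)
}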
	I0916 11:12:33.469116  303072 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:12:33.469464  303072 start.go:159] libmachine.API.Create for "default-k8s-diff-port-006978" (driver="docker")
	I0916 11:12:33.469509  303072 client.go:168] LocalClient.Create starting
	I0916 11:12:33.469613  303072 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 11:12:33.469669  303072 main.go:141] libmachine: Decoding PEM data...
	I0916 11:12:33.469693  303072 main.go:141] libmachine: Parsing certificate...
	I0916 11:12:33.469766  303072 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 11:12:33.469792  303072 main.go:141] libmachine: Decoding PEM data...
	I0916 11:12:33.469803  303072 main.go:141] libmachine: Parsing certificate...
	I0916 11:12:33.470217  303072 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-006978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:12:33.489304  303072 cli_runner.go:211] docker network inspect default-k8s-diff-port-006978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:12:33.489379  303072 network_create.go:284] running [docker network inspect default-k8s-diff-port-006978] to gather additional debugging logs...
	I0916 11:12:33.489410  303072 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-006978
	W0916 11:12:33.511223  303072 cli_runner.go:211] docker network inspect default-k8s-diff-port-006978 returned with exit code 1
	I0916 11:12:33.511273  303072 network_create.go:287] error running [docker network inspect default-k8s-diff-port-006978]: docker network inspect default-k8s-diff-port-006978: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-006978 not found
	I0916 11:12:33.511289  303072 network_create.go:289] output of [docker network inspect default-k8s-diff-port-006978]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-006978 not found
	
	** /stderr **
	I0916 11:12:33.511384  303072 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:12:33.531001  303072 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c95c64bb41bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bd:76:ab:c4} reservation:<nil>}
	I0916 11:12:33.532177  303072 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fad43aa9929b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:3f:61:fd} reservation:<nil>}
	I0916 11:12:33.533347  303072 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-49585fce923a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ad:45:94:54} reservation:<nil>}
	I0916 11:12:33.534648  303072 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bd36e0}
	I0916 11:12:33.534681  303072 network_create.go:124] attempt to create docker network default-k8s-diff-port-006978 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0916 11:12:33.534740  303072 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-006978 default-k8s-diff-port-006978
	I0916 11:12:33.610090  303072 network_create.go:108] docker network default-k8s-diff-port-006978 192.168.76.0/24 created
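network.go walks candidate private /24 subnets, skipping any already claimed by an existing bridge, until it finds a free one; here 192.168.76.0/24 is chosen after 49, 58, and 67 were taken. A rough sketch of that scan, assuming the step of 9 in the third octet that the log output suggests, and with a hard-coded "taken" set standing in for a real docker network inspect:

// subnet_scan.go: illustrative free-subnet search (not minikube's code).
package main

import "fmt"

func main() {
	// In reality these come from inspecting existing docker bridges.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr) // -> 192.168.76.0/24
		break
	}
}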
	I0916 11:12:33.610127  303072 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-006978" container
	I0916 11:12:33.610214  303072 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:12:33.632805  303072 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-006978 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-006978 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:12:33.655257  303072 oci.go:103] Successfully created a docker volume default-k8s-diff-port-006978
	I0916 11:12:33.655345  303072 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-006978-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-006978 --entrypoint /usr/bin/test -v default-k8s-diff-port-006978:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:12:34.731781  303072 cli_runner.go:217] Completed: docker run --rm --name default-k8s-diff-port-006978-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-006978 --entrypoint /usr/bin/test -v default-k8s-diff-port-006978:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (1.076336693s)
	I0916 11:12:34.731816  303072 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-006978
	I0916 11:12:34.731846  303072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:12:34.731872  303072 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:12:34.731946  303072 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-006978:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 11:12:34.739277  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:37.239310  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:35.149722  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:37.650375  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:37.770648  283294 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:37.770673  283294 pod_ready.go:82] duration metric: took 11.007535908s for pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:37.770686  283294 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:39.777546  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:40.784380  303072 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-006978:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (6.052385534s)
	I0916 11:12:40.784418  303072 kic.go:203] duration metric: took 6.052542506s to extract preloaded images to volume ...
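The extraction step above uses a disposable container as a tar sidecar: the preload tarball is bind-mounted read-only, the named volume is mounted at /extractDir, and tar unpacks one into the other so the node container later starts with its images already in place. A minimal sketch of the same pattern via os/exec; the tarball path, volume name, and image tag are placeholders, not minikube's API:

// extract_preload.go: sidecar-extraction pattern from the log above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	tarball := "/path/to/preloaded-images.tar.lz4"        // placeholder
	volume := "my-cluster-var"                            // docker volume to fill
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.45" // placeholder tag

	// Run a throwaway container whose entrypoint is tar and untar the
	// lz4 tarball straight into the mounted volume.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Println("preloaded images extracted into volume", volume)
}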
	W0916 11:12:40.784564  303072 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:12:40.784661  303072 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:12:40.837569  303072 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-006978 --name default-k8s-diff-port-006978 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-006978 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-006978 --network default-k8s-diff-port-006978 --ip 192.168.76.2 --volume default-k8s-diff-port-006978:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:12:41.151535  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Running}}
	I0916 11:12:41.171308  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:41.190871  303072 cli_runner.go:164] Run: docker exec default-k8s-diff-port-006978 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:12:41.233480  303072 oci.go:144] the created container "default-k8s-diff-port-006978" has a running status.
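The docker run above is what turns a plain container into a "kic" node: privileged mode, tmpfs for /run and /tmp, read-only /lib/modules from the host, the cluster network with a static IP, and published loopback ports for SSH and the API server. A trimmed sketch keeping only those flags, with placeholder names (the network would have to exist beforehand):

// node_container.go: reduced form of the kic node launch (illustrative).
package main

import (
	"log"
	"os/exec"
)

func main() {
	name := "demo-node"                                   // placeholder
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.45" // placeholder tag

	cmd := exec.Command("docker", "run", "-d", "-t",
		"--privileged",
		"--security-opt", "seccomp=unconfined",
		"--tmpfs", "/tmp", "--tmpfs", "/run",
		"-v", "/lib/modules:/lib/modules:ro",
		"--hostname", name, "--name", name,
		"--network", "demo-net", // pre-created bridge, placeholder
		"--ip", "192.168.76.2",
		"--memory=2200mb", "--cpus=2",
		"--publish=127.0.0.1::8444", // API server on the non-default port
		image)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("docker run failed: %v\n%s", err, out)
	}
	log.Println("node container started")
}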
	I0916 11:12:41.233522  303072 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa...
	I0916 11:12:41.414049  303072 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:12:41.436388  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:41.455455  303072 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:12:41.455481  303072 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-006978 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:12:41.511490  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:41.540142  303072 machine.go:93] provisionDockerMachine start ...
	I0916 11:12:41.540258  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:41.563377  303072 main.go:141] libmachine: Using SSH client type: native
	I0916 11:12:41.563597  303072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:12:41.563607  303072 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:12:41.821654  303072 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-006978
	
	I0916 11:12:41.821689  303072 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-006978"
	I0916 11:12:41.821753  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:41.840337  303072 main.go:141] libmachine: Using SSH client type: native
	I0916 11:12:41.840544  303072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:12:41.840564  303072 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-006978 && echo "default-k8s-diff-port-006978" | sudo tee /etc/hostname
	I0916 11:12:41.992032  303072 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-006978
	
	I0916 11:12:41.992120  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.009447  303072 main.go:141] libmachine: Using SSH client type: native
	I0916 11:12:42.009695  303072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:12:42.009733  303072 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-006978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-006978/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-006978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:12:42.148459  303072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:12:42.148487  303072 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:12:42.148510  303072 ubuntu.go:177] setting up certificates
	I0916 11:12:42.148538  303072 provision.go:84] configureAuth start
	I0916 11:12:42.148598  303072 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-006978
	I0916 11:12:42.166372  303072 provision.go:143] copyHostCerts
	I0916 11:12:42.166428  303072 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:12:42.166436  303072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:12:42.166501  303072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:12:42.166586  303072 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:12:42.166595  303072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:12:42.166621  303072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:12:42.166674  303072 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:12:42.166682  303072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:12:42.166703  303072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:12:42.166753  303072 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-006978 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-006978 localhost minikube]
	I0916 11:12:42.306401  303072 provision.go:177] copyRemoteCerts
	I0916 11:12:42.306461  303072 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:12:42.306495  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.323490  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:42.420814  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:12:42.443662  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0916 11:12:42.466807  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 11:12:42.490564  303072 provision.go:87] duration metric: took 342.007302ms to configureAuth
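configureAuth generates a server certificate whose SANs cover every address the machine may be reached by (127.0.0.1, 192.168.76.2, the profile name, localhost, minikube, as logged above). A compact sketch of issuing such a certificate with crypto/x509; minikube signs against its own CA, while this sketch self-signs to stay self-contained:

// server_cert.go: server cert with IP and DNS SANs (illustrative).
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-006978"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"default-k8s-diff-port-006978", "localhost", "minikube"},
	}
	// Self-signed here (template doubles as parent); minikube uses its CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}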
	I0916 11:12:42.490593  303072 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:12:42.490820  303072 config.go:182] Loaded profile config "default-k8s-diff-port-006978": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:42.490837  303072 machine.go:96] duration metric: took 950.665124ms to provisionDockerMachine
	I0916 11:12:42.490846  303072 client.go:171] duration metric: took 9.021328095s to LocalClient.Create
	I0916 11:12:42.490871  303072 start.go:167] duration metric: took 9.02141907s to libmachine.API.Create "default-k8s-diff-port-006978"
	I0916 11:12:42.490884  303072 start.go:293] postStartSetup for "default-k8s-diff-port-006978" (driver="docker")
	I0916 11:12:42.490896  303072 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:12:42.490957  303072 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:12:42.491009  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.508314  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:42.606294  303072 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:12:42.609598  303072 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:12:42.609636  303072 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:12:42.609645  303072 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:12:42.609651  303072 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:12:42.609662  303072 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:12:42.609720  303072 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:12:42.609807  303072 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:12:42.609896  303072 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:12:42.618062  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:12:42.641241  303072 start.go:296] duration metric: took 150.341833ms for postStartSetup
	I0916 11:12:42.641601  303072 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-006978
	I0916 11:12:42.660638  303072 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/config.json ...
	I0916 11:12:42.660910  303072 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:12:42.660959  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.681352  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:42.773016  303072 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:12:42.777639  303072 start.go:128] duration metric: took 9.311183024s to createHost
	I0916 11:12:42.777671  303072 start.go:83] releasing machines lock for "default-k8s-diff-port-006978", held for 9.311348572s
	I0916 11:12:42.777730  303072 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-006978
	I0916 11:12:42.794671  303072 ssh_runner.go:195] Run: cat /version.json
	I0916 11:12:42.794729  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.794734  303072 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:12:42.794809  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.812760  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:42.812961  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:42.983965  303072 ssh_runner.go:195] Run: systemctl --version
	I0916 11:12:42.988168  303072 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:12:42.992468  303072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:12:43.016957  303072 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:12:43.017041  303072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:12:43.045266  303072 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 11:12:43.045297  303072 start.go:495] detecting cgroup driver to use...
	I0916 11:12:43.045326  303072 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:12:43.045377  303072 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:12:43.057420  303072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:12:43.068346  303072 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:12:43.068404  303072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:12:43.081261  303072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:12:43.094734  303072 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:12:43.175775  303072 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:12:39.737995  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:41.742098  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:43.261975  303072 docker.go:233] disabling docker service ...
	I0916 11:12:43.262038  303072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:12:43.282995  303072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:12:43.295522  303072 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:12:43.379559  303072 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:12:43.459884  303072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:12:43.472400  303072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:12:43.487862  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:12:43.497717  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:12:43.507197  303072 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:12:43.507271  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:12:43.516769  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:12:43.526040  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:12:43.535489  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:12:43.545566  303072 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:12:43.554727  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:12:43.564652  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:12:43.574313  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 11:12:43.584261  303072 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:12:43.592172  303072 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:12:43.600336  303072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:43.675960  303072 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 11:12:43.777200  303072 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:12:43.777376  303072 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
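After restarting containerd, minikube blocks on the CRI socket appearing rather than sleeping a fixed interval ("Will wait 60s for socket path /run/containerd/containerd.sock"). A sketch of that wait as a dial-until-deadline loop; the socket path is the one from the log, and connecting may require root:

// wait_socket.go: poll a unix socket until it accepts a connection
// or the deadline hits (illustrative sketch).
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is up")
}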
	I0916 11:12:43.781384  303072 start.go:563] Will wait 60s for crictl version
	I0916 11:12:43.781440  303072 ssh_runner.go:195] Run: which crictl
	I0916 11:12:43.784718  303072 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:12:43.817809  303072 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:12:43.817866  303072 ssh_runner.go:195] Run: containerd --version
	I0916 11:12:43.839994  303072 ssh_runner.go:195] Run: containerd --version
	I0916 11:12:43.868789  303072 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:12:40.149138  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:42.149777  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:44.150077  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:43.870264  303072 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-006978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:12:43.887693  303072 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0916 11:12:43.891552  303072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:12:43.902196  303072 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:12:43.902316  303072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:12:43.902363  303072 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:12:43.933503  303072 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:12:43.933524  303072 containerd.go:534] Images already preloaded, skipping extraction
	I0916 11:12:43.933574  303072 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:12:43.966712  303072 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:12:43.966739  303072 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:12:43.966750  303072 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.31.1 containerd true true} ...
	I0916 11:12:43.966868  303072 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-006978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:12:43.966924  303072 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:12:44.000346  303072 cni.go:84] Creating CNI manager for ""
	I0916 11:12:44.000368  303072 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:12:44.000378  303072 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:12:44.000397  303072 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-006978 NodeName:default-k8s-diff-port-006978 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:12:44.000529  303072 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-006978"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:12:44.000585  303072 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:12:44.009158  303072 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:12:44.009228  303072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:12:44.017370  303072 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0916 11:12:44.034166  303072 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:12:44.050711  303072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2182 bytes)
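The kubeadm config dumped above is rendered from the cluster profile (note bindPort 8444, the non-default API server port this test exercises) and then copied to the node as kubeadm.yaml.new. A minimal sketch of rendering the InitConfiguration section from a few parameters with text/template; the field values mirror the YAML above, but this is not minikube's actual template:

// kubeadm_tmpl.go: render an InitConfiguration from parameters.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	params := struct {
		NodeIP   string
		Port     int
		NodeName string
	}{"192.168.76.2", 8444, "default-k8s-diff-port-006978"}
	template.Must(template.New("init").Parse(initCfg)).Execute(os.Stdout, params)
}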
	I0916 11:12:44.068227  303072 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:12:44.071437  303072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
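The /etc/hosts update above is made idempotent by filtering out any existing line for the name before appending the fresh mapping, then copying a temp file over the original. A sketch of the same rewrite in Go, pointed at a scratch file so it can run without root; the real target in the log is /etc/hosts:

// hosts_update.go: drop any existing entry for host, append the new
// ip<TAB>host mapping, and replace the file via a temp copy.
package main

import (
	"os"
	"strings"
)

func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	// Demo against a scratch file; running twice leaves a single entry.
	_ = setHostsEntry("hosts.demo", "192.168.76.2", "control-plane.minikube.internal")
}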
	I0916 11:12:44.081858  303072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:44.151949  303072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:12:44.165524  303072 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978 for IP: 192.168.76.2
	I0916 11:12:44.165552  303072 certs.go:194] generating shared ca certs ...
	I0916 11:12:44.165574  303072 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:44.165741  303072 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:12:44.165796  303072 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:12:44.165809  303072 certs.go:256] generating profile certs ...
	I0916 11:12:44.165876  303072 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.key
	I0916 11:12:44.165895  303072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt with IP's: []
	I0916 11:12:44.646752  303072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt ...
	I0916 11:12:44.646790  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: {Name:mk5fe57391c71a635bc2664646b46ecf8e7b30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:44.646990  303072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.key ...
	I0916 11:12:44.647008  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.key: {Name:mk8729c4f50d0dff1d65e22e9e0317a12cedc4f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:44.647122  303072 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key.9826bbf6
	I0916 11:12:44.647147  303072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt.9826bbf6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0916 11:12:44.927657  303072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt.9826bbf6 ...
	I0916 11:12:44.927689  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt.9826bbf6: {Name:mk507ad25443e7441acfbd74f84b0e53e00a318e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:44.927907  303072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key.9826bbf6 ...
	I0916 11:12:44.927928  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key.9826bbf6: {Name:mk9141b6cbd16a3cbc7444d9c738b092ec418bec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:44.928023  303072 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt.9826bbf6 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt
	I0916 11:12:44.928103  303072 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key.9826bbf6 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key
	I0916 11:12:44.928163  303072 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.key
	I0916 11:12:44.928181  303072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.crt with IP's: []
	I0916 11:12:45.016612  303072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.crt ...
	I0916 11:12:45.016645  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.crt: {Name:mkc1eab85fbe839e33e53386182e4a6afedec155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:45.016821  303072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.key ...
	I0916 11:12:45.016835  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.key: {Name:mk2b6e4ebf261029f43772640bda54fcc5f4921e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:45.017014  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:12:45.017057  303072 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:12:45.017069  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:12:45.017097  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:12:45.017125  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:12:45.017150  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:12:45.017223  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:12:45.018092  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:12:45.044144  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:12:45.069666  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:12:45.093991  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:12:45.118079  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 11:12:45.141384  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:12:45.165248  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:12:45.188695  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:12:45.211474  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:12:45.234691  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:12:45.260049  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:12:45.285743  303072 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:12:45.305003  303072 ssh_runner.go:195] Run: openssl version
	I0916 11:12:45.310350  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:12:45.319899  303072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:12:45.323233  303072 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:12:45.323295  303072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:12:45.330231  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:12:45.339365  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:12:45.348101  303072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:45.351431  303072 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:45.351477  303072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:45.358772  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:12:45.368221  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:12:45.377642  303072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:12:45.381484  303072 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:12:45.381557  303072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:12:45.388377  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 11:12:45.397630  303072 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:12:45.400839  303072 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:12:45.400901  303072 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
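
	For orientation, the StartCluster config dumped above corresponds roughly to a plain minikube invocation like the one below. This is a sketch inferred from the dumped fields (profile name, driver, runtime, API server port, memory/CPU/disk), not the test harness's actual command line:

	minikube start -p default-k8s-diff-port-006978 \
	    --driver=docker --container-runtime=containerd \
	    --kubernetes-version=v1.31.1 \
	    --apiserver-port=8444 \
	    --memory=2200 --cpus=2 --disk-size=20000mb   # values read off the config dump
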
	I0916 11:12:45.400985  303072 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:12:45.401042  303072 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:12:45.435932  303072 cri.go:89] found id: ""
	I0916 11:12:45.435998  303072 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:12:45.444621  303072 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:12:45.453465  303072 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:12:45.453540  303072 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:12:45.461982  303072 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:12:45.462009  303072 kubeadm.go:157] found existing configuration files:
	
	I0916 11:12:45.462058  303072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0916 11:12:45.471201  303072 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:12:45.471266  303072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:12:45.479819  303072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0916 11:12:45.488490  303072 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:12:45.488561  303072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:12:45.497296  303072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0916 11:12:45.505691  303072 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:12:45.505752  303072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:12:45.514164  303072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0916 11:12:45.522589  303072 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:12:45.522664  303072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:12:45.531758  303072 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:12:45.570283  303072 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:12:45.570392  303072 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:12:45.587093  303072 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:12:45.587194  303072 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:12:45.587259  303072 kubeadm.go:310] OS: Linux
	I0916 11:12:45.587364  303072 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:12:45.587437  303072 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:12:45.587506  303072 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:12:45.587575  303072 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:12:45.587661  303072 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:12:45.587775  303072 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:12:45.587850  303072 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:12:45.587917  303072 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:12:45.587985  303072 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:12:45.641793  303072 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:12:45.641930  303072 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:12:45.642053  303072 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:12:45.649963  303072 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:12:42.276706  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:44.277484  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:46.777255  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:45.652421  303072 out.go:235]   - Generating certificates and keys ...
	I0916 11:12:45.652552  303072 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:12:45.652614  303072 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:12:45.754915  303072 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:12:45.828601  303072 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:12:46.002610  303072 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:12:46.107849  303072 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:12:46.248800  303072 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:12:46.248969  303072 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-006978 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0916 11:12:46.391232  303072 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:12:46.391389  303072 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-006978 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0916 11:12:46.580271  303072 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:12:46.848235  303072 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:12:46.941777  303072 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:12:46.942069  303072 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:12:47.120540  303072 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:12:47.246837  303072 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:12:47.425713  303072 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:12:47.548056  303072 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:12:47.729107  303072 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:12:47.729658  303072 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:12:47.732291  303072 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:12:47.734653  303072 out.go:235]   - Booting up control plane ...
	I0916 11:12:47.734798  303072 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:12:47.734923  303072 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:12:47.735780  303072 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:12:47.746631  303072 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:12:47.752853  303072 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:12:47.752933  303072 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:12:47.837084  303072 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:12:47.837210  303072 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:12:44.237974  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:46.238843  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:48.738940  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:46.649823  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:48.650010  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:48.785616  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:51.277044  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:48.338749  303072 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.690295ms
	I0916 11:12:48.338844  303072 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:12:53.340729  303072 kubeadm.go:310] [api-check] The API server is healthy after 5.001910312s
	I0916 11:12:53.352535  303072 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:12:53.365374  303072 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:12:53.386596  303072 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:12:53.386790  303072 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-006978 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:12:53.394294  303072 kubeadm.go:310] [bootstrap-token] Using token: 21xlxs.cbzjnrzj5tox0go3
	I0916 11:12:51.240955  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:53.739148  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:53.395665  303072 out.go:235]   - Configuring RBAC rules ...
	I0916 11:12:53.395858  303072 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:12:53.400957  303072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:12:53.407354  303072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:12:53.410317  303072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:12:53.413111  303072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:12:53.415993  303072 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:12:53.748017  303072 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:12:54.173321  303072 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:12:54.747458  303072 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:12:54.748595  303072 kubeadm.go:310] 
	I0916 11:12:54.748708  303072 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:12:54.748726  303072 kubeadm.go:310] 
	I0916 11:12:54.748792  303072 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:12:54.748815  303072 kubeadm.go:310] 
	I0916 11:12:54.748848  303072 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:12:54.748905  303072 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:12:54.748949  303072 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:12:54.748956  303072 kubeadm.go:310] 
	I0916 11:12:54.749000  303072 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:12:54.749007  303072 kubeadm.go:310] 
	I0916 11:12:54.749052  303072 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:12:54.749058  303072 kubeadm.go:310] 
	I0916 11:12:54.749133  303072 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:12:54.749244  303072 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:12:54.749349  303072 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:12:54.749358  303072 kubeadm.go:310] 
	I0916 11:12:54.749466  303072 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:12:54.749577  303072 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:12:54.749587  303072 kubeadm.go:310] 
	I0916 11:12:54.749715  303072 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 21xlxs.cbzjnrzj5tox0go3 \
	I0916 11:12:54.749881  303072 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:12:54.749917  303072 kubeadm.go:310] 	--control-plane 
	I0916 11:12:54.749927  303072 kubeadm.go:310] 
	I0916 11:12:54.750057  303072 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:12:54.750065  303072 kubeadm.go:310] 
	I0916 11:12:54.750183  303072 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 21xlxs.cbzjnrzj5tox0go3 \
	I0916 11:12:54.750332  303072 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 11:12:54.753671  303072 kubeadm.go:310] W0916 11:12:45.567417    1138 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:12:54.753962  303072 kubeadm.go:310] W0916 11:12:45.568121    1138 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:12:54.754163  303072 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:12:54.754263  303072 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:12:54.754300  303072 cni.go:84] Creating CNI manager for ""
	I0916 11:12:54.754311  303072 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:12:54.756405  303072 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:12:50.650425  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:53.149646  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:53.776361  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:55.777283  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:54.757636  303072 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:12:54.761995  303072 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:12:54.762021  303072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:12:54.781299  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:12:54.997779  303072 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:12:54.997865  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:54.997925  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-006978 minikube.k8s.io/updated_at=2024_09_16T11_12_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=default-k8s-diff-port-006978 minikube.k8s.io/primary=true
	I0916 11:12:55.122417  303072 ops.go:34] apiserver oom_adj: -16
	I0916 11:12:55.122453  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:55.623042  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:56.123379  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:56.622627  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:57.122828  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:57.623249  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:58.122814  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:56.237961  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:58.238231  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:58.623435  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:59.122743  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:59.192533  303072 kubeadm.go:1113] duration metric: took 4.194732378s to wait for elevateKubeSystemPrivileges
	I0916 11:12:59.192570  303072 kubeadm.go:394] duration metric: took 13.791671494s to StartCluster
	I0916 11:12:59.192623  303072 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:59.192717  303072 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:12:59.194519  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:59.194804  303072 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:12:59.194808  303072 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:12:59.194880  303072 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:12:59.194998  303072 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-006978"
	I0916 11:12:59.195022  303072 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-006978"
	I0916 11:12:59.195023  303072 config.go:182] Loaded profile config "default-k8s-diff-port-006978": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:59.195039  303072 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-006978"
	I0916 11:12:59.195062  303072 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-006978"
	I0916 11:12:59.195069  303072 host.go:66] Checking if "default-k8s-diff-port-006978" exists ...
	I0916 11:12:59.195452  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:59.195657  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:59.196648  303072 out.go:177] * Verifying Kubernetes components...
	I0916 11:12:59.198253  303072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:59.227528  303072 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:12:55.150355  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:57.649659  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:59.650582  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:59.229043  303072 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:12:59.229072  303072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:12:59.229154  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:59.231097  303072 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-006978"
	I0916 11:12:59.231131  303072 host.go:66] Checking if "default-k8s-diff-port-006978" exists ...
	I0916 11:12:59.231438  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:59.260139  303072 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:12:59.260162  303072 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:12:59.260235  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:59.261484  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:59.286575  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:59.436277  303072 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:12:59.436326  303072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:12:59.548794  303072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:12:59.549612  303072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:12:59.965906  303072 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0916 11:12:59.967917  303072 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-006978" to be "Ready" ...
	I0916 11:13:00.025739  303072 node_ready.go:49] node "default-k8s-diff-port-006978" has status "Ready":"True"
	I0916 11:13:00.025770  303072 node_ready.go:38] duration metric: took 57.824532ms for node "default-k8s-diff-port-006978" to be "Ready" ...
	I0916 11:13:00.025783  303072 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:13:00.036420  303072 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-827n4" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:00.378574  303072 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 11:12:58.276870  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:00.277679  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:00.379868  303072 addons.go:510] duration metric: took 1.184988241s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:13:00.470687  303072 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-006978" context rescaled to 1 replicas
	I0916 11:13:01.540194  303072 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-827n4" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-827n4" not found
	I0916 11:13:01.540224  303072 pod_ready.go:82] duration metric: took 1.503769597s for pod "coredns-7c65d6cfc9-827n4" in "kube-system" namespace to be "Ready" ...
	E0916 11:13:01.540237  303072 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-827n4" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-827n4" not found
	I0916 11:13:01.540246  303072 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:00.239474  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:02.738165  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:02.148445  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:04.149068  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:02.776501  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:05.277166  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:03.545597  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:05.545656  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:08.045988  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:04.738774  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:07.238765  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:06.150520  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:08.649261  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:07.775965  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:09.776234  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:11.777149  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:10.545650  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:12.545787  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:09.738393  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:12.240211  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:11.148506  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:13.149270  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:14.276783  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:16.776281  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:14.545899  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:16.546214  303072 pod_ready.go:93] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.546237  303072 pod_ready.go:82] duration metric: took 15.005983715s for pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.546248  303072 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.550669  303072 pod_ready.go:93] pod "etcd-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.550693  303072 pod_ready.go:82] duration metric: took 4.439531ms for pod "etcd-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.550708  303072 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.554968  303072 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.554985  303072 pod_ready.go:82] duration metric: took 4.271061ms for pod "kube-apiserver-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.554994  303072 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.558971  303072 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.558989  303072 pod_ready.go:82] duration metric: took 3.989284ms for pod "kube-controller-manager-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.558999  303072 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2mcbv" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.562698  303072 pod_ready.go:93] pod "kube-proxy-2mcbv" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.562717  303072 pod_ready.go:82] duration metric: took 3.713096ms for pod "kube-proxy-2mcbv" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.562725  303072 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.944094  303072 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.944124  303072 pod_ready.go:82] duration metric: took 381.391034ms for pod "kube-scheduler-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.944135  303072 pod_ready.go:39] duration metric: took 16.918337249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:13:16.944166  303072 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:13:16.944236  303072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:13:16.955576  303072 api_server.go:72] duration metric: took 17.760737057s to wait for apiserver process to appear ...
	I0916 11:13:16.955602  303072 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:13:16.955640  303072 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0916 11:13:16.959362  303072 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0916 11:13:16.960401  303072 api_server.go:141] control plane version: v1.31.1
	I0916 11:13:16.960425  303072 api_server.go:131] duration metric: took 4.816984ms to wait for apiserver health ...
	I0916 11:13:16.960434  303072 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:13:17.147884  303072 system_pods.go:59] 8 kube-system pods found
	I0916 11:13:17.147928  303072 system_pods.go:61] "coredns-7c65d6cfc9-sc74v" [5655635d-c5e6-4043-b178-77f3df972e86] Running
	I0916 11:13:17.147935  303072 system_pods.go:61] "etcd-default-k8s-diff-port-006978" [d2e16749-ded9-4a12-9454-f66910bb9e5e] Running
	I0916 11:13:17.147948  303072 system_pods.go:61] "kindnet-njckk" [666b88e3-d80f-4bd7-b0a9-2cec72a365f0] Running
	I0916 11:13:17.147953  303072 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-006978" [81a8cd00-67af-4a7b-adcb-26da8bd1403a] Running
	I0916 11:13:17.147959  303072 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-006978" [76606226-2ee9-4661-a691-29edce3f8d0e] Running
	I0916 11:13:17.147964  303072 system_pods.go:61] "kube-proxy-2mcbv" [8fae6563-2965-49e7-96b5-b9813dc369e1] Running
	I0916 11:13:17.147969  303072 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-006978" [a69636f9-c121-4c9e-856e-600dc2fea787] Running
	I0916 11:13:17.147977  303072 system_pods.go:61] "storage-provisioner" [08708819-cf0d-4505-a1f0-5563be02bd8c] Running
	I0916 11:13:17.147985  303072 system_pods.go:74] duration metric: took 187.54485ms to wait for pod list to return data ...
	I0916 11:13:17.147996  303072 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:13:17.344472  303072 default_sa.go:45] found service account: "default"
	I0916 11:13:17.344500  303072 default_sa.go:55] duration metric: took 196.497574ms for default service account to be created ...
	I0916 11:13:17.344510  303072 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:13:17.546956  303072 system_pods.go:86] 8 kube-system pods found
	I0916 11:13:17.546985  303072 system_pods.go:89] "coredns-7c65d6cfc9-sc74v" [5655635d-c5e6-4043-b178-77f3df972e86] Running
	I0916 11:13:17.546990  303072 system_pods.go:89] "etcd-default-k8s-diff-port-006978" [d2e16749-ded9-4a12-9454-f66910bb9e5e] Running
	I0916 11:13:17.546995  303072 system_pods.go:89] "kindnet-njckk" [666b88e3-d80f-4bd7-b0a9-2cec72a365f0] Running
	I0916 11:13:17.546999  303072 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-006978" [81a8cd00-67af-4a7b-adcb-26da8bd1403a] Running
	I0916 11:13:17.547003  303072 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-006978" [76606226-2ee9-4661-a691-29edce3f8d0e] Running
	I0916 11:13:17.547006  303072 system_pods.go:89] "kube-proxy-2mcbv" [8fae6563-2965-49e7-96b5-b9813dc369e1] Running
	I0916 11:13:17.547009  303072 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-006978" [a69636f9-c121-4c9e-856e-600dc2fea787] Running
	I0916 11:13:17.547013  303072 system_pods.go:89] "storage-provisioner" [08708819-cf0d-4505-a1f0-5563be02bd8c] Running
	I0916 11:13:17.547018  303072 system_pods.go:126] duration metric: took 202.504183ms to wait for k8s-apps to be running ...
	I0916 11:13:17.547033  303072 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:13:17.547078  303072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:13:17.559046  303072 system_svc.go:56] duration metric: took 12.001345ms WaitForService to wait for kubelet
	I0916 11:13:17.559077  303072 kubeadm.go:582] duration metric: took 18.364243961s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:13:17.559097  303072 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:13:17.744389  303072 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:13:17.744416  303072 node_conditions.go:123] node cpu capacity is 8
	I0916 11:13:17.744431  303072 node_conditions.go:105] duration metric: took 185.330735ms to run NodePressure ...
	I0916 11:13:17.744442  303072 start.go:241] waiting for startup goroutines ...
	I0916 11:13:17.744448  303072 start.go:246] waiting for cluster config update ...
	I0916 11:13:17.744458  303072 start.go:255] writing updated cluster config ...
	I0916 11:13:17.744735  303072 ssh_runner.go:195] Run: rm -f paused
	I0916 11:13:17.750858  303072 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-006978" cluster and "default" namespace by default
	E0916 11:13:17.752103  303072 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	I0916 11:13:14.737902  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:16.737931  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:18.738172  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
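
	One entry above deserves a callout: immediately after the successful start at 11:13:17, the harness logs a non-fatal "kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error". The cluster itself came up healthy; the error means the kubectl binary at that path could not be executed at all, which almost always indicates an architecture or file-format mismatch on the runner. A quick diagnostic along these lines would confirm (a sketch; the path is taken from the log line):

	file /usr/local/bin/kubectl   # inspect the binary's format and target architecture
	uname -m                      # compare against the machine's architecture
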
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	308f1d6d730a2       c69fa2e9cbf5f       5 seconds ago       Running             coredns                   0                   a427aaf4dc7bf       coredns-7c65d6cfc9-sc74v
	6f355202fdbbe       6e38f40d628db       19 seconds ago      Running             storage-provisioner       0                   7bb5692bf820a       storage-provisioner
	3d2d679d3f920       12968670680f4       20 seconds ago      Running             kindnet-cni               0                   c9b1db2846501       kindnet-njckk
	947c3b3b00e44       60c005f310ff3       20 seconds ago      Running             kube-proxy                0                   d0095dc7cbd78       kube-proxy-2mcbv
	06406ac4e01c0       2e96e5913fc06       31 seconds ago      Running             etcd                      0                   6908ea2d82b0c       etcd-default-k8s-diff-port-006978
	3b1640b111894       9aa1fad941575       31 seconds ago      Running             kube-scheduler            0                   75eb18111b77e       kube-scheduler-default-k8s-diff-port-006978
	a085c20f4e6d1       175ffd71cce3d       31 seconds ago      Running             kube-controller-manager   0                   4e59876f0bb83       kube-controller-manager-default-k8s-diff-port-006978
	bdf3aa888730f       6bab7719df100       31 seconds ago      Running             kube-apiserver            0                   8f6d53f6f0c9d       kube-apiserver-default-k8s-diff-port-006978
	
	
	==> containerd <==
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.242014427Z" level=info msg="CreateContainer within sandbox \"c9b1db28465017b75476004f06e26da2011ee07762a7e0f1f1fbdb717fa9c9f3\" for container &ContainerMetadata{Name:kindnet-cni,Attempt:0,}"
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.256539912Z" level=info msg="CreateContainer within sandbox \"c9b1db28465017b75476004f06e26da2011ee07762a7e0f1f1fbdb717fa9c9f3\" for &ContainerMetadata{Name:kindnet-cni,Attempt:0,} returns container id \"3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.257579971Z" level=info msg="StartContainer for \"3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.438814926Z" level=info msg="StartContainer for \"3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3\" returns successfully"
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.682014335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:08708819-cf0d-4505-a1f0-5563be02bd8c,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.705460896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.705574112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.705591855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.705767960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.763243127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:08708819-cf0d-4505-a1f0-5563be02bd8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bb5692bf820af9b4baaf102e9bcee78709750dc811c3040ccf45c8afde3f945\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.766422496Z" level=info msg="CreateContainer within sandbox \"7bb5692bf820af9b4baaf102e9bcee78709750dc811c3040ccf45c8afde3f945\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.779827089Z" level=info msg="CreateContainer within sandbox \"7bb5692bf820af9b4baaf102e9bcee78709750dc811c3040ccf45c8afde3f945\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"6f355202fdbbeadd28797b1e19193703a5ee560921eef405227ef9296cb15a44\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.780483761Z" level=info msg="StartContainer for \"6f355202fdbbeadd28797b1e19193703a5ee560921eef405227ef9296cb15a44\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.834312428Z" level=info msg="StartContainer for \"6f355202fdbbeadd28797b1e19193703a5ee560921eef405227ef9296cb15a44\" returns successfully"
	Sep 16 11:13:04 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:04.282404342Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.036034724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sc74v,Uid:5655635d-c5e6-4043-b178-77f3df972e86,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.071775195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.071867312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.071884979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.071998114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.118805252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sc74v,Uid:5655635d-c5e6-4043-b178-77f3df972e86,Namespace:kube-system,Attempt:0,} returns sandbox id \"a427aaf4dc7bf56af8e59e4086f4715ac46e1c9db615ec7c033d1c861cc37441\""
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.121605030Z" level=info msg="CreateContainer within sandbox \"a427aaf4dc7bf56af8e59e4086f4715ac46e1c9db615ec7c033d1c861cc37441\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.134951310Z" level=info msg="CreateContainer within sandbox \"a427aaf4dc7bf56af8e59e4086f4715ac46e1c9db615ec7c033d1c861cc37441\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da\""
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.135602631Z" level=info msg="StartContainer for \"308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da\""
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.180235908Z" level=info msg="StartContainer for \"308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da\" returns successfully"
	
	
	==> coredns [308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41169 - 50662 "HINFO IN 4844345484503832019.4449023886173755708. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011300932s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-006978
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-006978
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=default-k8s-diff-port-006978
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_12_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:12:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-006978
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:13:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:13:04 +0000   Mon, 16 Sep 2024 11:12:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:13:04 +0000   Mon, 16 Sep 2024 11:12:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:13:04 +0000   Mon, 16 Sep 2024 11:12:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:13:04 +0000   Mon, 16 Sep 2024 11:12:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-006978
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f862fabd65249baa1ce0a392f842af0
	  System UUID:                15408216-8343-44b6-bf08-785f58970e8a
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-sc74v                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     21s
	  kube-system                 etcd-default-k8s-diff-port-006978                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         26s
	  kube-system                 kindnet-njckk                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-006978             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-006978    200m (2%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-proxy-2mcbv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-default-k8s-diff-port-006978             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 20s                kube-proxy       
	  Normal   NodeHasSufficientMemory  32s (x8 over 32s)  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s (x7 over 32s)  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s (x7 over 32s)  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  32s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 27s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 27s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  26s                kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    26s                kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     26s                kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           22s                node-controller  Node default-k8s-diff-port-006978 event: Registered Node default-k8s-diff-port-006978 in Controller
	
	
	==> dmesg <==
	[  +0.000003] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +1.007060] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000007] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +2.015770] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000006] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +4.191585] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000008] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000002] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +8.191312] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000007] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000002] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000002] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	
	
	==> etcd [06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09] <==
	{"level":"info","ts":"2024-09-16T11:12:49.134376Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:12:49.134546Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-09-16T11:12:49.134585Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-09-16T11:12:49.135386Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:12:49.135426Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:12:49.463407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:12:49.463466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:12:49.463496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2024-09-16T11:12:49.463511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.463520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.463536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.463550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.464413Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:49.465019Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-006978 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:12:49.465072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:49.465051Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:49.465386Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:49.465431Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:49.466272Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:49.467109Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-09-16T11:12:49.467934Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:49.468738Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:12:49.470945Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:49.471063Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:49.471097Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 11:13:20 up 55 min,  0 users,  load average: 1.97, 2.91, 2.22
	Linux default-k8s-diff-port-006978 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3] <==
	I0916 11:13:00.621671       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:13:00.621894       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0916 11:13:00.622047       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:13:00.622069       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:13:00.622081       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:13:00.948822       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:13:00.948853       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:13:00.948861       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:13:01.249787       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:13:01.249852       1 metrics.go:61] Registering metrics
	I0916 11:13:01.249923       1 controller.go:374] Syncing nftables rules
	I0916 11:13:10.948665       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:13:10.948741       1 main.go:299] handling current node
	
	
	==> kube-apiserver [bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862] <==
	I0916 11:12:51.521678       1 aggregator.go:171] initial CRD sync complete...
	I0916 11:12:51.521686       1 autoregister_controller.go:144] Starting autoregister controller
	I0916 11:12:51.521692       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0916 11:12:51.521698       1 cache.go:39] Caches are synced for autoregister controller
	E0916 11:12:51.525231       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0916 11:12:51.549075       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0916 11:12:51.549115       1 policy_source.go:224] refreshing policies
	E0916 11:12:51.574330       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I0916 11:12:51.622819       1 controller.go:615] quota admission added evaluator for: namespaces
	I0916 11:12:51.728150       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0916 11:12:52.373099       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0916 11:12:52.376786       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0916 11:12:52.376801       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0916 11:12:52.843917       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0916 11:12:52.881093       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:12:52.928507       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0916 11:12:52.934298       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0916 11:12:52.935302       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:12:52.939383       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:12:53.449184       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:12:54.158602       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0916 11:12:54.171876       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0916 11:12:54.180069       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0916 11:12:58.404034       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 11:12:59.262881       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e] <==
	I0916 11:12:58.390353       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0916 11:12:58.399656       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0916 11:12:58.400114       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-006978"
	I0916 11:12:58.401355       1 shared_informer.go:320] Caches are synced for daemon sets
	I0916 11:12:58.405515       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:12:58.409038       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:12:58.817951       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:12:58.898115       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:12:58.898137       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:12:59.210806       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-006978"
	I0916 11:12:59.426362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="1.017098138s"
	I0916 11:12:59.433522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.101603ms"
	I0916 11:12:59.433635       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="61.021µs"
	I0916 11:12:59.520137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="123.944µs"
	I0916 11:12:59.539861       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="150.746µs"
	I0916 11:13:00.058053       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="15.011457ms"
	I0916 11:13:00.126185       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.061499ms"
	I0916 11:13:00.126320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="81.97µs"
	I0916 11:13:01.081758       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.254µs"
	I0916 11:13:01.086415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="103.992µs"
	I0916 11:13:01.089510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="69.774µs"
	I0916 11:13:04.291467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-006978"
	I0916 11:13:16.102024       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="80.781µs"
	I0916 11:13:16.119318       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.696792ms"
	I0916 11:13:16.119456       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="86.29µs"
	
	
	==> kube-proxy [947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce] <==
	I0916 11:13:00.253825       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:13:00.407401       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0916 11:13:00.407487       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:13:00.429078       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:13:00.429182       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:13:00.432606       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:13:00.434317       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:13:00.434355       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:13:00.436922       1 config.go:199] "Starting service config controller"
	I0916 11:13:00.436961       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:13:00.436992       1 config.go:328] "Starting node config controller"
	I0916 11:13:00.436998       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:13:00.437231       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:13:00.437259       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:13:00.537105       1 shared_informer.go:320] Caches are synced for node config
	I0916 11:13:00.537113       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:13:00.538249       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5] <==
	W0916 11:12:51.533950       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:12:51.533967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534038       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:51.534052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534137       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:51.534151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:51.534236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:12:51.534355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:12:51.534374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534388       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:12:51.534396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.339002       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:12:52.339046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.406598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:52.406652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.413957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:52.413997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.416027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:12:52.416071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.594671       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:12:52.594714       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:12:54.631845       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:12:59.521986    1604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5655635d-c5e6-4043-b178-77f3df972e86-config-volume\") pod \"coredns-7c65d6cfc9-sc74v\" (UID: \"5655635d-c5e6-4043-b178-77f3df972e86\") " pod="kube-system/coredns-7c65d6cfc9-sc74v"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:12:59.531287    1604 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.832137    1604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\": failed to find network info for sandbox \"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\""
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.832228    1604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\": failed to find network info for sandbox \"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\"" pod="kube-system/coredns-7c65d6cfc9-827n4"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.832258    1604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\": failed to find network info for sandbox \"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\"" pod="kube-system/coredns-7c65d6cfc9-827n4"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.832331    1604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-827n4_kube-system(950246f0-ecc4-4b7c-b89b-09a027a772d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-827n4_kube-system(950246f0-ecc4-4b7c-b89b-09a027a772d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\\\": failed to find network info for sandbox \\\"f5ca2f009c6ea77d26e1dcde0a0b7e597e232874d34e229eaa42385e8160feb5\\\"\"" pod="kube-system/coredns-7c65d6cfc9-827n4" podUID="950246f0-ecc4-4b7c-b89b-09a027a772d0"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.853684    1604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\": failed to find network info for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\""
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.853777    1604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\": failed to find network info for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\"" pod="kube-system/coredns-7c65d6cfc9-sc74v"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.853808    1604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\": failed to find network info for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\"" pod="kube-system/coredns-7c65d6cfc9-sc74v"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.853861    1604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-sc74v_kube-system(5655635d-c5e6-4043-b178-77f3df972e86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-sc74v_kube-system(5655635d-c5e6-4043-b178-77f3df972e86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\\\": failed to find network info for sandbox \\\"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\\\"\"" pod="kube-system/coredns-7c65d6cfc9-sc74v" podUID="5655635d-c5e6-4043-b178-77f3df972e86"
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.126731    1604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/950246f0-ecc4-4b7c-b89b-09a027a772d0-config-volume\") pod \"950246f0-ecc4-4b7c-b89b-09a027a772d0\" (UID: \"950246f0-ecc4-4b7c-b89b-09a027a772d0\") "
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.126802    1604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f8xl\" (UniqueName: \"kubernetes.io/projected/950246f0-ecc4-4b7c-b89b-09a027a772d0-kube-api-access-6f8xl\") pod \"950246f0-ecc4-4b7c-b89b-09a027a772d0\" (UID: \"950246f0-ecc4-4b7c-b89b-09a027a772d0\") "
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.127337    1604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/950246f0-ecc4-4b7c-b89b-09a027a772d0-config-volume" (OuterVolumeSpecName: "config-volume") pod "950246f0-ecc4-4b7c-b89b-09a027a772d0" (UID: "950246f0-ecc4-4b7c-b89b-09a027a772d0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.131029    1604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/950246f0-ecc4-4b7c-b89b-09a027a772d0-kube-api-access-6f8xl" (OuterVolumeSpecName: "kube-api-access-6f8xl") pod "950246f0-ecc4-4b7c-b89b-09a027a772d0" (UID: "950246f0-ecc4-4b7c-b89b-09a027a772d0"). InnerVolumeSpecName "kube-api-access-6f8xl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.227037    1604 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6f8xl\" (UniqueName: \"kubernetes.io/projected/950246f0-ecc4-4b7c-b89b-09a027a772d0-kube-api-access-6f8xl\") on node \"default-k8s-diff-port-006978\" DevicePath \"\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.227076    1604 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/950246f0-ecc4-4b7c-b89b-09a027a772d0-config-volume\") on node \"default-k8s-diff-port-006978\" DevicePath \"\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.428505    1604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/08708819-cf0d-4505-a1f0-5563be02bd8c-tmp\") pod \"storage-provisioner\" (UID: \"08708819-cf0d-4505-a1f0-5563be02bd8c\") " pod="kube-system/storage-provisioner"
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.428562    1604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4hzl\" (UniqueName: \"kubernetes.io/projected/08708819-cf0d-4505-a1f0-5563be02bd8c-kube-api-access-h4hzl\") pod \"storage-provisioner\" (UID: \"08708819-cf0d-4505-a1f0-5563be02bd8c\") " pod="kube-system/storage-provisioner"
	Sep 16 11:13:01 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:01.069502    1604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2mcbv" podStartSLOduration=2.069481573 podStartE2EDuration="2.069481573s" podCreationTimestamp="2024-09-16 11:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:13:01.069398727 +0000 UTC m=+7.138484871" watchObservedRunningTime="2024-09-16 11:13:01.069481573 +0000 UTC m=+7.138567716"
	Sep 16 11:13:01 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:01.098813    1604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.098790323 podStartE2EDuration="1.098790323s" podCreationTimestamp="2024-09-16 11:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:13:01.09871627 +0000 UTC m=+7.167802413" watchObservedRunningTime="2024-09-16 11:13:01.098790323 +0000 UTC m=+7.167876468"
	Sep 16 11:13:01 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:01.108472    1604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-njckk" podStartSLOduration=2.108444312 podStartE2EDuration="2.108444312s" podCreationTimestamp="2024-09-16 11:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:13:01.108144012 +0000 UTC m=+7.177230167" watchObservedRunningTime="2024-09-16 11:13:01.108444312 +0000 UTC m=+7.177530457"
	Sep 16 11:13:02 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:02.037418    1604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="950246f0-ecc4-4b7c-b89b-09a027a772d0" path="/var/lib/kubelet/pods/950246f0-ecc4-4b7c-b89b-09a027a772d0/volumes"
	Sep 16 11:13:04 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:04.281799    1604 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 11:13:04 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:04.282674    1604 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 11:13:16 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:16.113460    1604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sc74v" podStartSLOduration=17.113430858 podStartE2EDuration="17.113430858s" podCreationTimestamp="2024-09-16 11:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:13:16.102461271 +0000 UTC m=+22.171547414" watchObservedRunningTime="2024-09-16 11:13:16.113430858 +0000 UTC m=+22.182517002"
	
	
	==> storage-provisioner [6f355202fdbbeadd28797b1e19193703a5ee560921eef405227ef9296cb15a44] <==
	I0916 11:13:00.842140       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:13:00.849696       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:13:00.849745       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:13:00.858720       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:13:00.858874       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-006978_6e2d25a8-69d9-42c8-bc96-a993683990b2!
	I0916 11:13:00.860283       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48271e48-bb5a-477f-91cc-b9e1963cd811", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-006978_6e2d25a8-69d9-42c8-bc96-a993683990b2 became leader
	I0916 11:13:00.959514       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-006978_6e2d25a8-69d9-42c8-bc96-a993683990b2!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-006978 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-006978 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (511.231µs)
helpers_test.go:263: kubectl --context default-k8s-diff-port-006978 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (3.50s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-006978 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-006978 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-006978 describe deploy/metrics-server -n kube-system: fork/exec /usr/local/bin/kubectl: exec format error (522.571µs)
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-006978 describe deploy/metrics-server -n kube-system": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-006978
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-006978:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751",
	        "Created": "2024-09-16T11:12:40.853683512Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 303954,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:12:40.986877852Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/hostname",
	        "HostsPath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/hosts",
	        "LogPath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751-json.log",
	        "Name": "/default-k8s-diff-port-006978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-006978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-006978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-006978",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-006978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-006978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-006978",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-006978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "962cb6cc39d91026d44c9cf4daa9dd57b47deeb7041f7aa51db91e46b312ce38",
	            "SandboxKey": "/var/run/docker/netns/962cb6cc39d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-006978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "77357235afcef96415382e78c67fcc53123318fac9325f81acae0f265d8eb86e",
	                    "EndpointID": "b4935c247e07031b1781430c46c0d3d9e9e0bcc8919b1161f872a08294783641",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-006978",
	                        "92220cda3aab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
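
The Ports map in the inspect dump above is where the ephemeral host ports (33088-33092) for this container live, and it is what later log lines read back via docker container inspect templates. A minimal Go sketch, assuming only the docker CLI and the standard library (this is not minikube's code; the container name and port are the ones from the dump):

// port_lookup.go: hypothetical sketch of recovering the host port bound to a
// container port from `docker inspect` JSON like the dump above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type portBinding struct {
	HostIp   string
	HostPort string
}

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func hostPort(container, containerPort string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry // docker inspect always emits a JSON array
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("no such container: %s", container)
	}
	bindings := entries[0].NetworkSettings.Ports[containerPort]
	if len(bindings) == 0 {
		return "", fmt.Errorf("no binding for %s", containerPort)
	}
	return bindings[0].HostPort, nil
}

func main() {
	p, err := hostPort("default-k8s-diff-port-006978", "22/tcp")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ssh reachable on 127.0.0.1:" + p) // 33088 in the dump above
}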
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-006978 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-006978 logs -n 25: (1.099381192s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p force-systemd-flag-917705                           | force-systemd-flag-917705    | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:10 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| ssh     | cert-options-840054 ssh                                | cert-options-840054          | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-840054 -- sudo                         | cert-options-840054          | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-840054                                 | cert-options-840054          | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-349453             | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-349453                  | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-371039        | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-371039                              | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-371039             | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-021107                              | cert-expiration-021107       | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-021107                              | cert-expiration-021107       | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	| start   | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-679624            | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-679624                 | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911    | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	| delete  | -p                                                     | disable-driver-mounts-852440 | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | disable-driver-mounts-852440                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:13 UTC |
	|         | default-k8s-diff-port-006978                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-006978  | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:13 UTC | 16 Sep 24 11:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:12:33
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:12:33.188304  303072 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:12:33.188581  303072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:12:33.188593  303072 out.go:358] Setting ErrFile to fd 2...
	I0916 11:12:33.188598  303072 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:12:33.188783  303072 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:12:33.189413  303072 out.go:352] Setting JSON to false
	I0916 11:12:33.190969  303072 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3297,"bootTime":1726481856,"procs":383,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:12:33.191086  303072 start.go:139] virtualization: kvm guest
	I0916 11:12:33.193702  303072 out.go:177] * [default-k8s-diff-port-006978] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:12:33.195341  303072 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:12:33.195410  303072 notify.go:220] Checking for updates...
	I0916 11:12:33.198592  303072 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:12:33.199962  303072 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:12:33.201287  303072 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:12:33.202689  303072 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:12:33.204109  303072 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:12:33.206233  303072 config.go:182] Loaded profile config "embed-certs-679624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:33.206402  303072 config.go:182] Loaded profile config "no-preload-349453": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:33.206535  303072 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:12:33.206656  303072 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:12:33.233320  303072 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:12:33.233448  303072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:12:33.298402  303072 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:12:33.288078298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:12:33.298570  303072 docker.go:318] overlay module found
	I0916 11:12:33.301953  303072 out.go:177] * Using the docker driver based on user configuration
	I0916 11:12:33.303332  303072 start.go:297] selected driver: docker
	I0916 11:12:33.303349  303072 start.go:901] validating driver "docker" against <nil>
	I0916 11:12:33.303362  303072 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:12:33.304321  303072 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:12:33.369824  303072 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:12:33.356912728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:12:33.370078  303072 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:12:33.370327  303072 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:12:33.372698  303072 out.go:177] * Using Docker driver with root privileges
	I0916 11:12:33.374242  303072 cni.go:84] Creating CNI manager for ""
	I0916 11:12:33.374302  303072 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:12:33.374313  303072 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:12:33.374391  303072 start.go:340] cluster config:
	{Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:12:33.375879  303072 out.go:177] * Starting "default-k8s-diff-port-006978" primary control-plane node in "default-k8s-diff-port-006978" cluster
	I0916 11:12:33.377330  303072 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:12:33.378788  303072 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:12:33.380265  303072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:12:33.380313  303072 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 11:12:33.380331  303072 cache.go:56] Caching tarball of preloaded images
	I0916 11:12:33.380387  303072 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:12:33.380431  303072 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 11:12:33.380447  303072 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 11:12:33.380593  303072 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/config.json ...
	I0916 11:12:33.380632  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/config.json: {Name:mk8dc034cf5d1663f163d44cacb1db0a697f761d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:12:33.405013  303072 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:12:33.405039  303072 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:12:33.405136  303072 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:12:33.405159  303072 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:12:33.405165  303072 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:12:33.405174  303072 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:12:33.405185  303072 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:12:33.466107  303072 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:12:33.466152  303072 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:12:33.466198  303072 start.go:360] acquireMachinesLock for default-k8s-diff-port-006978: {Name:mke54f99fcd9e320f7c2bc8102220e65af70efd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:12:33.466306  303072 start.go:364] duration metric: took 80.59µs to acquireMachinesLock for "default-k8s-diff-port-006978"
	I0916 11:12:33.466338  303072 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:12:33.466439  303072 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:12:30.238757  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:32.239283  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:30.649566  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:33.149628  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:33.273175  283294 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:35.769915  283294 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:33.469116  303072 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0916 11:12:33.469464  303072 start.go:159] libmachine.API.Create for "default-k8s-diff-port-006978" (driver="docker")
	I0916 11:12:33.469509  303072 client.go:168] LocalClient.Create starting
	I0916 11:12:33.469613  303072 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 11:12:33.469669  303072 main.go:141] libmachine: Decoding PEM data...
	I0916 11:12:33.469693  303072 main.go:141] libmachine: Parsing certificate...
	I0916 11:12:33.469766  303072 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 11:12:33.469792  303072 main.go:141] libmachine: Decoding PEM data...
	I0916 11:12:33.469803  303072 main.go:141] libmachine: Parsing certificate...
	I0916 11:12:33.470217  303072 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-006978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:12:33.489304  303072 cli_runner.go:211] docker network inspect default-k8s-diff-port-006978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:12:33.489379  303072 network_create.go:284] running [docker network inspect default-k8s-diff-port-006978] to gather additional debugging logs...
	I0916 11:12:33.489410  303072 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-006978
	W0916 11:12:33.511223  303072 cli_runner.go:211] docker network inspect default-k8s-diff-port-006978 returned with exit code 1
	I0916 11:12:33.511273  303072 network_create.go:287] error running [docker network inspect default-k8s-diff-port-006978]: docker network inspect default-k8s-diff-port-006978: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-006978 not found
	I0916 11:12:33.511289  303072 network_create.go:289] output of [docker network inspect default-k8s-diff-port-006978]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-006978 not found
	
	** /stderr **
	I0916 11:12:33.511384  303072 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:12:33.531001  303072 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c95c64bb41bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bd:76:ab:c4} reservation:<nil>}
	I0916 11:12:33.532177  303072 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fad43aa9929b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:3f:61:fd} reservation:<nil>}
	I0916 11:12:33.533347  303072 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-49585fce923a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ad:45:94:54} reservation:<nil>}
	I0916 11:12:33.534648  303072 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bd36e0}
	I0916 11:12:33.534681  303072 network_create.go:124] attempt to create docker network default-k8s-diff-port-006978 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0916 11:12:33.534740  303072 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-006978 default-k8s-diff-port-006978
	I0916 11:12:33.610090  303072 network_create.go:108] docker network default-k8s-diff-port-006978 192.168.76.0/24 created
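
The three "skipping subnet" lines above, followed by "using free private subnet", show the search that picked 192.168.76.0/24: candidate /24s are probed in order until one collides with no existing bridge. A hypothetical Go sketch of that selection logic; the step of 9 between candidates is an observation from this run, not a documented contract:

// subnet_pick.go: hypothetical sketch of the free-subnet search visible above.
package main

import "fmt"

// firstFreeSubnet returns the first candidate /24 not already in use.
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third < 255; third += 9 { // 49, 58, 67, 76, ... as in the log
		c := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[c] {
			return c
		}
	}
	return ""
}

func main() {
	// Subnets of the existing bridges from the log lines above.
	taken := map[string]bool{
		"192.168.49.0/24": true, // br-c95c64bb41bd
		"192.168.58.0/24": true, // br-fad43aa9929b
		"192.168.67.0/24": true, // br-49585fce923a
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.76.0/24, matching the log
}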
	I0916 11:12:33.610127  303072 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-006978" container
	I0916 11:12:33.610214  303072 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:12:33.632805  303072 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-006978 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-006978 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:12:33.655257  303072 oci.go:103] Successfully created a docker volume default-k8s-diff-port-006978
	I0916 11:12:33.655345  303072 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-006978-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-006978 --entrypoint /usr/bin/test -v default-k8s-diff-port-006978:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:12:34.731781  303072 cli_runner.go:217] Completed: docker run --rm --name default-k8s-diff-port-006978-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-006978 --entrypoint /usr/bin/test -v default-k8s-diff-port-006978:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (1.076336693s)
	I0916 11:12:34.731816  303072 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-006978
	I0916 11:12:34.731846  303072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:12:34.731872  303072 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:12:34.731946  303072 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-006978:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 11:12:34.739277  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:37.239310  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:35.149722  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:37.650375  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:37.770648  283294 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace has status "Ready":"True"
	I0916 11:12:37.770673  283294 pod_ready.go:82] duration metric: took 11.007535908s for pod "kube-scheduler-old-k8s-version-371039" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:37.770686  283294 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace to be "Ready" ...
	I0916 11:12:39.777546  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:40.784380  303072 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-006978:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (6.052385534s)
	I0916 11:12:40.784418  303072 kic.go:203] duration metric: took 6.052542506s to extract preloaded images to volume ...
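
The sequence above (docker volume create, a throwaway /usr/bin/test sidecar, then a tar container) seeds the node's /var volume from the preload tarball before the real node container exists. A hedged Go sketch of the extraction step, shelling out to the docker CLI as the cli_runner lines do; the tarball path passed in main is a stand-in for the long cache path in the log:

// preload_extract.go: hypothetical sketch of the volume-seeding step logged
// above. A disposable container mounts the lz4 preload tarball read-only and
// the named volume at /extractDir, then untars the images into the volume.
package main

import (
	"log"
	"os/exec"
)

func seedVolume(volume, tarball, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	cmd.Stderr = log.Writer() // surface tar errors in the test log
	return cmd.Run()
}

func main() {
	// Stand-in arguments; the real run used the kicbase image and the
	// preloaded-images-k8s-v18-v1.31.1-containerd tarball named above.
	err := seedVolume("default-k8s-diff-port-006978",
		"/tmp/preloaded-images.tar.lz4",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644")
	if err != nil {
		log.Fatal(err)
	}
}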
	W0916 11:12:40.784564  303072 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:12:40.784661  303072 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:12:40.837569  303072 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-006978 --name default-k8s-diff-port-006978 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-006978 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-006978 --network default-k8s-diff-port-006978 --ip 192.168.76.2 --volume default-k8s-diff-port-006978:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
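
The docker run invocation above creates the node container in one shot. A hypothetical Go reconstruction of how such an argument list can be assembled (names mirror this log, not minikube's internal API); the --publish=127.0.0.1::PORT form asks Docker for an ephemeral loopback-only host port, which is exactly where the 33088-33092 bindings in the earlier inspect dump come from:

// node_run_args.go: hypothetical sketch assembling a docker run command like
// the one logged above for the kic node container.
package main

import (
	"fmt"
	"strings"
)

func runArgs(name, network, ip, image string, apiPort int) []string {
	args := []string{
		"run", "-d", "-t", "--privileged",
		"--security-opt", "seccomp=unconfined",
		"--security-opt", "apparmor=unconfined",
		"--tmpfs", "/tmp", "--tmpfs", "/run",
		"-v", "/lib/modules:/lib/modules:ro",
		"--hostname", name, "--name", name,
		"--network", network, "--ip", ip,
		"--volume", name + ":/var",
		"--memory=2200mb", "--cpus=2",
		"-e", "container=docker",
		"--expose", fmt.Sprint(apiPort),
	}
	// Publish each service port on an ephemeral loopback-only host port.
	for _, p := range []int{apiPort, 22, 2376, 5000, 32443} {
		args = append(args, fmt.Sprintf("--publish=127.0.0.1::%d", p))
	}
	return append(args, image)
}

func main() {
	fmt.Println("docker " + strings.Join(runArgs(
		"default-k8s-diff-port-006978",
		"default-k8s-diff-port-006978",
		"192.168.76.2",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644",
		8444), " "))
}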
	I0916 11:12:41.151535  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Running}}
	I0916 11:12:41.171308  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:41.190871  303072 cli_runner.go:164] Run: docker exec default-k8s-diff-port-006978 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:12:41.233480  303072 oci.go:144] the created container "default-k8s-diff-port-006978" has a running status.
	I0916 11:12:41.233522  303072 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa...
	I0916 11:12:41.414049  303072 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:12:41.436388  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:41.455455  303072 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:12:41.455481  303072 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-006978 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:12:41.511490  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:41.540142  303072 machine.go:93] provisionDockerMachine start ...
	I0916 11:12:41.540258  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:41.563377  303072 main.go:141] libmachine: Using SSH client type: native
	I0916 11:12:41.563597  303072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:12:41.563607  303072 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:12:41.821654  303072 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-006978
	
	I0916 11:12:41.821689  303072 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-006978"
	I0916 11:12:41.821753  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:41.840337  303072 main.go:141] libmachine: Using SSH client type: native
	I0916 11:12:41.840544  303072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:12:41.840564  303072 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-006978 && echo "default-k8s-diff-port-006978" | sudo tee /etc/hostname
	I0916 11:12:41.992032  303072 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-006978
	
	I0916 11:12:41.992120  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.009447  303072 main.go:141] libmachine: Using SSH client type: native
	I0916 11:12:42.009695  303072 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33088 <nil> <nil>}
	I0916 11:12:42.009733  303072 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-006978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-006978/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-006978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:12:42.148459  303072 main.go:141] libmachine: SSH cmd err, output: <nil>: 
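
Each "Using SSH client type: native" step above amounts to dialing the loopback port mapped to the container's 22/tcp (33088 here) with the generated id_rsa key and running one command. A minimal sketch using golang.org/x/crypto/ssh, offered as an assumption about the mechanism rather than minikube's actual client code:

// ssh_probe.go: hypothetical sketch of the native SSH step logged above.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port are the ones appearing in this log.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test rig only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33088", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out) // expect: default-k8s-diff-port-006978
}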
	I0916 11:12:42.148487  303072 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:12:42.148510  303072 ubuntu.go:177] setting up certificates
	I0916 11:12:42.148538  303072 provision.go:84] configureAuth start
	I0916 11:12:42.148598  303072 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-006978
	I0916 11:12:42.166372  303072 provision.go:143] copyHostCerts
	I0916 11:12:42.166428  303072 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:12:42.166436  303072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:12:42.166501  303072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:12:42.166586  303072 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:12:42.166595  303072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:12:42.166621  303072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:12:42.166674  303072 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:12:42.166682  303072 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:12:42.166703  303072 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:12:42.166753  303072 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-006978 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-006978 localhost minikube]
	I0916 11:12:42.306401  303072 provision.go:177] copyRemoteCerts
	I0916 11:12:42.306461  303072 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:12:42.306495  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.323490  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:42.420814  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:12:42.443662  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0916 11:12:42.466807  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 11:12:42.490564  303072 provision.go:87] duration metric: took 342.007302ms to configureAuth
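configureAuth generates a server certificate whose SAN list mixes IPs and DNS names ([127.0.0.1 192.168.76.2 default-k8s-diff-port-006978 localhost minikube] above). A sketch of how such a template can be assembled with crypto/x509, assuming RSA keys and self-signing only to stay self-contained; the real flow signs against the CA key named in the log:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerTemplate builds a server-cert template with the mixed
	// IP/DNS SAN list seen in the log; CA signing is elided.
	func newServerTemplate(org string, sans []string) *x509.Certificate {
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{org}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s below
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, san := range sans {
			if ip := net.ParseIP(san); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, san)
			}
		}
		return tmpl
	}

	func main() {
		tmpl := newServerTemplate("jenkins.default-k8s-diff-port-006978",
			[]string{"127.0.0.1", "192.168.76.2", "default-k8s-diff-port-006978", "localhost", "minikube"})
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		// self-signed here purely to keep the sketch runnable
		_, _ = x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	}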
	I0916 11:12:42.490593  303072 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:12:42.490820  303072 config.go:182] Loaded profile config "default-k8s-diff-port-006978": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:42.490837  303072 machine.go:96] duration metric: took 950.665124ms to provisionDockerMachine
	I0916 11:12:42.490846  303072 client.go:171] duration metric: took 9.021328095s to LocalClient.Create
	I0916 11:12:42.490871  303072 start.go:167] duration metric: took 9.02141907s to libmachine.API.Create "default-k8s-diff-port-006978"
	I0916 11:12:42.490884  303072 start.go:293] postStartSetup for "default-k8s-diff-port-006978" (driver="docker")
	I0916 11:12:42.490896  303072 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:12:42.490957  303072 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:12:42.491009  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.508314  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:42.606294  303072 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:12:42.609598  303072 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:12:42.609636  303072 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:12:42.609645  303072 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:12:42.609651  303072 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:12:42.609662  303072 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:12:42.609720  303072 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:12:42.609807  303072 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:12:42.609896  303072 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:12:42.618062  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:12:42.641241  303072 start.go:296] duration metric: took 150.341833ms for postStartSetup
	I0916 11:12:42.641601  303072 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-006978
	I0916 11:12:42.660638  303072 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/config.json ...
	I0916 11:12:42.660910  303072 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:12:42.660959  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.681352  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:42.773016  303072 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:12:42.777639  303072 start.go:128] duration metric: took 9.311183024s to createHost
	I0916 11:12:42.777671  303072 start.go:83] releasing machines lock for "default-k8s-diff-port-006978", held for 9.311348572s
	I0916 11:12:42.777730  303072 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-006978
	I0916 11:12:42.794671  303072 ssh_runner.go:195] Run: cat /version.json
	I0916 11:12:42.794729  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.794734  303072 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:12:42.794809  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:42.812760  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:42.812961  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:42.983965  303072 ssh_runner.go:195] Run: systemctl --version
	I0916 11:12:42.988168  303072 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:12:42.992468  303072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:12:43.016957  303072 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:12:43.017041  303072 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:12:43.045266  303072 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
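Before choosing its own CNI, minikube neutralizes pre-existing bridge/podman configs by renaming them with a .mk_disabled suffix (the find/mv pipeline above). The equivalent rename pass, sketched in Go against a local directory:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// disableBridgeConfs renames bridge/podman CNI configs so the runtime
	// ignores them, mirroring the find -exec mv pipeline in the log.
	func disableBridgeConfs(dir string) ([]string, error) {
		var disabled []string
		for _, pat := range []string{"*bridge*", "*podman*"} {
			matches, err := filepath.Glob(filepath.Join(dir, pat))
			if err != nil {
				return nil, err
			}
			for _, m := range matches {
				if filepath.Ext(m) == ".mk_disabled" {
					continue // already disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					return nil, err
				}
				disabled = append(disabled, m)
			}
		}
		return disabled, nil
	}

	func main() {
		disabled, err := disableBridgeConfs("/etc/cni/net.d")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("disabled:", disabled)
	}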
	I0916 11:12:43.045297  303072 start.go:495] detecting cgroup driver to use...
	I0916 11:12:43.045326  303072 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:12:43.045377  303072 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:12:43.057420  303072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:12:43.068346  303072 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:12:43.068404  303072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:12:43.081261  303072 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:12:43.094734  303072 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:12:43.175775  303072 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:12:39.737995  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:41.742098  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:43.261975  303072 docker.go:233] disabling docker service ...
	I0916 11:12:43.262038  303072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:12:43.282995  303072 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:12:43.295522  303072 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:12:43.379559  303072 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:12:43.459884  303072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:12:43.472400  303072 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:12:43.487862  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:12:43.497717  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:12:43.507197  303072 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:12:43.507271  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:12:43.516769  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:12:43.526040  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:12:43.535489  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:12:43.545566  303072 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:12:43.554727  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:12:43.564652  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:12:43.574313  303072 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 11:12:43.584261  303072 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:12:43.592172  303072 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:12:43.600336  303072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:43.675960  303072 ssh_runner.go:195] Run: sudo systemctl restart containerd
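The sed commands above rewrite /etc/containerd/config.toml in place: SystemdCgroup is forced to false (the cgroupfs driver) and the sandbox image is pinned to pause:3.10 before the daemon restart. A Go sketch of those two substitutions, assuming the stock config layout:

	package main

	import (
		"os"
		"regexp"
	)

	// reconfigureContainerd applies the same two edits as the sed commands
	// in the log: disable SystemdCgroup and pin the pause image.
	func reconfigureContainerd(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
			ReplaceAll(data, []byte(`${1}SystemdCgroup = false`))
		out = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
			ReplaceAll(out, []byte(`${1}sandbox_image = "registry.k8s.io/pause:3.10"`))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		_ = reconfigureContainerd("/etc/containerd/config.toml")
	}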
	I0916 11:12:43.777200  303072 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:12:43.777376  303072 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:12:43.781384  303072 start.go:563] Will wait 60s for crictl version
	I0916 11:12:43.781440  303072 ssh_runner.go:195] Run: which crictl
	I0916 11:12:43.784718  303072 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:12:43.817809  303072 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:12:43.817866  303072 ssh_runner.go:195] Run: containerd --version
	I0916 11:12:43.839994  303072 ssh_runner.go:195] Run: containerd --version
	I0916 11:12:43.868789  303072 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:12:40.149138  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:42.149777  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:44.150077  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:43.870264  303072 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-006978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:12:43.887693  303072 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0916 11:12:43.891552  303072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:12:43.902196  303072 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:12:43.902316  303072 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:12:43.902363  303072 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:12:43.933503  303072 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:12:43.933524  303072 containerd.go:534] Images already preloaded, skipping extraction
	I0916 11:12:43.933574  303072 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:12:43.966712  303072 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:12:43.966739  303072 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:12:43.966750  303072 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.31.1 containerd true true} ...
	I0916 11:12:43.966868  303072 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-006978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:12:43.966924  303072 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:12:44.000346  303072 cni.go:84] Creating CNI manager for ""
	I0916 11:12:44.000368  303072 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:12:44.000378  303072 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:12:44.000397  303072 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-006978 NodeName:default-k8s-diff-port-006978 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:12:44.000529  303072 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-006978"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:12:44.000585  303072 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:12:44.009158  303072 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:12:44.009228  303072 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:12:44.017370  303072 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0916 11:12:44.034166  303072 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:12:44.050711  303072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2182 bytes)
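The kubeadm.yaml just copied is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick parse check for a file like that, sketched with gopkg.in/yaml.v3; this is an illustration, not something minikube runs:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	// listKinds decodes every YAML document in the kubeadm config and
	// collects its kind, failing fast on a syntax error.
	func listKinds(path string) ([]string, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()
		dec := yaml.NewDecoder(f)
		var kinds []string
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					return kinds, nil // all documents parsed
				}
				return nil, err
			}
			kind, _ := doc["kind"].(string)
			kinds = append(kinds, kind)
		}
	}

	func main() {
		kinds, err := listKinds("/var/tmp/minikube/kubeadm.yaml")
		fmt.Println(kinds, err)
	}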
	I0916 11:12:44.068227  303072 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:12:44.071437  303072 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:12:44.081858  303072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:44.151949  303072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:12:44.165524  303072 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978 for IP: 192.168.76.2
	I0916 11:12:44.165552  303072 certs.go:194] generating shared ca certs ...
	I0916 11:12:44.165574  303072 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:44.165741  303072 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:12:44.165796  303072 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:12:44.165809  303072 certs.go:256] generating profile certs ...
	I0916 11:12:44.165876  303072 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.key
	I0916 11:12:44.165895  303072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt with IP's: []
	I0916 11:12:44.646752  303072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt ...
	I0916 11:12:44.646790  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: {Name:mk5fe57391c71a635bc2664646b46ecf8e7b30f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:44.646990  303072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.key ...
	I0916 11:12:44.647008  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.key: {Name:mk8729c4f50d0dff1d65e22e9e0317a12cedc4f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:44.647122  303072 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key.9826bbf6
	I0916 11:12:44.647147  303072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt.9826bbf6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0916 11:12:44.927657  303072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt.9826bbf6 ...
	I0916 11:12:44.927689  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt.9826bbf6: {Name:mk507ad25443e7441acfbd74f84b0e53e00a318e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:44.927907  303072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key.9826bbf6 ...
	I0916 11:12:44.927928  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key.9826bbf6: {Name:mk9141b6cbd16a3cbc7444d9c738b092ec418bec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:44.928023  303072 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt.9826bbf6 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt
	I0916 11:12:44.928103  303072 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key.9826bbf6 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key
	I0916 11:12:44.928163  303072 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.key
	I0916 11:12:44.928181  303072 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.crt with IP's: []
	I0916 11:12:45.016612  303072 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.crt ...
	I0916 11:12:45.016645  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.crt: {Name:mkc1eab85fbe839e33e53386182e4a6afedec155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:45.016821  303072 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.key ...
	I0916 11:12:45.016835  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.key: {Name:mk2b6e4ebf261029f43772640bda54fcc5f4921e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:45.017014  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:12:45.017057  303072 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:12:45.017069  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:12:45.017097  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:12:45.017125  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:12:45.017150  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:12:45.017223  303072 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:12:45.018092  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:12:45.044144  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:12:45.069666  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:12:45.093991  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:12:45.118079  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 11:12:45.141384  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:12:45.165248  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:12:45.188695  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:12:45.211474  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:12:45.234691  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:12:45.260049  303072 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:12:45.285743  303072 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:12:45.305003  303072 ssh_runner.go:195] Run: openssl version
	I0916 11:12:45.310350  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:12:45.319899  303072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:12:45.323233  303072 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:12:45.323295  303072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:12:45.330231  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:12:45.339365  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:12:45.348101  303072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:45.351431  303072 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:45.351477  303072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:12:45.358772  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:12:45.368221  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:12:45.377642  303072 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:12:45.381484  303072 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:12:45.381557  303072 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:12:45.388377  303072 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
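Each CA lands in /usr/share/ca-certificates and is then linked into /etc/ssl/certs under its OpenSSL subject hash (3ec20f2e.0, b5213941.0, 51391683.0 above). A sketch that delegates the hash to the openssl binary, since reimplementing -subject_hash by hand is error-prone:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkByHash asks openssl for the subject hash of certPath and creates
	// the <hash>.0 symlink in /etc/ssl/certs, like the commands in the log.
	func linkByHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // replace a stale link, if any
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}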
	I0916 11:12:45.397630  303072 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:12:45.400839  303072 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
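That failed stat is the first-start heuristic: a missing apiserver-kubelet-client.crt means there is no previous cluster to clean up. The same check done locally (the log's version runs stat over SSH):

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
	)

	// likelyFirstStart reports whether the kubelet client cert is missing,
	// which the log treats as "likely first start".
	func likelyFirstStart() (bool, error) {
		_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if errors.Is(err, fs.ErrNotExist) {
			return true, nil
		}
		return false, err
	}

	func main() {
		first, err := likelyFirstStart()
		fmt.Println(first, err)
	}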
	I0916 11:12:45.400901  303072 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:12:45.400985  303072 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:12:45.401042  303072 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:12:45.435932  303072 cri.go:89] found id: ""
	I0916 11:12:45.435998  303072 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:12:45.444621  303072 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:12:45.453465  303072 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:12:45.453540  303072 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:12:45.461982  303072 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:12:45.462009  303072 kubeadm.go:157] found existing configuration files:
	
	I0916 11:12:45.462058  303072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0916 11:12:45.471201  303072 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:12:45.471266  303072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:12:45.479819  303072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0916 11:12:45.488490  303072 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:12:45.488561  303072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:12:45.497296  303072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0916 11:12:45.505691  303072 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:12:45.505752  303072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:12:45.514164  303072 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0916 11:12:45.522589  303072 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:12:45.522664  303072 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
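The four grep/rm pairs above are stale-config cleanup: any kubeconfig that does not point at the expected endpoint is deleted so kubeadm regenerates it. Condensed into one loop over local files (a sketch; the real runner works over SSH):

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// cleanStaleConfs removes kubeconfigs that don't reference the expected
	// control-plane endpoint, mirroring the grep/rm sequence in the log.
	func cleanStaleConfs(endpoint string) error {
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + f
			data, err := os.ReadFile(path)
			if err != nil {
				continue // missing file: nothing to clean
			}
			if !bytes.Contains(data, []byte(endpoint)) {
				if err := os.Remove(path); err != nil {
					return err
				}
				fmt.Println("removed stale", path)
			}
		}
		return nil
	}

	func main() {
		_ = cleanStaleConfs("https://control-plane.minikube.internal:8444")
	}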
	I0916 11:12:45.531758  303072 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:12:45.570283  303072 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:12:45.570392  303072 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:12:45.587093  303072 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:12:45.587194  303072 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:12:45.587259  303072 kubeadm.go:310] OS: Linux
	I0916 11:12:45.587364  303072 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:12:45.587437  303072 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:12:45.587506  303072 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:12:45.587575  303072 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:12:45.587661  303072 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:12:45.587775  303072 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:12:45.587850  303072 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:12:45.587917  303072 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:12:45.587985  303072 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:12:45.641793  303072 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:12:45.641930  303072 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:12:45.642053  303072 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:12:45.649963  303072 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:12:42.276706  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:44.277484  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:46.777255  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:45.652421  303072 out.go:235]   - Generating certificates and keys ...
	I0916 11:12:45.652552  303072 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:12:45.652614  303072 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:12:45.754915  303072 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:12:45.828601  303072 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:12:46.002610  303072 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:12:46.107849  303072 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:12:46.248800  303072 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:12:46.248969  303072 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-006978 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0916 11:12:46.391232  303072 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:12:46.391389  303072 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-006978 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0916 11:12:46.580271  303072 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:12:46.848235  303072 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:12:46.941777  303072 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:12:46.942069  303072 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:12:47.120540  303072 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:12:47.246837  303072 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:12:47.425713  303072 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:12:47.548056  303072 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:12:47.729107  303072 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:12:47.729658  303072 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:12:47.732291  303072 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:12:47.734653  303072 out.go:235]   - Booting up control plane ...
	I0916 11:12:47.734798  303072 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:12:47.734923  303072 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:12:47.735780  303072 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:12:47.746631  303072 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:12:47.752853  303072 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:12:47.752933  303072 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:12:47.837084  303072 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:12:47.837210  303072 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:12:44.237974  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:46.238843  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:48.738940  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:46.649823  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:48.650010  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:48.785616  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:51.277044  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:48.338749  303072 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.690295ms
	I0916 11:12:48.338844  303072 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:12:53.340729  303072 kubeadm.go:310] [api-check] The API server is healthy after 5.001910312s
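Both waits are plain healthz polls with a deadline: first the kubelet on http://127.0.0.1:10248/healthz, then the API server. A minimal poller in that shape; the URL and the 4m budget come from the kubelet-check line, the 500ms interval is an assumption:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitHealthy polls a healthz endpoint until it returns 200 or the
	// deadline passes, like the kubelet-check / api-check phases above.
	func waitHealthy(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}

	func main() {
		fmt.Println(waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute))
	}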
	I0916 11:12:53.352535  303072 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:12:53.365374  303072 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:12:53.386596  303072 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:12:53.386790  303072 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-006978 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:12:53.394294  303072 kubeadm.go:310] [bootstrap-token] Using token: 21xlxs.cbzjnrzj5tox0go3
	I0916 11:12:51.240955  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:53.739148  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:53.395665  303072 out.go:235]   - Configuring RBAC rules ...
	I0916 11:12:53.395858  303072 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:12:53.400957  303072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:12:53.407354  303072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:12:53.410317  303072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:12:53.413111  303072 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:12:53.415993  303072 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:12:53.748017  303072 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:12:54.173321  303072 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:12:54.747458  303072 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:12:54.748595  303072 kubeadm.go:310] 
	I0916 11:12:54.748708  303072 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:12:54.748726  303072 kubeadm.go:310] 
	I0916 11:12:54.748792  303072 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:12:54.748815  303072 kubeadm.go:310] 
	I0916 11:12:54.748848  303072 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:12:54.748905  303072 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:12:54.748949  303072 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:12:54.748956  303072 kubeadm.go:310] 
	I0916 11:12:54.749000  303072 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:12:54.749007  303072 kubeadm.go:310] 
	I0916 11:12:54.749052  303072 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:12:54.749058  303072 kubeadm.go:310] 
	I0916 11:12:54.749133  303072 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:12:54.749244  303072 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:12:54.749349  303072 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:12:54.749358  303072 kubeadm.go:310] 
	I0916 11:12:54.749466  303072 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:12:54.749577  303072 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:12:54.749587  303072 kubeadm.go:310] 
	I0916 11:12:54.749715  303072 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token 21xlxs.cbzjnrzj5tox0go3 \
	I0916 11:12:54.749881  303072 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:12:54.749917  303072 kubeadm.go:310] 	--control-plane 
	I0916 11:12:54.749927  303072 kubeadm.go:310] 
	I0916 11:12:54.750057  303072 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:12:54.750065  303072 kubeadm.go:310] 
	I0916 11:12:54.750183  303072 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token 21xlxs.cbzjnrzj5tox0go3 \
	I0916 11:12:54.750332  303072 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 11:12:54.753671  303072 kubeadm.go:310] W0916 11:12:45.567417    1138 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:12:54.753962  303072 kubeadm.go:310] W0916 11:12:45.568121    1138 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:12:54.754163  303072 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:12:54.754263  303072 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
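The --discovery-token-ca-cert-hash printed in the join command is, by kubeadm convention, the SHA-256 of the CA certificate's Subject Public Key Info. A sketch of recomputing it from ca.crt so a joining node can verify the value shown above:

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// caCertHash recomputes kubeadm's discovery-token-ca-cert-hash:
	// sha256 over the CA certificate's RawSubjectPublicKeyInfo.
	func caCertHash(path string) (string, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return "", err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return "", fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		return fmt.Sprintf("sha256:%x", sum), nil
	}

	func main() {
		fmt.Println(caCertHash("/var/lib/minikube/certs/ca.crt"))
	}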
	I0916 11:12:54.754300  303072 cni.go:84] Creating CNI manager for ""
	I0916 11:12:54.754311  303072 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:12:54.756405  303072 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:12:50.650425  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:53.149646  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:53.776361  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:55.777283  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:54.757636  303072 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:12:54.761995  303072 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:12:54.762021  303072 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:12:54.781299  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:12:54.997779  303072 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:12:54.997865  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:54.997925  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-006978 minikube.k8s.io/updated_at=2024_09_16T11_12_54_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=default-k8s-diff-port-006978 minikube.k8s.io/primary=true
	I0916 11:12:55.122417  303072 ops.go:34] apiserver oom_adj: -16
	I0916 11:12:55.122453  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:55.623042  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:56.123379  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:56.622627  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:57.122828  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:57.623249  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:58.122814  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:56.237961  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:58.238231  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:58.623435  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:59.122743  303072 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:12:59.192533  303072 kubeadm.go:1113] duration metric: took 4.194732378s to wait for elevateKubeSystemPrivileges
	I0916 11:12:59.192570  303072 kubeadm.go:394] duration metric: took 13.791671494s to StartCluster
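	The half-second `kubectl get sa default` loop above is minikube waiting for the default ServiceAccount to be provisioned before it grants kube-system privileges. A hedged client-go equivalent, assuming a clientset cs built as in the earlier sketch (PollUntilContextTimeout is apimachinery's wait helper in recent releases):

	package main

	import (
		"context"
		"time"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForDefaultSA blocks until the "default" ServiceAccount exists,
	// polling every 500ms like the loop in the log above.
	func waitForDefaultSA(ctx context.Context, cs *kubernetes.Clientset) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, time.Minute, true,
			func(ctx context.Context) (bool, error) {
				_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return false, nil // not created yet; keep polling
				}
				return err == nil, err
			})
	}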
	I0916 11:12:59.192623  303072 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:59.192717  303072 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:12:59.194519  303072 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:12:59.194804  303072 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:12:59.194808  303072 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:12:59.194880  303072 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:12:59.194998  303072 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-006978"
	I0916 11:12:59.195022  303072 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-006978"
	I0916 11:12:59.195023  303072 config.go:182] Loaded profile config "default-k8s-diff-port-006978": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:12:59.195039  303072 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-006978"
	I0916 11:12:59.195062  303072 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-006978"
	I0916 11:12:59.195069  303072 host.go:66] Checking if "default-k8s-diff-port-006978" exists ...
	I0916 11:12:59.195452  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:59.195657  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:59.196648  303072 out.go:177] * Verifying Kubernetes components...
	I0916 11:12:59.198253  303072 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:12:59.227528  303072 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:12:55.150355  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:57.649659  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:59.650582  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:12:59.229043  303072 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:12:59.229072  303072 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:12:59.229154  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:59.231097  303072 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-006978"
	I0916 11:12:59.231131  303072 host.go:66] Checking if "default-k8s-diff-port-006978" exists ...
	I0916 11:12:59.231438  303072 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:12:59.260139  303072 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:12:59.260162  303072 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:12:59.260235  303072 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:12:59.261484  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:12:59.286575  303072 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33088 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
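	The inspect format string above walks docker's port-map JSON: `index .NetworkSettings.Ports "22/tcp"` selects the binding list for the SSH port, and the template takes entry 0's HostPort (33088 here, matching the sshutil lines). The same template mechanics, demonstrated against a mock of the relevant structure:

	package main

	import (
		"os"
		"text/template"
	)

	func main() {
		// Mock of the slice of `docker container inspect` output the format string reads.
		data := map[string]any{
			"NetworkSettings": map[string]any{
				"Ports": map[string][]map[string]string{
					"22/tcp": {{"HostIp": "127.0.0.1", "HostPort": "33088"}},
				},
			},
		}
		tmpl := template.Must(template.New("port").Parse(
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
		_ = tmpl.Execute(os.Stdout, data) // prints 33088
	}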
	I0916 11:12:59.436277  303072 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:12:59.436326  303072 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:12:59.548794  303072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:12:59.549612  303072 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:12:59.965906  303072 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
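	The long sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.76.1). Roughly the same edit through the API, as a sketch: string surgery on the Corefile matching what the sed expressions do (the 8-space indentation is taken from the sed pattern; assumes cs as before).

	package main

	import (
		"context"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// injectHostRecord prepends a hosts{} block before the forward plugin in the
	// Corefile, which is what the sed pipeline in the log accomplishes.
	func injectHostRecord(ctx context.Context, cs *kubernetes.Clientset) error {
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		hostsBlock := "        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }\n"
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
			"        forward . /etc/resolv.conf", hostsBlock+"        forward . /etc/resolv.conf", 1)
		_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
		return err
	}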
	I0916 11:12:59.967917  303072 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-006978" to be "Ready" ...
	I0916 11:13:00.025739  303072 node_ready.go:49] node "default-k8s-diff-port-006978" has status "Ready":"True"
	I0916 11:13:00.025770  303072 node_ready.go:38] duration metric: took 57.824532ms for node "default-k8s-diff-port-006978" to be "Ready" ...
	I0916 11:13:00.025783  303072 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:13:00.036420  303072 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-827n4" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:00.378574  303072 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 11:12:58.276870  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:00.277679  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:00.379868  303072 addons.go:510] duration metric: took 1.184988241s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:13:00.470687  303072 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-006978" context rescaled to 1 replicas
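	The "rescaled to 1 replicas" line reflects minikube trimming the coredns Deployment from kubeadm's default of two replicas down to one (which is also why the first coredns pod above disappears as "not found"). A minimal sketch of that scale operation via the scale subresource:

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// scaleCoreDNS sets the coredns Deployment to a single replica,
	// as the kapi.go line above reports.
	func scaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = 1
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}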
	I0916 11:13:01.540194  303072 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-827n4" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-827n4" not found
	I0916 11:13:01.540224  303072 pod_ready.go:82] duration metric: took 1.503769597s for pod "coredns-7c65d6cfc9-827n4" in "kube-system" namespace to be "Ready" ...
	E0916 11:13:01.540237  303072 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-827n4" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-827n4" not found
	I0916 11:13:01.540246  303072 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:00.239474  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:02.738165  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:02.148445  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:04.149068  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:02.776501  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:05.277166  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:03.545597  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:05.545656  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:08.045988  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:04.738774  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:07.238765  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:06.150520  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:08.649261  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:07.775965  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:09.776234  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:11.777149  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:10.545650  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:12.545787  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:09.738393  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:12.240211  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:11.148506  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:13.149270  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:14.276783  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:16.776281  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:14.545899  303072 pod_ready.go:103] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:16.546214  303072 pod_ready.go:93] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.546237  303072 pod_ready.go:82] duration metric: took 15.005983715s for pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.546248  303072 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.550669  303072 pod_ready.go:93] pod "etcd-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.550693  303072 pod_ready.go:82] duration metric: took 4.439531ms for pod "etcd-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.550708  303072 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.554968  303072 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.554985  303072 pod_ready.go:82] duration metric: took 4.271061ms for pod "kube-apiserver-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.554994  303072 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.558971  303072 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.558989  303072 pod_ready.go:82] duration metric: took 3.989284ms for pod "kube-controller-manager-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.558999  303072 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2mcbv" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.562698  303072 pod_ready.go:93] pod "kube-proxy-2mcbv" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.562717  303072 pod_ready.go:82] duration metric: took 3.713096ms for pod "kube-proxy-2mcbv" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.562725  303072 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.944094  303072 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:16.944124  303072 pod_ready.go:82] duration metric: took 381.391034ms for pod "kube-scheduler-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:16.944135  303072 pod_ready.go:39] duration metric: took 16.918337249s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:13:16.944166  303072 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:13:16.944236  303072 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:13:16.955576  303072 api_server.go:72] duration metric: took 17.760737057s to wait for apiserver process to appear ...
	I0916 11:13:16.955602  303072 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:13:16.955640  303072 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0916 11:13:16.959362  303072 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0916 11:13:16.960401  303072 api_server.go:141] control plane version: v1.31.1
	I0916 11:13:16.960425  303072 api_server.go:131] duration metric: took 4.816984ms to wait for apiserver health ...
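	The healthz probe above hits https://192.168.76.2:8444/healthz with the cluster's client credentials and expects the literal body "ok". Through client-go's REST client the same check is a one-liner (sketch; cs as before):

	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
	)

	// checkHealthz fetches /healthz from the apiserver; a healthy control plane
	// answers 200 with the body "ok", as logged above.
	func checkHealthz(ctx context.Context, cs *kubernetes.Clientset) error {
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err != nil {
			return err
		}
		fmt.Printf("healthz: %s\n", body)
		return nil
	}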
	I0916 11:13:16.960434  303072 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:13:17.147884  303072 system_pods.go:59] 8 kube-system pods found
	I0916 11:13:17.147928  303072 system_pods.go:61] "coredns-7c65d6cfc9-sc74v" [5655635d-c5e6-4043-b178-77f3df972e86] Running
	I0916 11:13:17.147935  303072 system_pods.go:61] "etcd-default-k8s-diff-port-006978" [d2e16749-ded9-4a12-9454-f66910bb9e5e] Running
	I0916 11:13:17.147948  303072 system_pods.go:61] "kindnet-njckk" [666b88e3-d80f-4bd7-b0a9-2cec72a365f0] Running
	I0916 11:13:17.147953  303072 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-006978" [81a8cd00-67af-4a7b-adcb-26da8bd1403a] Running
	I0916 11:13:17.147959  303072 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-006978" [76606226-2ee9-4661-a691-29edce3f8d0e] Running
	I0916 11:13:17.147964  303072 system_pods.go:61] "kube-proxy-2mcbv" [8fae6563-2965-49e7-96b5-b9813dc369e1] Running
	I0916 11:13:17.147969  303072 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-006978" [a69636f9-c121-4c9e-856e-600dc2fea787] Running
	I0916 11:13:17.147977  303072 system_pods.go:61] "storage-provisioner" [08708819-cf0d-4505-a1f0-5563be02bd8c] Running
	I0916 11:13:17.147985  303072 system_pods.go:74] duration metric: took 187.54485ms to wait for pod list to return data ...
	I0916 11:13:17.147996  303072 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:13:17.344472  303072 default_sa.go:45] found service account: "default"
	I0916 11:13:17.344500  303072 default_sa.go:55] duration metric: took 196.497574ms for default service account to be created ...
	I0916 11:13:17.344510  303072 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:13:17.546956  303072 system_pods.go:86] 8 kube-system pods found
	I0916 11:13:17.546985  303072 system_pods.go:89] "coredns-7c65d6cfc9-sc74v" [5655635d-c5e6-4043-b178-77f3df972e86] Running
	I0916 11:13:17.546990  303072 system_pods.go:89] "etcd-default-k8s-diff-port-006978" [d2e16749-ded9-4a12-9454-f66910bb9e5e] Running
	I0916 11:13:17.546995  303072 system_pods.go:89] "kindnet-njckk" [666b88e3-d80f-4bd7-b0a9-2cec72a365f0] Running
	I0916 11:13:17.546999  303072 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-006978" [81a8cd00-67af-4a7b-adcb-26da8bd1403a] Running
	I0916 11:13:17.547003  303072 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-006978" [76606226-2ee9-4661-a691-29edce3f8d0e] Running
	I0916 11:13:17.547006  303072 system_pods.go:89] "kube-proxy-2mcbv" [8fae6563-2965-49e7-96b5-b9813dc369e1] Running
	I0916 11:13:17.547009  303072 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-006978" [a69636f9-c121-4c9e-856e-600dc2fea787] Running
	I0916 11:13:17.547013  303072 system_pods.go:89] "storage-provisioner" [08708819-cf0d-4505-a1f0-5563be02bd8c] Running
	I0916 11:13:17.547018  303072 system_pods.go:126] duration metric: took 202.504183ms to wait for k8s-apps to be running ...
	I0916 11:13:17.547033  303072 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:13:17.547078  303072 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:13:17.559046  303072 system_svc.go:56] duration metric: took 12.001345ms WaitForService to wait for kubelet
	I0916 11:13:17.559077  303072 kubeadm.go:582] duration metric: took 18.364243961s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:13:17.559097  303072 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:13:17.744389  303072 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:13:17.744416  303072 node_conditions.go:123] node cpu capacity is 8
	I0916 11:13:17.744431  303072 node_conditions.go:105] duration metric: took 185.330735ms to run NodePressure ...
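	The NodePressure step reads the node status capacities; the same numbers (304681132Ki ephemeral storage, 8 CPUs) appear under Capacity in the describe output further down. A sketch of reading them (cs as before):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// printNodeCapacity lists nodes and prints the two capacities the
	// node_conditions lines above verify: ephemeral storage and CPU.
	func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
		nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
		return nil
	}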
	I0916 11:13:17.744442  303072 start.go:241] waiting for startup goroutines ...
	I0916 11:13:17.744448  303072 start.go:246] waiting for cluster config update ...
	I0916 11:13:17.744458  303072 start.go:255] writing updated cluster config ...
	I0916 11:13:17.744735  303072 ssh_runner.go:195] Run: rm -f paused
	I0916 11:13:17.750858  303072 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-006978" cluster and "default" namespace by default
	E0916 11:13:17.752103  303072 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	I0916 11:13:14.737902  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:16.737931  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:18.738172  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:15.649224  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:17.649297  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:18.777270  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:21.277449  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	308f1d6d730a2       c69fa2e9cbf5f       7 seconds ago       Running             coredns                   0                   a427aaf4dc7bf       coredns-7c65d6cfc9-sc74v
	6f355202fdbbe       6e38f40d628db       22 seconds ago      Running             storage-provisioner       0                   7bb5692bf820a       storage-provisioner
	3d2d679d3f920       12968670680f4       22 seconds ago      Running             kindnet-cni               0                   c9b1db2846501       kindnet-njckk
	947c3b3b00e44       60c005f310ff3       22 seconds ago      Running             kube-proxy                0                   d0095dc7cbd78       kube-proxy-2mcbv
	06406ac4e01c0       2e96e5913fc06       34 seconds ago      Running             etcd                      0                   6908ea2d82b0c       etcd-default-k8s-diff-port-006978
	3b1640b111894       9aa1fad941575       34 seconds ago      Running             kube-scheduler            0                   75eb18111b77e       kube-scheduler-default-k8s-diff-port-006978
	a085c20f4e6d1       175ffd71cce3d       34 seconds ago      Running             kube-controller-manager   0                   4e59876f0bb83       kube-controller-manager-default-k8s-diff-port-006978
	bdf3aa888730f       6bab7719df100       34 seconds ago      Running             kube-apiserver            0                   8f6d53f6f0c9d       kube-apiserver-default-k8s-diff-port-006978
	
	
	==> containerd <==
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.766422496Z" level=info msg="CreateContainer within sandbox \"7bb5692bf820af9b4baaf102e9bcee78709750dc811c3040ccf45c8afde3f945\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.779827089Z" level=info msg="CreateContainer within sandbox \"7bb5692bf820af9b4baaf102e9bcee78709750dc811c3040ccf45c8afde3f945\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"6f355202fdbbeadd28797b1e19193703a5ee560921eef405227ef9296cb15a44\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.780483761Z" level=info msg="StartContainer for \"6f355202fdbbeadd28797b1e19193703a5ee560921eef405227ef9296cb15a44\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:00.834312428Z" level=info msg="StartContainer for \"6f355202fdbbeadd28797b1e19193703a5ee560921eef405227ef9296cb15a44\" returns successfully"
	Sep 16 11:13:04 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:04.282404342Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.036034724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sc74v,Uid:5655635d-c5e6-4043-b178-77f3df972e86,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.071775195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.071867312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.071884979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.071998114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.118805252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sc74v,Uid:5655635d-c5e6-4043-b178-77f3df972e86,Namespace:kube-system,Attempt:0,} returns sandbox id \"a427aaf4dc7bf56af8e59e4086f4715ac46e1c9db615ec7c033d1c861cc37441\""
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.121605030Z" level=info msg="CreateContainer within sandbox \"a427aaf4dc7bf56af8e59e4086f4715ac46e1c9db615ec7c033d1c861cc37441\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.134951310Z" level=info msg="CreateContainer within sandbox \"a427aaf4dc7bf56af8e59e4086f4715ac46e1c9db615ec7c033d1c861cc37441\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da\""
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.135602631Z" level=info msg="StartContainer for \"308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da\""
	Sep 16 11:13:15 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:15.180235908Z" level=info msg="StartContainer for \"308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da\" returns successfully"
	Sep 16 11:13:22 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:22.268042384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-6867b74b74-shznv,Uid:a7a51241-b731-46a8-abc5-cdbd6bf2d41e,Namespace:kube-system,Attempt:0,}"
	Sep 16 11:13:22 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:22.311869880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Sep 16 11:13:22 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:22.311961101Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Sep 16 11:13:22 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:22.312004256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:13:22 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:22.312180050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Sep 16 11:13:22 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:22.380039101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:metrics-server-6867b74b74-shznv,Uid:a7a51241-b731-46a8-abc5-cdbd6bf2d41e,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dea6be6641cdc0696bd40228bed1cdc4902d9ea8193d112244add9c7cb2d2c9\""
	Sep 16 11:13:22 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:22.382847266Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:13:22 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:22.418941325Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 16 11:13:22 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:22.420373235Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 16 11:13:22 default-k8s-diff-port-006978 containerd[864]: time="2024-09-16T11:13:22.420421450Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
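	The PullImage failure above is expected: this suite points metrics-server at fake.domain/registry.k8s.io/echoserver:1.4, a registry host that (evidently by design of the test) has no DNS record, which is why the metrics-server pods throughout this log never reach Ready. The failure mode reproduces outside containerd with a plain lookup:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// fake.domain does not resolve, so the resolver returns "no such host",
		// matching the containerd error above.
		if _, err := net.LookupHost("fake.domain"); err != nil {
			fmt.Println("lookup failed:", err)
		}
	}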
	
	
	==> coredns [308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41169 - 50662 "HINFO IN 4844345484503832019.4449023886173755708. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011300932s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-006978
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-006978
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=default-k8s-diff-port-006978
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_12_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:12:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-006978
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:13:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:13:04 +0000   Mon, 16 Sep 2024 11:12:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:13:04 +0000   Mon, 16 Sep 2024 11:12:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:13:04 +0000   Mon, 16 Sep 2024 11:12:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:13:04 +0000   Mon, 16 Sep 2024 11:12:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-006978
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 9f862fabd65249baa1ce0a392f842af0
	  System UUID:                15408216-8343-44b6-bf08-785f58970e8a
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-sc74v                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-006978                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-njckk                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-006978             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-006978    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-2mcbv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-006978             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 metrics-server-6867b74b74-shznv                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         2s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 22s                kube-proxy       
	  Normal   NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s (x7 over 35s)  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s (x7 over 35s)  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 30s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 30s                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  29s                kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    29s                kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     29s                kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           25s                node-controller  Node default-k8s-diff-port-006978 event: Registered Node default-k8s-diff-port-006978 in Controller
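	The Requests percentages in the table above are request totals over the node's allocatable, truncated: 950m CPU of 8 cores is 950/8000 ≈ 11.9%, printed as 11%; 420Mi of 32859316Ki memory is ≈ 1.3%, printed as 1%. The same arithmetic with apimachinery's resource quantities:

	package main

	import (
		"fmt"

		"k8s.io/apimachinery/pkg/api/resource"
	)

	func main() {
		req := resource.MustParse("950m")
		alloc := resource.MustParse("8") // 8 allocatable CPUs
		fmt.Printf("cpu: %d%%\n", req.MilliValue()*100/alloc.MilliValue()) // 950*100/8000 = 11

		mreq := resource.MustParse("420Mi")
		malloc := resource.MustParse("32859316Ki")
		fmt.Printf("memory: %d%%\n", mreq.Value()*100/malloc.Value()) // ≈ 1
	}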
	
	
	==> dmesg <==
	[  +0.000003] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +1.007060] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000007] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +2.015770] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000006] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +4.191585] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000008] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000001] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000002] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +8.191312] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000007] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000002] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-5c8d67185b35
	[  +0.000002] ll header: 00000000: 02 42 d3 b2 14 79 02 42 c0 a8 55 02 08 00
	
	
	==> etcd [06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09] <==
	{"level":"info","ts":"2024-09-16T11:12:49.134376Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:12:49.134546Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-09-16T11:12:49.134585Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-09-16T11:12:49.135386Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:12:49.135426Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:12:49.463407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:12:49.463466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:12:49.463496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2024-09-16T11:12:49.463511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.463520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.463536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.463550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.464413Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:49.465019Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-006978 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:12:49.465072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:49.465051Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:49.465386Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:49.465431Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:49.466272Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:49.467109Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-09-16T11:12:49.467934Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:49.468738Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:12:49.470945Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:49.471063Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:49.471097Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 11:13:23 up 55 min,  0 users,  load average: 1.97, 2.91, 2.22
	Linux default-k8s-diff-port-006978 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3] <==
	I0916 11:13:00.621671       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:13:00.621894       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0916 11:13:00.622047       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:13:00.622069       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:13:00.622081       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:13:00.948822       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:13:00.948853       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:13:00.948861       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:13:01.249787       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:13:01.249852       1 metrics.go:61] Registering metrics
	I0916 11:13:01.249923       1 controller.go:374] Syncing nftables rules
	I0916 11:13:10.948665       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:13:10.948741       1 main.go:299] handling current node
	I0916 11:13:20.951953       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:13:20.952035       1 main.go:299] handling current node
	
	
	==> kube-apiserver [bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862] <==
	E0916 11:13:21.935573       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 11:13:21.936996       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0916 11:13:22.042892       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.110.61.57"}
	W0916 11:13:22.049002       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:13:22.049067       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:13:22.053674       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:13:22.053736       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:13:22.930436       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 11:13:22.930471       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:13:22.930477       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 11:13:22.930574       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 11:13:22.931584       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:13:22.931628       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
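	The repeated 503s mean the aggregated v1beta1.metrics.k8s.io APIService has no healthy backend yet: the metrics-server pod behind it is still failing its image pull (see the containerd section above). One way to watch the APIService's availability is a raw GET against the apiregistration endpoint; a typed kube-aggregator client would also work (sketch; cs as before):

	package main

	import (
		"context"
		"fmt"

		"k8s.io/client-go/kubernetes"
	)

	// apiServiceStatus dumps the APIService object; its Available condition
	// remains false while the apiserver logs the 503s shown above.
	func apiServiceStatus(ctx context.Context, cs *kubernetes.Clientset) error {
		raw, err := cs.Discovery().RESTClient().Get().
			AbsPath("/apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io").
			DoRaw(ctx)
		if err != nil {
			return err
		}
		fmt.Println(string(raw))
		return nil
	}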
	
	
	==> kube-controller-manager [a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e] <==
	I0916 11:12:58.409038       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:12:58.817951       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:12:58.898115       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:12:58.898137       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:12:59.210806       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-006978"
	I0916 11:12:59.426362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="1.017098138s"
	I0916 11:12:59.433522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.101603ms"
	I0916 11:12:59.433635       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="61.021µs"
	I0916 11:12:59.520137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="123.944µs"
	I0916 11:12:59.539861       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="150.746µs"
	I0916 11:13:00.058053       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="15.011457ms"
	I0916 11:13:00.126185       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.061499ms"
	I0916 11:13:00.126320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="81.97µs"
	I0916 11:13:01.081758       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.254µs"
	I0916 11:13:01.086415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="103.992µs"
	I0916 11:13:01.089510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="69.774µs"
	I0916 11:13:04.291467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-006978"
	I0916 11:13:16.102024       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="80.781µs"
	I0916 11:13:16.119318       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.696792ms"
	I0916 11:13:16.119456       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="86.29µs"
	I0916 11:13:21.966599       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="18.537342ms"
	I0916 11:13:21.975332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="8.670034ms"
	I0916 11:13:21.975437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="61.655µs"
	I0916 11:13:21.979253       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="71.081µs"
	I0916 11:13:23.115282       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="69.174µs"
	
	
	==> kube-proxy [947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce] <==
	I0916 11:13:00.253825       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:13:00.407401       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0916 11:13:00.407487       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:13:00.429078       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:13:00.429182       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:13:00.432606       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:13:00.434317       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:13:00.434355       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:13:00.436922       1 config.go:199] "Starting service config controller"
	I0916 11:13:00.436961       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:13:00.436992       1 config.go:328] "Starting node config controller"
	I0916 11:13:00.436998       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:13:00.437231       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:13:00.437259       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:13:00.537105       1 shared_informer.go:320] Caches are synced for node config
	I0916 11:13:00.537113       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:13:00.538249       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5] <==
	W0916 11:12:51.533950       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:12:51.533967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534038       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:51.534052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534137       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:51.534151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:51.534236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:12:51.534355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:12:51.534374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534388       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:12:51.534396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.339002       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:12:52.339046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.406598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:52.406652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.413957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:52.413997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.416027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:12:52.416071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.594671       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:12:52.594714       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:12:54.631845       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.853777    1604 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\": failed to find network info for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\"" pod="kube-system/coredns-7c65d6cfc9-sc74v"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.853808    1604 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\": failed to find network info for sandbox \"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\"" pod="kube-system/coredns-7c65d6cfc9-sc74v"
	Sep 16 11:12:59 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:12:59.853861    1604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-sc74v_kube-system(5655635d-c5e6-4043-b178-77f3df972e86)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-sc74v_kube-system(5655635d-c5e6-4043-b178-77f3df972e86)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\\\": failed to find network info for sandbox \\\"c13c121968d649e2f76dc9c20d74e291bf0d8459ab5bca233f9663685b619659\\\"\"" pod="kube-system/coredns-7c65d6cfc9-sc74v" podUID="5655635d-c5e6-4043-b178-77f3df972e86"
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.126731    1604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/950246f0-ecc4-4b7c-b89b-09a027a772d0-config-volume\") pod \"950246f0-ecc4-4b7c-b89b-09a027a772d0\" (UID: \"950246f0-ecc4-4b7c-b89b-09a027a772d0\") "
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.126802    1604 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f8xl\" (UniqueName: \"kubernetes.io/projected/950246f0-ecc4-4b7c-b89b-09a027a772d0-kube-api-access-6f8xl\") pod \"950246f0-ecc4-4b7c-b89b-09a027a772d0\" (UID: \"950246f0-ecc4-4b7c-b89b-09a027a772d0\") "
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.127337    1604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/950246f0-ecc4-4b7c-b89b-09a027a772d0-config-volume" (OuterVolumeSpecName: "config-volume") pod "950246f0-ecc4-4b7c-b89b-09a027a772d0" (UID: "950246f0-ecc4-4b7c-b89b-09a027a772d0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.131029    1604 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/950246f0-ecc4-4b7c-b89b-09a027a772d0-kube-api-access-6f8xl" (OuterVolumeSpecName: "kube-api-access-6f8xl") pod "950246f0-ecc4-4b7c-b89b-09a027a772d0" (UID: "950246f0-ecc4-4b7c-b89b-09a027a772d0"). InnerVolumeSpecName "kube-api-access-6f8xl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.227037    1604 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6f8xl\" (UniqueName: \"kubernetes.io/projected/950246f0-ecc4-4b7c-b89b-09a027a772d0-kube-api-access-6f8xl\") on node \"default-k8s-diff-port-006978\" DevicePath \"\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.227076    1604 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/950246f0-ecc4-4b7c-b89b-09a027a772d0-config-volume\") on node \"default-k8s-diff-port-006978\" DevicePath \"\""
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.428505    1604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/08708819-cf0d-4505-a1f0-5563be02bd8c-tmp\") pod \"storage-provisioner\" (UID: \"08708819-cf0d-4505-a1f0-5563be02bd8c\") " pod="kube-system/storage-provisioner"
	Sep 16 11:13:00 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:00.428562    1604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4hzl\" (UniqueName: \"kubernetes.io/projected/08708819-cf0d-4505-a1f0-5563be02bd8c-kube-api-access-h4hzl\") pod \"storage-provisioner\" (UID: \"08708819-cf0d-4505-a1f0-5563be02bd8c\") " pod="kube-system/storage-provisioner"
	Sep 16 11:13:01 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:01.069502    1604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2mcbv" podStartSLOduration=2.069481573 podStartE2EDuration="2.069481573s" podCreationTimestamp="2024-09-16 11:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:13:01.069398727 +0000 UTC m=+7.138484871" watchObservedRunningTime="2024-09-16 11:13:01.069481573 +0000 UTC m=+7.138567716"
	Sep 16 11:13:01 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:01.098813    1604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=1.098790323 podStartE2EDuration="1.098790323s" podCreationTimestamp="2024-09-16 11:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:13:01.09871627 +0000 UTC m=+7.167802413" watchObservedRunningTime="2024-09-16 11:13:01.098790323 +0000 UTC m=+7.167876468"
	Sep 16 11:13:01 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:01.108472    1604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-njckk" podStartSLOduration=2.108444312 podStartE2EDuration="2.108444312s" podCreationTimestamp="2024-09-16 11:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:13:01.108144012 +0000 UTC m=+7.177230167" watchObservedRunningTime="2024-09-16 11:13:01.108444312 +0000 UTC m=+7.177530457"
	Sep 16 11:13:02 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:02.037418    1604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="950246f0-ecc4-4b7c-b89b-09a027a772d0" path="/var/lib/kubelet/pods/950246f0-ecc4-4b7c-b89b-09a027a772d0/volumes"
	Sep 16 11:13:04 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:04.281799    1604 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 16 11:13:04 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:04.282674    1604 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 16 11:13:16 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:16.113460    1604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sc74v" podStartSLOduration=17.113430858 podStartE2EDuration="17.113430858s" podCreationTimestamp="2024-09-16 11:12:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-16 11:13:16.102461271 +0000 UTC m=+22.171547414" watchObservedRunningTime="2024-09-16 11:13:16.113430858 +0000 UTC m=+22.182517002"
	Sep 16 11:13:22 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:22.149574    1604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw8qk\" (UniqueName: \"kubernetes.io/projected/a7a51241-b731-46a8-abc5-cdbd6bf2d41e-kube-api-access-fw8qk\") pod \"metrics-server-6867b74b74-shznv\" (UID: \"a7a51241-b731-46a8-abc5-cdbd6bf2d41e\") " pod="kube-system/metrics-server-6867b74b74-shznv"
	Sep 16 11:13:22 default-k8s-diff-port-006978 kubelet[1604]: I0916 11:13:22.149653    1604 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a7a51241-b731-46a8-abc5-cdbd6bf2d41e-tmp-dir\") pod \"metrics-server-6867b74b74-shznv\" (UID: \"a7a51241-b731-46a8-abc5-cdbd6bf2d41e\") " pod="kube-system/metrics-server-6867b74b74-shznv"
	Sep 16 11:13:22 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:13:22.420693    1604 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 16 11:13:22 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:13:22.420781    1604 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 16 11:13:22 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:13:22.421003    1604 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw8qk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-shznv_kube-system(a7a51241-b731-46a8-abc5-cdbd6bf2d41e): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" logger="UnhandledError"
	Sep 16 11:13:22 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:13:22.422246    1604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-6867b74b74-shznv" podUID="a7a51241-b731-46a8-abc5-cdbd6bf2d41e"
	Sep 16 11:13:23 default-k8s-diff-port-006978 kubelet[1604]: E0916 11:13:23.105242    1604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shznv" podUID="a7a51241-b731-46a8-abc5-cdbd6bf2d41e"
	
	
	==> storage-provisioner [6f355202fdbbeadd28797b1e19193703a5ee560921eef405227ef9296cb15a44] <==
	I0916 11:13:00.842140       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:13:00.849696       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:13:00.849745       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:13:00.858720       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:13:00.858874       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-006978_6e2d25a8-69d9-42c8-bc96-a993683990b2!
	I0916 11:13:00.860283       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48271e48-bb5a-477f-91cc-b9e1963cd811", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-006978_6e2d25a8-69d9-42c8-bc96-a993683990b2 became leader
	I0916 11:13:00.959514       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-006978_6e2d25a8-69d9-42c8-bc96-a993683990b2!
	

                                                
                                                
-- /stdout --
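Note: the ErrImagePull entries in the kubelet log above line up with the metrics-server addon having had its registry overridden to fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" rows in the Audit table further down). A minimal sketch of reproducing the pull failure by hand, assuming the profile is still running; the profile name comes from this report, and crictl is the CRI client shipped in the minikube node image:

	# Pulling the overridden image inside the node fails with the same DNS error:
	minikube -p default-k8s-diff-port-006978 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4
	# expected: dial tcp: lookup fake.domain ...: no such host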
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-006978 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-006978 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (533.501µs)
helpers_test.go:263: kubectl --context default-k8s-diff-port-006978 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.59s)
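Note: the "fork/exec /usr/local/bin/kubectl: exec format error" failure typically means the kubectl binary at that path does not match the host architecture (or is truncated), so every kubectl-based assertion in this group fails before the cluster is ever contacted. A minimal diagnostic sketch, assuming shell access to the agent; only the path is taken from the report, the expected output is illustrative:

	# Compare the binary's target architecture with the host's:
	file /usr/local/bin/kubectl   # an amd64 agent needs an "ELF 64-bit ... x86-64" executable
	uname -m                      # x86_64 on this agent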

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (7.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lmdbc" [6d63e6f7-5f9b-45ff-b20e-561f691403c2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003920518s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-349453 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-349453 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: fork/exec /usr/local/bin/kubectl: exec format error (751.106µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-349453 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
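Note: since kubectl itself could not execute, the image assertion above was evaluated against empty deployment info. With a working kubectl the same check can be made directly; a sketch using only names that appear in this report:

	kubectl --context no-preload-349453 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to contain: registry.k8s.io/echoserver:1.4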
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-349453
helpers_test.go:235: (dbg) docker inspect no-preload-349453:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3",
	        "Created": "2024-09-16T11:08:35.617729941Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 275108,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:09:49.167493269Z",
	            "FinishedAt": "2024-09-16T11:09:48.269291982Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/hostname",
	        "HostsPath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/hosts",
	        "LogPath": "/var/lib/docker/containers/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3/d44e8cc5581d6fb816ecfca3e6fae57d456bc84775e39f1a8ca84f29fd4dc0a3-json.log",
	        "Name": "/no-preload-349453",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-349453:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-349453",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f16bfb5a739593a3a28aa0d43852a8530e8255fd42575cca67f10b48c3d69d7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-349453",
	                "Source": "/var/lib/docker/volumes/no-preload-349453/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-349453",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-349453",
	                "name.minikube.sigs.k8s.io": "no-preload-349453",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "562dc4862ddc80ee2b19268c334b2fc91e456c588d13373c0450f30f49a4ec54",
	            "SandboxKey": "/var/run/docker/netns/562dc4862ddc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-349453": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2cc59d4eff808c995119ae607628ad9854df9618b8c5cd5213cb8d98e98ab4f4",
	                    "EndpointID": "0e286a5b08aac515e3ac6197b1a025ee9a86c22bc1fb2c477392294723532387",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-349453",
	                        "d44e8cc5581d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
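Note: individual fields can be pulled from the inspect output above with a Go template instead of scanning the full JSON; for example the container state and its published host ports (container name taken from the report):

	docker inspect -f '{{.State.Status}} (pid {{.State.Pid}})' no-preload-349453
	docker inspect -f '{{json .NetworkSettings.Ports}}' no-preload-349453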
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-349453 -n no-preload-349453
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-349453 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-349453 logs -n 25: (1.533620797s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-840054 -- sudo                         | cert-options-840054          | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-840054                                 | cert-options-840054          | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:08 UTC |
	| start   | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:08 UTC | 16 Sep 24 11:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-349453             | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-349453                  | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:09 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:09 UTC | 16 Sep 24 11:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-371039        | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-371039                              | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-371039             | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC | 16 Sep 24 11:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-371039                              | old-k8s-version-371039       | jenkins | v1.34.0 | 16 Sep 24 11:10 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-021107                              | cert-expiration-021107       | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-021107                              | cert-expiration-021107       | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:11 UTC |
	| start   | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:11 UTC | 16 Sep 24 11:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-679624            | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-679624                 | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911    | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	| delete  | -p                                                     | disable-driver-mounts-852440 | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | disable-driver-mounts-852440                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:13 UTC |
	|         | default-k8s-diff-port-006978                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-006978  | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:13 UTC | 16 Sep 24 11:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:13 UTC | 16 Sep 24 11:13 UTC |
	|         | default-k8s-diff-port-006978                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-006978       | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:13 UTC | 16 Sep 24 11:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:13 UTC |                     |
	|         | default-k8s-diff-port-006978                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:13:29
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:13:29.784831  309669 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:13:29.784960  309669 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:13:29.784972  309669 out.go:358] Setting ErrFile to fd 2...
	I0916 11:13:29.784977  309669 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:13:29.785180  309669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:13:29.785783  309669 out.go:352] Setting JSON to false
	I0916 11:13:29.787227  309669 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3354,"bootTime":1726481856,"procs":376,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:13:29.787330  309669 start.go:139] virtualization: kvm guest
	I0916 11:13:29.789745  309669 out.go:177] * [default-k8s-diff-port-006978] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:13:29.791177  309669 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:13:29.791252  309669 notify.go:220] Checking for updates...
	I0916 11:13:29.794408  309669 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:13:29.795937  309669 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:13:29.797095  309669 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:13:29.798237  309669 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:13:29.799595  309669 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:13:29.801278  309669 config.go:182] Loaded profile config "default-k8s-diff-port-006978": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:13:29.801815  309669 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:13:29.826122  309669 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:13:29.826271  309669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:13:29.881255  309669 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:13:29.870982632 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:13:29.881355  309669 docker.go:318] overlay module found
	I0916 11:13:29.883233  309669 out.go:177] * Using the docker driver based on existing profile
	I0916 11:13:29.884886  309669 start.go:297] selected driver: docker
	I0916 11:13:29.884903  309669 start.go:901] validating driver "docker" against &{Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:13:29.885007  309669 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:13:29.885901  309669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:13:29.934071  309669 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:13:29.924647191 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:13:29.934458  309669 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:13:29.934488  309669 cni.go:84] Creating CNI manager for ""
	I0916 11:13:29.934544  309669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:13:29.934633  309669 start.go:340] cluster config:
	{Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:13:29.936877  309669 out.go:177] * Starting "default-k8s-diff-port-006978" primary control-plane node in "default-k8s-diff-port-006978" cluster
	I0916 11:13:29.938278  309669 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:13:29.939726  309669 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:13:29.940879  309669 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:13:29.940917  309669 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:13:29.940921  309669 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 11:13:29.940964  309669 cache.go:56] Caching tarball of preloaded images
	I0916 11:13:29.941089  309669 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 11:13:29.941104  309669 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 11:13:29.941240  309669 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/config.json ...
	W0916 11:13:29.963458  309669 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:13:29.963483  309669 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:13:29.963588  309669 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:13:29.963606  309669 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:13:29.963612  309669 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:13:29.963630  309669 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:13:29.963641  309669 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:13:30.018265  309669 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:13:30.018314  309669 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:13:30.018359  309669 start.go:360] acquireMachinesLock for default-k8s-diff-port-006978: {Name:mke54f99fcd9e320f7c2bc8102220e65af70efd7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:13:30.018435  309669 start.go:364] duration metric: took 50.796µs to acquireMachinesLock for "default-k8s-diff-port-006978"
	I0916 11:13:30.018459  309669 start.go:96] Skipping create...Using existing machine configuration
	I0916 11:13:30.018474  309669 fix.go:54] fixHost starting: 
	I0916 11:13:30.018778  309669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:13:30.036912  309669 fix.go:112] recreateIfNeeded on default-k8s-diff-port-006978: state=Stopped err=<nil>
	W0916 11:13:30.036950  309669 fix.go:138] unexpected machine state, will restart: <nil>
	I0916 11:13:30.039131  309669 out.go:177] * Restarting existing docker container for "default-k8s-diff-port-006978" ...
	I0916 11:13:26.649230  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:29.149391  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:28.276756  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:30.777776  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:30.739657  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:33.238496  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:30.040638  309669 cli_runner.go:164] Run: docker start default-k8s-diff-port-006978
	I0916 11:13:30.327354  309669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:13:30.347404  309669 kic.go:430] container "default-k8s-diff-port-006978" state is running.
	I0916 11:13:30.347855  309669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-006978
	I0916 11:13:30.367500  309669 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/config.json ...
	I0916 11:13:30.367720  309669 machine.go:93] provisionDockerMachine start ...
	I0916 11:13:30.367836  309669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:13:30.386724  309669 main.go:141] libmachine: Using SSH client type: native
	I0916 11:13:30.386969  309669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0916 11:13:30.386987  309669 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:13:30.387659  309669 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36226->127.0.0.1:33093: read: connection reset by peer
	I0916 11:13:33.524327  309669 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-006978
	
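Note on the exchange above: the failed dial at 11:13:30 ("connection reset by peer") followed by the successful `hostname` at 11:13:33 shows the provisioner retrying SSH against the container's published port (127.0.0.1:33093) while sshd inside the freshly restarted container is still starting. A minimal sketch of such a retry loop in Go, assuming golang.org/x/crypto/ssh; the helper name and parameters are illustrative, not minikube's actual code:

    package main

    import (
        "fmt"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps dialing until sshd accepts connections: Docker maps
    // the host port before the daemon inside is ready, so early dials fail
    // with "connection reset by peer". Hypothetical helper for illustration.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
        var err error
        for i := 0; i < attempts; i++ {
            var c *ssh.Client
            if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
                return c, nil
            }
            time.Sleep(time.Second)
        }
        return nil, fmt.Errorf("ssh dial %s: %w", addr, err)
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker",
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local test VM
        }
        if _, err := dialWithRetry("127.0.0.1:33093", cfg, 10); err != nil {
            fmt.Println(err)
        }
    }
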
	I0916 11:13:33.524355  309669 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-006978"
	I0916 11:13:33.524431  309669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:13:33.542107  309669 main.go:141] libmachine: Using SSH client type: native
	I0916 11:13:33.542284  309669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0916 11:13:33.542298  309669 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-006978 && echo "default-k8s-diff-port-006978" | sudo tee /etc/hostname
	I0916 11:13:33.687065  309669 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-006978
	
	I0916 11:13:33.687142  309669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:13:33.705737  309669 main.go:141] libmachine: Using SSH client type: native
	I0916 11:13:33.705916  309669 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I0916 11:13:33.705941  309669 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-006978' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-006978/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-006978' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:13:33.839948  309669 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:13:33.839992  309669 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:13:33.840018  309669 ubuntu.go:177] setting up certificates
	I0916 11:13:33.840027  309669 provision.go:84] configureAuth start
	I0916 11:13:33.840092  309669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-006978
	I0916 11:13:33.858248  309669 provision.go:143] copyHostCerts
	I0916 11:13:33.858311  309669 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:13:33.858322  309669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:13:33.858408  309669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:13:33.858511  309669 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:13:33.858520  309669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:13:33.858562  309669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:13:33.858621  309669 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:13:33.858628  309669 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:13:33.858650  309669 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:13:33.858697  309669 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-006978 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-006978 localhost minikube]
	I0916 11:13:33.961541  309669 provision.go:177] copyRemoteCerts
	I0916 11:13:33.961604  309669 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:13:33.961637  309669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:13:33.979732  309669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:13:34.080749  309669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:13:34.103102  309669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0916 11:13:34.126726  309669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:13:34.151976  309669 provision.go:87] duration metric: took 311.9347ms to configureAuth
	I0916 11:13:34.152004  309669 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:13:34.152185  309669 config.go:182] Loaded profile config "default-k8s-diff-port-006978": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:13:34.152200  309669 machine.go:96] duration metric: took 3.784467335s to provisionDockerMachine
	I0916 11:13:34.152209  309669 start.go:293] postStartSetup for "default-k8s-diff-port-006978" (driver="docker")
	I0916 11:13:34.152222  309669 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:13:34.152289  309669 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:13:34.152341  309669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:13:34.170568  309669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:13:34.265005  309669 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:13:34.268196  309669 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:13:34.268221  309669 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:13:34.268228  309669 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:13:34.268234  309669 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:13:34.268251  309669 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:13:34.268300  309669 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:13:34.268379  309669 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:13:34.268472  309669 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:13:34.277674  309669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:13:34.299825  309669 start.go:296] duration metric: took 147.601238ms for postStartSetup
	I0916 11:13:34.299903  309669 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:13:34.299955  309669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:13:34.317174  309669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:13:34.408551  309669 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:13:34.412644  309669 fix.go:56] duration metric: took 4.394165167s for fixHost
	I0916 11:13:34.412670  309669 start.go:83] releasing machines lock for "default-k8s-diff-port-006978", held for 4.394222729s
	I0916 11:13:34.412733  309669 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-006978
	I0916 11:13:34.429828  309669 ssh_runner.go:195] Run: cat /version.json
	I0916 11:13:34.429876  309669 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:13:34.429883  309669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:13:34.429922  309669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:13:34.448212  309669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:13:34.448941  309669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:13:34.619248  309669 ssh_runner.go:195] Run: systemctl --version
	I0916 11:13:34.623583  309669 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:13:34.627662  309669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:13:34.645018  309669 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
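The find/sed pipeline above adds a missing "name" field to the loopback CNI config and pins its cniVersion to 1.0.0. A rough Go rendering of the same patch, shown only to make the one-liner's intent explicit; the file path and helper name are illustrative, and unlike sed this version rewrites the JSON wholesale:

    package main

    import (
        "encoding/json"
        "os"
    )

    // patchLoopback mirrors the sed pipeline: insert "name": "loopback" if
    // absent and force cniVersion to "1.0.0".
    func patchLoopback(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var conf map[string]interface{}
        if err := json.Unmarshal(data, &conf); err != nil {
            return err
        }
        if _, ok := conf["name"]; !ok {
            conf["name"] = "loopback"
        }
        conf["cniVersion"] = "1.0.0"
        out, err := json.MarshalIndent(conf, "", "  ")
        if err != nil {
            return err
        }
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        if err := patchLoopback("/etc/cni/net.d/99-loopback.conf"); err != nil { // path is hypothetical
            os.Exit(1)
        }
    }
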
	I0916 11:13:34.645088  309669 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:13:34.654167  309669 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0916 11:13:34.654199  309669 start.go:495] detecting cgroup driver to use...
	I0916 11:13:34.654234  309669 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:13:34.654312  309669 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:13:34.668373  309669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:13:34.680140  309669 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:13:34.680197  309669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:13:34.692746  309669 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:13:34.704139  309669 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:13:34.778898  309669 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:13:31.649567  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:34.149132  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:34.857827  309669 docker.go:233] disabling docker service ...
	I0916 11:13:34.857893  309669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:13:34.869873  309669 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:13:34.880945  309669 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:13:34.963380  309669 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:13:35.045117  309669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:13:35.057165  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:13:35.073326  309669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:13:35.083428  309669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:13:35.093736  309669 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:13:35.093821  309669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:13:35.104117  309669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:13:35.114406  309669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:13:35.124213  309669 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:13:35.134060  309669 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:13:35.143314  309669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:13:35.154745  309669 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:13:35.164778  309669 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 11:13:35.175336  309669 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:13:35.184979  309669 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:13:35.193983  309669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:13:35.274907  309669 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 11:13:35.380955  309669 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:13:35.381014  309669 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
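"Will wait 60s for socket path" is a bounded poll: after `systemctl restart containerd`, the runner stats the socket until it appears. A minimal sketch of that kind of wait, under the assumption of a simple stat-and-sleep loop (not minikube's exact implementation):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for the socket file to exist, up to the timeout.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        fmt.Println(waitForSocket("/run/containerd/containerd.sock", 60*time.Second))
    }
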
	I0916 11:13:35.384959  309669 start.go:563] Will wait 60s for crictl version
	I0916 11:13:35.385019  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:13:35.388639  309669 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:13:35.427087  309669 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:13:35.427152  309669 ssh_runner.go:195] Run: containerd --version
	I0916 11:13:35.450491  309669 ssh_runner.go:195] Run: containerd --version
	I0916 11:13:35.479914  309669 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:13:35.481483  309669 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-006978 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:13:35.499028  309669 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0916 11:13:35.502924  309669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:13:35.514725  309669 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:13:35.514869  309669 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:13:35.514916  309669 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:13:35.547634  309669 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:13:35.547656  309669 containerd.go:534] Images already preloaded, skipping extraction
	I0916 11:13:35.547716  309669 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:13:35.581377  309669 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:13:35.581396  309669 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:13:35.581404  309669 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.31.1 containerd true true} ...
	I0916 11:13:35.581518  309669 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-006978 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0916 11:13:35.581575  309669 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:13:35.615857  309669 cni.go:84] Creating CNI manager for ""
	I0916 11:13:35.615885  309669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:13:35.615893  309669 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:13:35.615916  309669 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-006978 NodeName:default-k8s-diff-port-006978 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:13:35.616087  309669 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-006978"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
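The generated kubeadm config above is a single YAML stream holding four documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to split and inspect such a stream in Go, using gopkg.in/yaml.v3; this is an illustrative reader, not part of minikube:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        // yaml.Decoder yields one document per Decode call until io.EOF.
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break
                }
                fmt.Fprintln(os.Stderr, err)
                os.Exit(1)
            }
            fmt.Println(doc.APIVersion, doc.Kind)
        }
    }
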
	I0916 11:13:35.616154  309669 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:13:35.624744  309669 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:13:35.624819  309669 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:13:35.633301  309669 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I0916 11:13:35.651821  309669 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:13:35.669840  309669 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2182 bytes)
	I0916 11:13:35.687161  309669 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:13:35.690416  309669 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
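Both /etc/hosts rewrites in this run (host.minikube.internal at 11:13:35.502 and control-plane.minikube.internal here) follow the same idempotent pattern: drop any existing line ending in the name, then append the fresh mapping. A Go rendering of that pattern with hypothetical names, for illustration only:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost drops any line ending in "\t<name>" and appends "<ip>\t<name>",
    // mirroring the { grep -v ...; echo ...; } > tmp; cp tmp /etc/hosts shell.
    func pinHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale entry for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := pinHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
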
	I0916 11:13:35.702100  309669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:13:35.787301  309669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:13:35.800886  309669 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978 for IP: 192.168.76.2
	I0916 11:13:35.800907  309669 certs.go:194] generating shared ca certs ...
	I0916 11:13:35.800921  309669 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:13:35.801078  309669 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:13:35.801123  309669 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:13:35.801136  309669 certs.go:256] generating profile certs ...
	I0916 11:13:35.801216  309669 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.key
	I0916 11:13:35.801273  309669 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key.9826bbf6
	I0916 11:13:35.801309  309669 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.key
	I0916 11:13:35.801406  309669 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:13:35.801433  309669 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:13:35.801442  309669 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:13:35.801467  309669 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:13:35.801529  309669 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:13:35.801564  309669 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:13:35.801602  309669 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:13:35.802150  309669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:13:35.828334  309669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:13:35.854199  309669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:13:35.925558  309669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:13:35.955868  309669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0916 11:13:35.981005  309669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:13:36.025480  309669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:13:36.051417  309669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:13:36.075511  309669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:13:36.099022  309669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:13:36.121694  309669 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:13:36.143786  309669 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:13:36.161826  309669 ssh_runner.go:195] Run: openssl version
	I0916 11:13:36.166967  309669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:13:36.176703  309669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:13:36.180169  309669 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:13:36.180234  309669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:13:36.187092  309669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:13:36.196044  309669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:13:36.205079  309669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:13:36.208403  309669 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:13:36.208450  309669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:13:36.214676  309669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 11:13:36.223144  309669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:13:36.232899  309669 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:13:36.236791  309669 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:13:36.236843  309669 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:13:36.243520  309669 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
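The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention: `openssl x509 -hash -noout` prints the hash of the certificate's subject, and a `<hash>.0` link in /etc/ssl/certs lets TLS libraries locate the CA by hash. A sketch mirroring the log's hash-then-link pair; the helper name is illustrative:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of certPath and
    // creates the "<hash>.0" symlink in certsDir, like "ln -fs" in the log.
    func linkBySubjectHash(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // emulate ln -fs: replace an existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
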
	I0916 11:13:36.252296  309669 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:13:36.255944  309669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0916 11:13:36.262274  309669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0916 11:13:36.268633  309669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0916 11:13:36.275492  309669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0916 11:13:36.282621  309669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0916 11:13:36.289303  309669 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0916 11:13:36.296273  309669 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-006978 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-006978 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:13:36.296393  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:13:36.296440  309669 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:13:36.332460  309669 cri.go:89] found id: "308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da"
	I0916 11:13:36.332485  309669 cri.go:89] found id: "6f355202fdbbeadd28797b1e19193703a5ee560921eef405227ef9296cb15a44"
	I0916 11:13:36.332491  309669 cri.go:89] found id: "3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3"
	I0916 11:13:36.332497  309669 cri.go:89] found id: "947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce"
	I0916 11:13:36.332500  309669 cri.go:89] found id: "06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09"
	I0916 11:13:36.332504  309669 cri.go:89] found id: "3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5"
	I0916 11:13:36.332509  309669 cri.go:89] found id: "a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e"
	I0916 11:13:36.332512  309669 cri.go:89] found id: "bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862"
	I0916 11:13:36.332517  309669 cri.go:89] found id: ""
	I0916 11:13:36.332569  309669 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0916 11:13:36.346716  309669 cri.go:116] JSON = null
	W0916 11:13:36.346768  309669 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
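The warning at 11:13:36.346 is benign: `runc list -f json` prints `null` when no containers are paused, which decodes to an empty list, while `crictl ps` still reports 8 containers. A sketch of that probe, using the same runc root as the log; the struct fields shown are the subset this example reads:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type runcContainer struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    // listRunc decodes "runc list -f json"; the literal output "null"
    // unmarshals into a nil slice rather than an error, matching the
    // "JSON = null ... list returned 0 containers" lines above.
    func listRunc(root string) ([]runcContainer, error) {
        out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
        if err != nil {
            return nil, err
        }
        var cs []runcContainer
        if err := json.Unmarshal(out, &cs); err != nil {
            return nil, err
        }
        return cs, nil
    }

    func main() {
        cs, err := listRunc("/run/containerd/runc/k8s.io")
        fmt.Println(len(cs), err)
    }
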
	I0916 11:13:36.346829  309669 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:13:36.355988  309669 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0916 11:13:36.356086  309669 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0916 11:13:36.356140  309669 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0916 11:13:36.364703  309669 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0916 11:13:36.365633  309669 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-006978" does not appear in /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:13:36.366331  309669 kubeconfig.go:62] /home/jenkins/minikube-integration/19651-3687/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-006978" cluster setting kubeconfig missing "default-k8s-diff-port-006978" context setting]
	I0916 11:13:36.367190  309669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:13:36.369117  309669 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0916 11:13:36.383040  309669 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0916 11:13:36.383135  309669 kubeadm.go:597] duration metric: took 27.031445ms to restartPrimaryControlPlane
	I0916 11:13:36.383158  309669 kubeadm.go:394] duration metric: took 86.894418ms to StartCluster
	I0916 11:13:36.383199  309669 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:13:36.383271  309669 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:13:36.385760  309669 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:13:36.386015  309669 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:13:36.386179  309669 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:13:36.386309  309669 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-006978"
	I0916 11:13:36.386340  309669 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-006978"
	I0916 11:13:36.386354  309669 config.go:182] Loaded profile config "default-k8s-diff-port-006978": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	W0916 11:13:36.386361  309669 addons.go:243] addon storage-provisioner should already be in state true
	I0916 11:13:36.386395  309669 host.go:66] Checking if "default-k8s-diff-port-006978" exists ...
	I0916 11:13:36.386393  309669 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-006978"
	I0916 11:13:36.386413  309669 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-006978"
	I0916 11:13:36.386423  309669 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-006978"
	I0916 11:13:36.386434  309669 addons.go:234] Setting addon dashboard=true in "default-k8s-diff-port-006978"
	W0916 11:13:36.386449  309669 addons.go:243] addon dashboard should already be in state true
	I0916 11:13:36.386485  309669 host.go:66] Checking if "default-k8s-diff-port-006978" exists ...
	I0916 11:13:36.386420  309669 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-006978"
	I0916 11:13:36.386545  309669 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-006978"
	W0916 11:13:36.386561  309669 addons.go:243] addon metrics-server should already be in state true
	I0916 11:13:36.386600  309669 host.go:66] Checking if "default-k8s-diff-port-006978" exists ...
	I0916 11:13:36.386815  309669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:13:36.386898  309669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:13:36.386978  309669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:13:36.387068  309669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
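Each `docker container inspect --format={{.State.Status}}` above renders a Go template over the container's inspect JSON, so minikube gets back just the state string ("running", "exited", ...) instead of the whole document. A sketch of the same call (profile name taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Equivalent of the cli_runner calls above: render only .State.Status
    	// from the inspect document, e.g. "running".
    	out, err := exec.Command("docker", "container", "inspect",
    		"default-k8s-diff-port-006978", "--format", "{{.State.Status}}").Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println(strings.TrimSpace(string(out)))
    }
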
	I0916 11:13:36.388962  309669 out.go:177] * Verifying Kubernetes components...
	I0916 11:13:36.390348  309669 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:13:36.412452  309669 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0916 11:13:36.412528  309669 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0916 11:13:36.413333  309669 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-006978"
	W0916 11:13:36.413352  309669 addons.go:243] addon default-storageclass should already be in state true
	I0916 11:13:36.413374  309669 host.go:66] Checking if "default-k8s-diff-port-006978" exists ...
	I0916 11:13:36.413685  309669 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-006978 --format={{.State.Status}}
	I0916 11:13:36.414153  309669 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0916 11:13:36.414173  309669 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0916 11:13:36.414225  309669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:13:36.415126  309669 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0916 11:13:36.416178  309669 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:13:33.276185  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:35.276972  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:35.238786  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:37.240201  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:36.416183  309669 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0916 11:13:36.416239  309669 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0916 11:13:36.416288  309669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:13:36.417328  309669 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:13:36.417353  309669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:13:36.417404  309669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
	I0916 11:13:36.444033  309669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:13:36.448461  309669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:13:36.450257  309669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:13:36.451476  309669 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:13:36.451496  309669 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:13:36.451551  309669 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-006978
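The nested template in these calls indexes `.NetworkSettings.Ports` by the "22/tcp" key and takes the first binding's HostPort; that published port (33093 here) is what the ssh clients below dial. A sketch, trimming the single quotes that the `-f "'...'"` wrapper adds around the value:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// index twice: first into the Ports map by key "22/tcp", then into the
    	// resulting slice of bindings; .HostPort is the published host port.
    	format := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", format, "default-k8s-diff-port-006978").Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	port := strings.Trim(strings.TrimSpace(string(out)), "'")
    	fmt.Println("ssh port:", port)
    }
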
	I0916 11:13:36.476773  309669 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/default-k8s-diff-port-006978/id_rsa Username:docker}
	I0916 11:13:36.721812  309669 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:13:36.743447  309669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:13:36.743887  309669 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-006978" to be "Ready" ...
	I0916 11:13:36.750385  309669 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0916 11:13:36.750412  309669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0916 11:13:36.752092  309669 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0916 11:13:36.752113  309669 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0916 11:13:36.845618  309669 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0916 11:13:36.845660  309669 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0916 11:13:36.847950  309669 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0916 11:13:36.847971  309669 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0916 11:13:36.928823  309669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:13:36.944638  309669 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0916 11:13:36.944669  309669 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0916 11:13:37.020296  309669 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:13:37.020328  309669 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0916 11:13:37.120543  309669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0916 11:13:37.122304  309669 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0916 11:13:37.122334  309669 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0916 11:13:37.130025  309669 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:13:37.130060  309669 retry.go:31] will retry after 206.304668ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
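The apply fails because nothing is listening on localhost:8444 yet: validation needs the OpenAPI schema from the apiserver, which is still coming up after the restart. retry.go simply reruns the command after a short delay (and the attempts at 11:13:37.33 below switch to `kubectl apply --force`). A generic retry loop in that spirit; the backoff schedule here is illustrative, not minikube's actual one:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry reruns kubectl apply until it succeeds or attempts run out,
    // mirroring the retry.go behaviour in the log (backoff values are made up).
    func applyWithRetry(manifest string, attempts int) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = exec.Command("kubectl", "apply", "-f", manifest).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(time.Duration(i+1) * 200 * time.Millisecond)
    	}
    	return fmt.Errorf("apply %s failed after %d attempts: %w", manifest, attempts, err)
    }

    func main() {
    	if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
    		fmt.Println(err)
    	}
    }
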
	I0916 11:13:37.220376  309669 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0916 11:13:37.220411  309669 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0916 11:13:37.329016  309669 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0916 11:13:37.329048  309669 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0916 11:13:37.332648  309669 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:13:37.332689  309669 retry.go:31] will retry after 135.300045ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0916 11:13:37.336809  309669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:13:37.432580  309669 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0916 11:13:37.432646  309669 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0916 11:13:37.468887  309669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:13:37.532323  309669 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0916 11:13:37.532354  309669 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0916 11:13:37.632132  309669 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:13:37.632158  309669 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0916 11:13:37.721876  309669 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0916 11:13:36.149705  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:38.649998  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:37.777152  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:39.777460  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:39.954156  309669 node_ready.go:49] node "default-k8s-diff-port-006978" has status "Ready":"True"
	I0916 11:13:39.954193  309669 node_ready.go:38] duration metric: took 3.210253397s for node "default-k8s-diff-port-006978" to be "Ready" ...
	I0916 11:13:39.954207  309669 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:13:39.963883  309669 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:40.043197  309669 pod_ready.go:93] pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:40.043225  309669 pod_ready.go:82] duration metric: took 79.31305ms for pod "coredns-7c65d6cfc9-sc74v" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:40.043239  309669 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:40.135465  309669 pod_ready.go:93] pod "etcd-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:40.135576  309669 pod_ready.go:82] duration metric: took 92.324719ms for pod "etcd-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:40.135623  309669 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:40.142443  309669 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:40.142499  309669 pod_ready.go:82] duration metric: took 6.852472ms for pod "kube-apiserver-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:40.142521  309669 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:40.224793  309669 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:40.224826  309669 pod_ready.go:82] duration metric: took 82.292301ms for pod "kube-controller-manager-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:40.224839  309669 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2mcbv" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:40.231334  309669 pod_ready.go:93] pod "kube-proxy-2mcbv" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:40.231377  309669 pod_ready.go:82] duration metric: took 6.519811ms for pod "kube-proxy-2mcbv" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:40.231390  309669 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:40.622879  309669 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-006978" in "kube-system" namespace has status "Ready":"True"
	I0916 11:13:40.622914  309669 pod_ready.go:82] duration metric: took 391.513701ms for pod "kube-scheduler-default-k8s-diff-port-006978" in "kube-system" namespace to be "Ready" ...
	I0916 11:13:40.622928  309669 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace to be "Ready" ...
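Each pod_ready wait above polls the pod until its Ready condition reports True or the per-pod budget expires. A stdlib sketch of the same wait driven through kubectl's JSONPath output (namespace and pod name taken from the log; the real code talks to the API directly):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitPodReady polls the pod's Ready condition until it is "True" or the
    // timeout elapses, roughly what pod_ready.go does through the API above.
    func waitPodReady(ns, name string, timeout time.Duration) error {
    	jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
    			"-o", jsonpath).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "True" {
    			return nil
    		}
    		time.Sleep(2 * time.Second)
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    func main() {
    	if err := waitPodReady("kube-system", "coredns-7c65d6cfc9-sc74v", 6*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
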
	I0916 11:13:42.428857  309669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.091995705s)
	I0916 11:13:42.428918  309669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.9600041s)
	I0916 11:13:42.429445  309669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.308843094s)
	I0916 11:13:42.429568  309669 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-006978"
	I0916 11:13:42.629884  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:42.721455  309669 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.999531228s)
	I0916 11:13:42.723694  309669 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-006978 addons enable metrics-server
	
	I0916 11:13:42.725711  309669 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0916 11:13:39.741173  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:42.239604  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:42.727176  309669 addons.go:510] duration metric: took 6.340999258s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0916 11:13:41.149712  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:43.649763  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:42.276445  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:44.277029  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:46.277361  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:44.738302  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:47.238066  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:45.128384  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:47.129697  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:49.630020  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:46.148603  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:48.149781  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:48.777520  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:50.778155  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:49.238589  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:51.739301  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:52.129438  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:54.628237  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:50.150159  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:52.649784  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:53.277620  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:55.776792  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:54.241776  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:56.737827  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:58.745677  274695 pod_ready.go:103] pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:56.628830  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:59.129154  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:55.150289  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:57.648998  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:59.649386  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:57.777222  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:59.777303  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:13:59.738311  274695 pod_ready.go:82] duration metric: took 4m0.005638102s for pod "metrics-server-6867b74b74-zw8sx" in "kube-system" namespace to be "Ready" ...
	E0916 11:13:59.738341  274695 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0916 11:13:59.738351  274695 pod_ready.go:39] duration metric: took 4m0.608804427s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
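This is the other way a wait can end: metrics-server never became Ready, so the 4m extra-wait context expires with "context deadline exceeded" (expected in these tests, which point the addon at the unpullable image fake.domain/registry.k8s.io/echoserver, as seen earlier in the log). The pattern reduces to a context-bounded poll:

    package main

    import (
    	"context"
    	"fmt"
    	"time"
    )

    // waitUntil polls check until it returns true or ctx expires; the log's
    // "context deadline exceeded" corresponds to the ctx.Done() branch.
    func waitUntil(ctx context.Context, interval time.Duration, check func() bool) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		if check() {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err() // "context deadline exceeded"
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
    	defer cancel()
    	err := waitUntil(ctx, 2*time.Second, func() bool { return false }) // never Ready
    	fmt.Println(err)
    }
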
	I0916 11:13:59.738394  274695 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:13:59.738429  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:13:59.738493  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:13:59.773717  274695 cri.go:89] found id: "10ad4718e97218c1ce95ab1f27ef0f499520ec8391a382c341a385aafc4e5027"
	I0916 11:13:59.773741  274695 cri.go:89] found id: "5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817"
	I0916 11:13:59.773747  274695 cri.go:89] found id: ""
	I0916 11:13:59.773755  274695 logs.go:276] 2 containers: [10ad4718e97218c1ce95ab1f27ef0f499520ec8391a382c341a385aafc4e5027 5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817]
	I0916 11:13:59.773808  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:13:59.778088  274695 ssh_runner.go:195] Run: which crictl
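The "found id" stanzas come from `crictl ps -a --quiet --name=<pattern>`, which prints one container ID per line; there are two IDs per component here because `-a` includes the exited pre-restart containers alongside the running ones. A sketch of the lookup:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs returns all container IDs (running or exited) whose name
    // matches pattern, one per line of `crictl ps -a --quiet` output.
    func containerIDs(pattern string) ([]string, error) {
    	out, err := exec.Command("sudo", "crictl", "ps", "-a",
    		"--quiet", "--name="+pattern).Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ids, err := containerIDs("kube-apiserver")
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	fmt.Printf("%d containers: %v\n", len(ids), ids)
    }
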
	I0916 11:13:59.781624  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:13:59.781694  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:13:59.815450  274695 cri.go:89] found id: "88bafae096207a5768a4f8dd93ca28361db384719eb695eac05b48c800386a05"
	I0916 11:13:59.815478  274695 cri.go:89] found id: "0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d"
	I0916 11:13:59.815483  274695 cri.go:89] found id: ""
	I0916 11:13:59.815493  274695 logs.go:276] 2 containers: [88bafae096207a5768a4f8dd93ca28361db384719eb695eac05b48c800386a05 0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d]
	I0916 11:13:59.815541  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:13:59.818986  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:13:59.822724  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:13:59.822792  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:13:59.860269  274695 cri.go:89] found id: "d846e6651f0b5a0d4ae1295abfeddc533b22e95c3ba913c4fe997264fafb3790"
	I0916 11:13:59.860294  274695 cri.go:89] found id: "30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d"
	I0916 11:13:59.860300  274695 cri.go:89] found id: ""
	I0916 11:13:59.860308  274695 logs.go:276] 2 containers: [d846e6651f0b5a0d4ae1295abfeddc533b22e95c3ba913c4fe997264fafb3790 30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d]
	I0916 11:13:59.860366  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:13:59.863818  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:13:59.867186  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:13:59.867261  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:13:59.901475  274695 cri.go:89] found id: "e5e8d406dc25bdcb4fd4237dfc1f5e1fa2926422f84d0261628f272925fd1cd3"
	I0916 11:13:59.901498  274695 cri.go:89] found id: "5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69"
	I0916 11:13:59.901502  274695 cri.go:89] found id: ""
	I0916 11:13:59.901508  274695 logs.go:276] 2 containers: [e5e8d406dc25bdcb4fd4237dfc1f5e1fa2926422f84d0261628f272925fd1cd3 5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69]
	I0916 11:13:59.901558  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:13:59.905021  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:13:59.908106  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:13:59.908156  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:13:59.942205  274695 cri.go:89] found id: "e7f26c19553d969b1a9b5681d3f324438c37c93225102701c76043b7e78ba74e"
	I0916 11:13:59.942233  274695 cri.go:89] found id: "49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04"
	I0916 11:13:59.942239  274695 cri.go:89] found id: ""
	I0916 11:13:59.942248  274695 logs.go:276] 2 containers: [e7f26c19553d969b1a9b5681d3f324438c37c93225102701c76043b7e78ba74e 49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04]
	I0916 11:13:59.942316  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:13:59.946334  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:13:59.949808  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:13:59.949881  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:13:59.982513  274695 cri.go:89] found id: "0ce842d576ac95e9a9812149dc6915d43b35aa6a7456fb7b3e051bceb7f8c4fc"
	I0916 11:13:59.982533  274695 cri.go:89] found id: "a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969"
	I0916 11:13:59.982536  274695 cri.go:89] found id: ""
	I0916 11:13:59.982543  274695 logs.go:276] 2 containers: [0ce842d576ac95e9a9812149dc6915d43b35aa6a7456fb7b3e051bceb7f8c4fc a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969]
	I0916 11:13:59.982610  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:13:59.986392  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:13:59.989680  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:13:59.989736  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:14:00.023031  274695 cri.go:89] found id: "9dd702b61dc808d16191c089c084e5383f48e949a2c2e29a816b32865282edc5"
	I0916 11:14:00.023057  274695 cri.go:89] found id: "b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a"
	I0916 11:14:00.023063  274695 cri.go:89] found id: ""
	I0916 11:14:00.023070  274695 logs.go:276] 2 containers: [9dd702b61dc808d16191c089c084e5383f48e949a2c2e29a816b32865282edc5 b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a]
	I0916 11:14:00.023115  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:00.026940  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:00.030190  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:14:00.030258  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:14:00.065357  274695 cri.go:89] found id: "52380cf083b1ab1dce8f2f0bdf9ff84e39b7a8e2ab7f38a2acbc07f61860618c"
	I0916 11:14:00.065387  274695 cri.go:89] found id: ""
	I0916 11:14:00.065398  274695 logs.go:276] 1 containers: [52380cf083b1ab1dce8f2f0bdf9ff84e39b7a8e2ab7f38a2acbc07f61860618c]
	I0916 11:14:00.065453  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:00.069143  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:14:00.069217  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:14:00.102823  274695 cri.go:89] found id: "aff7b51ea5d554f201a1d75f00a8b5836f2faea86fffc4675990367ad090f812"
	I0916 11:14:00.102845  274695 cri.go:89] found id: "89e6bdaa9e7031f477cb7b7d7be9bac9229ff2b0f07eae0e9b2219de39fad023"
	I0916 11:14:00.102849  274695 cri.go:89] found id: ""
	I0916 11:14:00.102858  274695 logs.go:276] 2 containers: [aff7b51ea5d554f201a1d75f00a8b5836f2faea86fffc4675990367ad090f812 89e6bdaa9e7031f477cb7b7d7be9bac9229ff2b0f07eae0e9b2219de39fad023]
	I0916 11:14:00.102909  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:00.106770  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:00.110222  274695 logs.go:123] Gathering logs for kube-apiserver [10ad4718e97218c1ce95ab1f27ef0f499520ec8391a382c341a385aafc4e5027] ...
	I0916 11:14:00.110253  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10ad4718e97218c1ce95ab1f27ef0f499520ec8391a382c341a385aafc4e5027"
	I0916 11:14:00.154191  274695 logs.go:123] Gathering logs for kube-scheduler [e5e8d406dc25bdcb4fd4237dfc1f5e1fa2926422f84d0261628f272925fd1cd3] ...
	I0916 11:14:00.154222  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5e8d406dc25bdcb4fd4237dfc1f5e1fa2926422f84d0261628f272925fd1cd3"
	I0916 11:14:00.188329  274695 logs.go:123] Gathering logs for kube-scheduler [5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69] ...
	I0916 11:14:00.188361  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69"
	I0916 11:14:00.231393  274695 logs.go:123] Gathering logs for kindnet [9dd702b61dc808d16191c089c084e5383f48e949a2c2e29a816b32865282edc5] ...
	I0916 11:14:00.231426  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dd702b61dc808d16191c089c084e5383f48e949a2c2e29a816b32865282edc5"
	I0916 11:14:00.270158  274695 logs.go:123] Gathering logs for kubernetes-dashboard [52380cf083b1ab1dce8f2f0bdf9ff84e39b7a8e2ab7f38a2acbc07f61860618c] ...
	I0916 11:14:00.270197  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52380cf083b1ab1dce8f2f0bdf9ff84e39b7a8e2ab7f38a2acbc07f61860618c"
	I0916 11:14:00.305345  274695 logs.go:123] Gathering logs for storage-provisioner [aff7b51ea5d554f201a1d75f00a8b5836f2faea86fffc4675990367ad090f812] ...
	I0916 11:14:00.305377  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aff7b51ea5d554f201a1d75f00a8b5836f2faea86fffc4675990367ad090f812"
	I0916 11:14:00.339079  274695 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:14:00.339103  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:14:00.434992  274695 logs.go:123] Gathering logs for etcd [0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d] ...
	I0916 11:14:00.435038  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d"
	I0916 11:14:00.476537  274695 logs.go:123] Gathering logs for coredns [d846e6651f0b5a0d4ae1295abfeddc533b22e95c3ba913c4fe997264fafb3790] ...
	I0916 11:14:00.476574  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d846e6651f0b5a0d4ae1295abfeddc533b22e95c3ba913c4fe997264fafb3790"
	I0916 11:14:00.512422  274695 logs.go:123] Gathering logs for kube-proxy [e7f26c19553d969b1a9b5681d3f324438c37c93225102701c76043b7e78ba74e] ...
	I0916 11:14:00.512451  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7f26c19553d969b1a9b5681d3f324438c37c93225102701c76043b7e78ba74e"
	I0916 11:14:00.547368  274695 logs.go:123] Gathering logs for kube-proxy [49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04] ...
	I0916 11:14:00.547401  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04"
	I0916 11:14:00.580473  274695 logs.go:123] Gathering logs for kindnet [b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a] ...
	I0916 11:14:00.580509  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a"
	I0916 11:14:00.613457  274695 logs.go:123] Gathering logs for dmesg ...
	I0916 11:14:00.613483  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:14:00.637765  274695 logs.go:123] Gathering logs for etcd [88bafae096207a5768a4f8dd93ca28361db384719eb695eac05b48c800386a05] ...
	I0916 11:14:00.637801  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88bafae096207a5768a4f8dd93ca28361db384719eb695eac05b48c800386a05"
	I0916 11:14:00.681622  274695 logs.go:123] Gathering logs for kube-controller-manager [0ce842d576ac95e9a9812149dc6915d43b35aa6a7456fb7b3e051bceb7f8c4fc] ...
	I0916 11:14:00.681675  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ce842d576ac95e9a9812149dc6915d43b35aa6a7456fb7b3e051bceb7f8c4fc"
	I0916 11:14:00.735696  274695 logs.go:123] Gathering logs for kube-controller-manager [a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969] ...
	I0916 11:14:00.735747  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969"
	I0916 11:14:00.786506  274695 logs.go:123] Gathering logs for storage-provisioner [89e6bdaa9e7031f477cb7b7d7be9bac9229ff2b0f07eae0e9b2219de39fad023] ...
	I0916 11:14:00.786552  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89e6bdaa9e7031f477cb7b7d7be9bac9229ff2b0f07eae0e9b2219de39fad023"
	I0916 11:14:00.819919  274695 logs.go:123] Gathering logs for container status ...
	I0916 11:14:00.819948  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:14:00.858961  274695 logs.go:123] Gathering logs for kubelet ...
	I0916 11:14:00.858991  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:14:00.917497  274695 logs.go:123] Gathering logs for kube-apiserver [5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817] ...
	I0916 11:14:00.917532  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817"
	I0916 11:14:00.962543  274695 logs.go:123] Gathering logs for coredns [30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d] ...
	I0916 11:14:00.962582  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d"
	I0916 11:14:00.998971  274695 logs.go:123] Gathering logs for containerd ...
	I0916 11:14:00.998999  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
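The gathering loop alternates between two kinds of source: per-container logs via `crictl logs --tail 400 <id>` and systemd units via `journalctl -u <unit> -n 400`, all wrapped in `bash -c` with sudo exactly as the ssh_runner lines show. A compact sketch (the container ID is a placeholder; real IDs come from the crictl lookups above):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(cmdline string) {
    	// Everything goes through `bash -c` with sudo, as in the ssh_runner lines.
    	out, err := exec.Command("/bin/bash", "-c", "sudo "+cmdline).CombinedOutput()
    	fmt.Printf("=== %s (err=%v) ===\n%s\n", cmdline, err, out)
    }

    func main() {
    	// <container-id> is a placeholder for an ID found via `crictl ps --quiet`.
    	run("/usr/bin/crictl logs --tail 400 <container-id>")
    	run("journalctl -u kubelet -n 400")
    	run("journalctl -u containerd -n 400")
    }
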
	I0916 11:14:03.551265  274695 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:14:03.562737  274695 api_server.go:72] duration metric: took 4m8.257566757s to wait for apiserver process to appear ...
	I0916 11:14:03.562772  274695 api_server.go:88] waiting for apiserver healthz status ...
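`pgrep -xnf` matches the pattern against the full command line (-f), requires the whole line to match (-x), and returns only the newest match (-n); the apiserver process wait succeeds as soon as that prints a PID. A polling sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches a process's full
    // command line, as in the wait above (poll interval is illustrative).
    func waitForProcess(pattern string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", pattern).Output()
    		if err == nil {
    			return strings.TrimSpace(string(out)), nil // PID of newest match
    		}
    		time.Sleep(time.Second)
    	}
    	return "", fmt.Errorf("no process matching %q within %s", pattern, timeout)
    }

    func main() {
    	pid, err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute)
    	fmt.Println(pid, err)
    }
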
	I0916 11:14:03.562816  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:14:03.562872  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:14:03.595601  274695 cri.go:89] found id: "10ad4718e97218c1ce95ab1f27ef0f499520ec8391a382c341a385aafc4e5027"
	I0916 11:14:03.595628  274695 cri.go:89] found id: "5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817"
	I0916 11:14:03.595634  274695 cri.go:89] found id: ""
	I0916 11:14:03.595641  274695 logs.go:276] 2 containers: [10ad4718e97218c1ce95ab1f27ef0f499520ec8391a382c341a385aafc4e5027 5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817]
	I0916 11:14:03.595683  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.599282  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.602464  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:14:03.602528  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:14:03.637649  274695 cri.go:89] found id: "88bafae096207a5768a4f8dd93ca28361db384719eb695eac05b48c800386a05"
	I0916 11:14:03.637675  274695 cri.go:89] found id: "0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d"
	I0916 11:14:03.637682  274695 cri.go:89] found id: ""
	I0916 11:14:03.637691  274695 logs.go:276] 2 containers: [88bafae096207a5768a4f8dd93ca28361db384719eb695eac05b48c800386a05 0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d]
	I0916 11:14:03.637743  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.641403  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.644700  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:14:03.644758  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:14:03.682194  274695 cri.go:89] found id: "d846e6651f0b5a0d4ae1295abfeddc533b22e95c3ba913c4fe997264fafb3790"
	I0916 11:14:03.682217  274695 cri.go:89] found id: "30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d"
	I0916 11:14:03.682222  274695 cri.go:89] found id: ""
	I0916 11:14:03.682228  274695 logs.go:276] 2 containers: [d846e6651f0b5a0d4ae1295abfeddc533b22e95c3ba913c4fe997264fafb3790 30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d]
	I0916 11:14:03.682268  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.685711  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.689141  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:14:03.689197  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:14:03.722232  274695 cri.go:89] found id: "e5e8d406dc25bdcb4fd4237dfc1f5e1fa2926422f84d0261628f272925fd1cd3"
	I0916 11:14:03.722252  274695 cri.go:89] found id: "5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69"
	I0916 11:14:03.722255  274695 cri.go:89] found id: ""
	I0916 11:14:03.722262  274695 logs.go:276] 2 containers: [e5e8d406dc25bdcb4fd4237dfc1f5e1fa2926422f84d0261628f272925fd1cd3 5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69]
	I0916 11:14:03.722305  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.726345  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.729713  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:14:03.729778  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:14:03.762341  274695 cri.go:89] found id: "e7f26c19553d969b1a9b5681d3f324438c37c93225102701c76043b7e78ba74e"
	I0916 11:14:03.762364  274695 cri.go:89] found id: "49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04"
	I0916 11:14:03.762369  274695 cri.go:89] found id: ""
	I0916 11:14:03.762378  274695 logs.go:276] 2 containers: [e7f26c19553d969b1a9b5681d3f324438c37c93225102701c76043b7e78ba74e 49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04]
	I0916 11:14:03.762431  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.765849  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.769096  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:14:03.769164  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:14:01.628022  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:03.628722  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:02.149111  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:04.149753  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:02.276719  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:04.277992  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:06.776489  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:03.802511  274695 cri.go:89] found id: "0ce842d576ac95e9a9812149dc6915d43b35aa6a7456fb7b3e051bceb7f8c4fc"
	I0916 11:14:03.802543  274695 cri.go:89] found id: "a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969"
	I0916 11:14:03.802551  274695 cri.go:89] found id: ""
	I0916 11:14:03.802564  274695 logs.go:276] 2 containers: [0ce842d576ac95e9a9812149dc6915d43b35aa6a7456fb7b3e051bceb7f8c4fc a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969]
	I0916 11:14:03.802622  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.806414  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.809693  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:14:03.809745  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:14:03.841643  274695 cri.go:89] found id: "9dd702b61dc808d16191c089c084e5383f48e949a2c2e29a816b32865282edc5"
	I0916 11:14:03.841665  274695 cri.go:89] found id: "b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a"
	I0916 11:14:03.841669  274695 cri.go:89] found id: ""
	I0916 11:14:03.841676  274695 logs.go:276] 2 containers: [9dd702b61dc808d16191c089c084e5383f48e949a2c2e29a816b32865282edc5 b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a]
	I0916 11:14:03.841726  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.845221  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.848427  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:14:03.848486  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:14:03.882040  274695 cri.go:89] found id: "52380cf083b1ab1dce8f2f0bdf9ff84e39b7a8e2ab7f38a2acbc07f61860618c"
	I0916 11:14:03.882063  274695 cri.go:89] found id: ""
	I0916 11:14:03.882073  274695 logs.go:276] 1 containers: [52380cf083b1ab1dce8f2f0bdf9ff84e39b7a8e2ab7f38a2acbc07f61860618c]
	I0916 11:14:03.882129  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.885499  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:14:03.885595  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:14:03.919092  274695 cri.go:89] found id: "aff7b51ea5d554f201a1d75f00a8b5836f2faea86fffc4675990367ad090f812"
	I0916 11:14:03.919118  274695 cri.go:89] found id: "89e6bdaa9e7031f477cb7b7d7be9bac9229ff2b0f07eae0e9b2219de39fad023"
	I0916 11:14:03.919122  274695 cri.go:89] found id: ""
	I0916 11:14:03.919129  274695 logs.go:276] 2 containers: [aff7b51ea5d554f201a1d75f00a8b5836f2faea86fffc4675990367ad090f812 89e6bdaa9e7031f477cb7b7d7be9bac9229ff2b0f07eae0e9b2219de39fad023]
	I0916 11:14:03.919174  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.922664  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:03.926005  274695 logs.go:123] Gathering logs for kube-apiserver [5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817] ...
	I0916 11:14:03.926033  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817"
	I0916 11:14:03.972127  274695 logs.go:123] Gathering logs for kube-scheduler [e5e8d406dc25bdcb4fd4237dfc1f5e1fa2926422f84d0261628f272925fd1cd3] ...
	I0916 11:14:03.972173  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5e8d406dc25bdcb4fd4237dfc1f5e1fa2926422f84d0261628f272925fd1cd3"
	I0916 11:14:04.009771  274695 logs.go:123] Gathering logs for kube-controller-manager [a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969] ...
	I0916 11:14:04.009800  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969"
	I0916 11:14:04.058264  274695 logs.go:123] Gathering logs for kubernetes-dashboard [52380cf083b1ab1dce8f2f0bdf9ff84e39b7a8e2ab7f38a2acbc07f61860618c] ...
	I0916 11:14:04.058306  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52380cf083b1ab1dce8f2f0bdf9ff84e39b7a8e2ab7f38a2acbc07f61860618c"
	I0916 11:14:04.092526  274695 logs.go:123] Gathering logs for kube-apiserver [10ad4718e97218c1ce95ab1f27ef0f499520ec8391a382c341a385aafc4e5027] ...
	I0916 11:14:04.092558  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10ad4718e97218c1ce95ab1f27ef0f499520ec8391a382c341a385aafc4e5027"
	I0916 11:14:04.134933  274695 logs.go:123] Gathering logs for kindnet [9dd702b61dc808d16191c089c084e5383f48e949a2c2e29a816b32865282edc5] ...
	I0916 11:14:04.134976  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dd702b61dc808d16191c089c084e5383f48e949a2c2e29a816b32865282edc5"
	I0916 11:14:04.173798  274695 logs.go:123] Gathering logs for kindnet [b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a] ...
	I0916 11:14:04.173828  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a"
	I0916 11:14:04.207920  274695 logs.go:123] Gathering logs for storage-provisioner [89e6bdaa9e7031f477cb7b7d7be9bac9229ff2b0f07eae0e9b2219de39fad023] ...
	I0916 11:14:04.207948  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89e6bdaa9e7031f477cb7b7d7be9bac9229ff2b0f07eae0e9b2219de39fad023"
	I0916 11:14:04.240815  274695 logs.go:123] Gathering logs for container status ...
	I0916 11:14:04.240842  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:14:04.280489  274695 logs.go:123] Gathering logs for kube-proxy [e7f26c19553d969b1a9b5681d3f324438c37c93225102701c76043b7e78ba74e] ...
	I0916 11:14:04.280525  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7f26c19553d969b1a9b5681d3f324438c37c93225102701c76043b7e78ba74e"
	I0916 11:14:04.315959  274695 logs.go:123] Gathering logs for coredns [d846e6651f0b5a0d4ae1295abfeddc533b22e95c3ba913c4fe997264fafb3790] ...
	I0916 11:14:04.315989  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d846e6651f0b5a0d4ae1295abfeddc533b22e95c3ba913c4fe997264fafb3790"
	I0916 11:14:04.352113  274695 logs.go:123] Gathering logs for kube-scheduler [5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69] ...
	I0916 11:14:04.352143  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69"
	I0916 11:14:04.395925  274695 logs.go:123] Gathering logs for kube-controller-manager [0ce842d576ac95e9a9812149dc6915d43b35aa6a7456fb7b3e051bceb7f8c4fc] ...
	I0916 11:14:04.395965  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ce842d576ac95e9a9812149dc6915d43b35aa6a7456fb7b3e051bceb7f8c4fc"
	I0916 11:14:04.445756  274695 logs.go:123] Gathering logs for dmesg ...
	I0916 11:14:04.445789  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:14:04.470623  274695 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:14:04.470662  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:14:04.561423  274695 logs.go:123] Gathering logs for etcd [88bafae096207a5768a4f8dd93ca28361db384719eb695eac05b48c800386a05] ...
	I0916 11:14:04.561495  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88bafae096207a5768a4f8dd93ca28361db384719eb695eac05b48c800386a05"
	I0916 11:14:04.599153  274695 logs.go:123] Gathering logs for etcd [0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d] ...
	I0916 11:14:04.599182  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d"
	I0916 11:14:04.642360  274695 logs.go:123] Gathering logs for coredns [30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d] ...
	I0916 11:14:04.642393  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d"
	I0916 11:14:04.677000  274695 logs.go:123] Gathering logs for kube-proxy [49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04] ...
	I0916 11:14:04.677032  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04"
	I0916 11:14:04.711391  274695 logs.go:123] Gathering logs for storage-provisioner [aff7b51ea5d554f201a1d75f00a8b5836f2faea86fffc4675990367ad090f812] ...
	I0916 11:14:04.711415  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aff7b51ea5d554f201a1d75f00a8b5836f2faea86fffc4675990367ad090f812"
	I0916 11:14:04.745642  274695 logs.go:123] Gathering logs for containerd ...
	I0916 11:14:04.745673  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:14:04.798292  274695 logs.go:123] Gathering logs for kubelet ...
	I0916 11:14:04.798332  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:14:07.355279  274695 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0916 11:14:07.360920  274695 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0916 11:14:07.362096  274695 api_server.go:141] control plane version: v1.31.1
	I0916 11:14:07.362131  274695 api_server.go:131] duration metric: took 3.799351482s to wait for apiserver health ...
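The healthz probe is a plain HTTPS GET against the apiserver, answered with the literal body "ok". The serving certificate is signed by minikube's own CA, so a standalone probe must either load that CA or skip verification; this sketch skips it for brevity (address from the log):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// The apiserver cert is signed by the cluster's own CA, so a bare probe
    	// skips verification; real code would load the cluster CA instead.
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.94.2:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz probe failed:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
    }
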
	I0916 11:14:07.362143  274695 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:14:07.362171  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:14:07.362256  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:14:07.400307  274695 cri.go:89] found id: "10ad4718e97218c1ce95ab1f27ef0f499520ec8391a382c341a385aafc4e5027"
	I0916 11:14:07.400330  274695 cri.go:89] found id: "5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817"
	I0916 11:14:07.400335  274695 cri.go:89] found id: ""
	I0916 11:14:07.400343  274695 logs.go:276] 2 containers: [10ad4718e97218c1ce95ab1f27ef0f499520ec8391a382c341a385aafc4e5027 5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817]
	I0916 11:14:07.400394  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.404513  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.408237  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:14:07.408317  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:14:07.443602  274695 cri.go:89] found id: "88bafae096207a5768a4f8dd93ca28361db384719eb695eac05b48c800386a05"
	I0916 11:14:07.443625  274695 cri.go:89] found id: "0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d"
	I0916 11:14:07.443631  274695 cri.go:89] found id: ""
	I0916 11:14:07.443638  274695 logs.go:276] 2 containers: [88bafae096207a5768a4f8dd93ca28361db384719eb695eac05b48c800386a05 0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d]
	I0916 11:14:07.443703  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.447625  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.451345  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:14:07.451419  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:14:07.489774  274695 cri.go:89] found id: "d846e6651f0b5a0d4ae1295abfeddc533b22e95c3ba913c4fe997264fafb3790"
	I0916 11:14:07.489802  274695 cri.go:89] found id: "30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d"
	I0916 11:14:07.489807  274695 cri.go:89] found id: ""
	I0916 11:14:07.489815  274695 logs.go:276] 2 containers: [d846e6651f0b5a0d4ae1295abfeddc533b22e95c3ba913c4fe997264fafb3790 30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d]
	I0916 11:14:07.489889  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.493835  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.497403  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:14:07.497467  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:14:07.531690  274695 cri.go:89] found id: "e5e8d406dc25bdcb4fd4237dfc1f5e1fa2926422f84d0261628f272925fd1cd3"
	I0916 11:14:07.531711  274695 cri.go:89] found id: "5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69"
	I0916 11:14:07.531725  274695 cri.go:89] found id: ""
	I0916 11:14:07.531744  274695 logs.go:276] 2 containers: [e5e8d406dc25bdcb4fd4237dfc1f5e1fa2926422f84d0261628f272925fd1cd3 5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69]
	I0916 11:14:07.531801  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.535408  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.539093  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:14:07.539164  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:14:07.573298  274695 cri.go:89] found id: "e7f26c19553d969b1a9b5681d3f324438c37c93225102701c76043b7e78ba74e"
	I0916 11:14:07.573318  274695 cri.go:89] found id: "49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04"
	I0916 11:14:07.573322  274695 cri.go:89] found id: ""
	I0916 11:14:07.573329  274695 logs.go:276] 2 containers: [e7f26c19553d969b1a9b5681d3f324438c37c93225102701c76043b7e78ba74e 49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04]
	I0916 11:14:07.573372  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.576911  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.580551  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:14:07.580627  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:14:07.613363  274695 cri.go:89] found id: "0ce842d576ac95e9a9812149dc6915d43b35aa6a7456fb7b3e051bceb7f8c4fc"
	I0916 11:14:07.613392  274695 cri.go:89] found id: "a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969"
	I0916 11:14:07.613398  274695 cri.go:89] found id: ""
	I0916 11:14:07.613405  274695 logs.go:276] 2 containers: [0ce842d576ac95e9a9812149dc6915d43b35aa6a7456fb7b3e051bceb7f8c4fc a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969]
	I0916 11:14:07.613448  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.616887  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.619996  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:14:07.620053  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:14:07.656652  274695 cri.go:89] found id: "9dd702b61dc808d16191c089c084e5383f48e949a2c2e29a816b32865282edc5"
	I0916 11:14:07.656673  274695 cri.go:89] found id: "b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a"
	I0916 11:14:07.656677  274695 cri.go:89] found id: ""
	I0916 11:14:07.656684  274695 logs.go:276] 2 containers: [9dd702b61dc808d16191c089c084e5383f48e949a2c2e29a816b32865282edc5 b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a]
	I0916 11:14:07.656735  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.660394  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.663632  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:14:07.663686  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:14:07.696100  274695 cri.go:89] found id: "aff7b51ea5d554f201a1d75f00a8b5836f2faea86fffc4675990367ad090f812"
	I0916 11:14:07.696126  274695 cri.go:89] found id: "89e6bdaa9e7031f477cb7b7d7be9bac9229ff2b0f07eae0e9b2219de39fad023"
	I0916 11:14:07.696132  274695 cri.go:89] found id: ""
	I0916 11:14:07.696144  274695 logs.go:276] 2 containers: [aff7b51ea5d554f201a1d75f00a8b5836f2faea86fffc4675990367ad090f812 89e6bdaa9e7031f477cb7b7d7be9bac9229ff2b0f07eae0e9b2219de39fad023]
	I0916 11:14:07.696196  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.699948  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.703704  274695 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:14:07.703831  274695 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:14:07.737908  274695 cri.go:89] found id: "52380cf083b1ab1dce8f2f0bdf9ff84e39b7a8e2ab7f38a2acbc07f61860618c"
	I0916 11:14:07.737936  274695 cri.go:89] found id: ""
	I0916 11:14:07.737948  274695 logs.go:276] 1 containers: [52380cf083b1ab1dce8f2f0bdf9ff84e39b7a8e2ab7f38a2acbc07f61860618c]
	I0916 11:14:07.738017  274695 ssh_runner.go:195] Run: which crictl
	I0916 11:14:07.741591  274695 logs.go:123] Gathering logs for dmesg ...
	I0916 11:14:07.741616  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:14:07.766475  274695 logs.go:123] Gathering logs for kube-scheduler [e5e8d406dc25bdcb4fd4237dfc1f5e1fa2926422f84d0261628f272925fd1cd3] ...
	I0916 11:14:07.766511  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5e8d406dc25bdcb4fd4237dfc1f5e1fa2926422f84d0261628f272925fd1cd3"
	I0916 11:14:07.801372  274695 logs.go:123] Gathering logs for kube-scheduler [5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69] ...
	I0916 11:14:07.801395  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69"
	I0916 11:14:07.844243  274695 logs.go:123] Gathering logs for kube-controller-manager [a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969] ...
	I0916 11:14:07.844278  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969"
	I0916 11:14:07.892754  274695 logs.go:123] Gathering logs for kube-controller-manager [0ce842d576ac95e9a9812149dc6915d43b35aa6a7456fb7b3e051bceb7f8c4fc] ...
	I0916 11:14:07.892788  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ce842d576ac95e9a9812149dc6915d43b35aa6a7456fb7b3e051bceb7f8c4fc"
	I0916 11:14:07.948346  274695 logs.go:123] Gathering logs for kindnet [9dd702b61dc808d16191c089c084e5383f48e949a2c2e29a816b32865282edc5] ...
	I0916 11:14:07.948382  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9dd702b61dc808d16191c089c084e5383f48e949a2c2e29a816b32865282edc5"
	I0916 11:14:07.986418  274695 logs.go:123] Gathering logs for kindnet [b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a] ...
	I0916 11:14:07.986455  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a"
	I0916 11:14:08.022432  274695 logs.go:123] Gathering logs for container status ...
	I0916 11:14:08.022460  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:14:08.062725  274695 logs.go:123] Gathering logs for kubelet ...
	I0916 11:14:08.062758  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:14:08.122318  274695 logs.go:123] Gathering logs for kube-apiserver [10ad4718e97218c1ce95ab1f27ef0f499520ec8391a382c341a385aafc4e5027] ...
	I0916 11:14:08.122356  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 10ad4718e97218c1ce95ab1f27ef0f499520ec8391a382c341a385aafc4e5027"
	I0916 11:14:08.166508  274695 logs.go:123] Gathering logs for kube-apiserver [5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817] ...
	I0916 11:14:08.166543  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817"
	I0916 11:14:08.208903  274695 logs.go:123] Gathering logs for coredns [30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d] ...
	I0916 11:14:08.208935  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d"
	I0916 11:14:08.241668  274695 logs.go:123] Gathering logs for storage-provisioner [aff7b51ea5d554f201a1d75f00a8b5836f2faea86fffc4675990367ad090f812] ...
	I0916 11:14:08.241693  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aff7b51ea5d554f201a1d75f00a8b5836f2faea86fffc4675990367ad090f812"
	I0916 11:14:08.277092  274695 logs.go:123] Gathering logs for storage-provisioner [89e6bdaa9e7031f477cb7b7d7be9bac9229ff2b0f07eae0e9b2219de39fad023] ...
	I0916 11:14:08.277119  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 89e6bdaa9e7031f477cb7b7d7be9bac9229ff2b0f07eae0e9b2219de39fad023"
	I0916 11:14:08.310097  274695 logs.go:123] Gathering logs for kubernetes-dashboard [52380cf083b1ab1dce8f2f0bdf9ff84e39b7a8e2ab7f38a2acbc07f61860618c] ...
	I0916 11:14:08.310126  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52380cf083b1ab1dce8f2f0bdf9ff84e39b7a8e2ab7f38a2acbc07f61860618c"
	I0916 11:14:08.344272  274695 logs.go:123] Gathering logs for containerd ...
	I0916 11:14:08.344302  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:14:08.397042  274695 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:14:08.397084  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:14:08.491675  274695 logs.go:123] Gathering logs for etcd [88bafae096207a5768a4f8dd93ca28361db384719eb695eac05b48c800386a05] ...
	I0916 11:14:08.491711  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 88bafae096207a5768a4f8dd93ca28361db384719eb695eac05b48c800386a05"
	I0916 11:14:08.529577  274695 logs.go:123] Gathering logs for etcd [0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d] ...
	I0916 11:14:08.529606  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d"
	I0916 11:14:08.569584  274695 logs.go:123] Gathering logs for coredns [d846e6651f0b5a0d4ae1295abfeddc533b22e95c3ba913c4fe997264fafb3790] ...
	I0916 11:14:08.569620  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d846e6651f0b5a0d4ae1295abfeddc533b22e95c3ba913c4fe997264fafb3790"
	I0916 11:14:08.606342  274695 logs.go:123] Gathering logs for kube-proxy [e7f26c19553d969b1a9b5681d3f324438c37c93225102701c76043b7e78ba74e] ...
	I0916 11:14:08.606381  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7f26c19553d969b1a9b5681d3f324438c37c93225102701c76043b7e78ba74e"
	I0916 11:14:08.641670  274695 logs.go:123] Gathering logs for kube-proxy [49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04] ...
	I0916 11:14:08.641705  274695 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04"
	I0916 11:14:05.629250  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:08.128429  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:06.648204  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:08.648894  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:11.184530  274695 system_pods.go:59] 9 kube-system pods found
	I0916 11:14:11.184579  274695 system_pods.go:61] "coredns-7c65d6cfc9-9zbwk" [427a37dd-9a56-455f-bd9e-3ee604164481] Running
	I0916 11:14:11.184587  274695 system_pods.go:61] "etcd-no-preload-349453" [11377b15-3f0d-463f-8b6f-8eae19108040] Running
	I0916 11:14:11.184591  274695 system_pods.go:61] "kindnet-qbh58" [b43e889b-3179-48c6-b6c4-e6c42131c50c] Running
	I0916 11:14:11.184595  274695 system_pods.go:61] "kube-apiserver-no-preload-349453" [06a64195-a484-412f-930b-310199ce4d80] Running
	I0916 11:14:11.184598  274695 system_pods.go:61] "kube-controller-manager-no-preload-349453" [f31864d9-a8cc-4d04-8f94-497ae381d9ca] Running
	I0916 11:14:11.184601  274695 system_pods.go:61] "kube-proxy-n7m28" [a0580caf-fdc3-483b-a5b9-f29db25b8ef6] Running
	I0916 11:14:11.184605  274695 system_pods.go:61] "kube-scheduler-no-preload-349453" [94505c46-f9fd-4bc6-8287-27c30e4696f0] Running
	I0916 11:14:11.184614  274695 system_pods.go:61] "metrics-server-6867b74b74-zw8sx" [ac34c3d4-46cd-404d-8aa8-7d28840fa4d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 11:14:11.184621  274695 system_pods.go:61] "storage-provisioner" [2f218f7f-9232-4d85-bd8d-6cdc6516c83f] Running
	I0916 11:14:11.184631  274695 system_pods.go:74] duration metric: took 3.822481974s to wait for pod list to return data ...
	I0916 11:14:11.184640  274695 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:14:11.187203  274695 default_sa.go:45] found service account: "default"
	I0916 11:14:11.187227  274695 default_sa.go:55] duration metric: took 2.581ms for default service account to be created ...
	I0916 11:14:11.187235  274695 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:14:11.192144  274695 system_pods.go:86] 9 kube-system pods found
	I0916 11:14:11.192173  274695 system_pods.go:89] "coredns-7c65d6cfc9-9zbwk" [427a37dd-9a56-455f-bd9e-3ee604164481] Running
	I0916 11:14:11.192180  274695 system_pods.go:89] "etcd-no-preload-349453" [11377b15-3f0d-463f-8b6f-8eae19108040] Running
	I0916 11:14:11.192184  274695 system_pods.go:89] "kindnet-qbh58" [b43e889b-3179-48c6-b6c4-e6c42131c50c] Running
	I0916 11:14:11.192188  274695 system_pods.go:89] "kube-apiserver-no-preload-349453" [06a64195-a484-412f-930b-310199ce4d80] Running
	I0916 11:14:11.192192  274695 system_pods.go:89] "kube-controller-manager-no-preload-349453" [f31864d9-a8cc-4d04-8f94-497ae381d9ca] Running
	I0916 11:14:11.192195  274695 system_pods.go:89] "kube-proxy-n7m28" [a0580caf-fdc3-483b-a5b9-f29db25b8ef6] Running
	I0916 11:14:11.192199  274695 system_pods.go:89] "kube-scheduler-no-preload-349453" [94505c46-f9fd-4bc6-8287-27c30e4696f0] Running
	I0916 11:14:11.192208  274695 system_pods.go:89] "metrics-server-6867b74b74-zw8sx" [ac34c3d4-46cd-404d-8aa8-7d28840fa4d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 11:14:11.192213  274695 system_pods.go:89] "storage-provisioner" [2f218f7f-9232-4d85-bd8d-6cdc6516c83f] Running
	I0916 11:14:11.192221  274695 system_pods.go:126] duration metric: took 4.981505ms to wait for k8s-apps to be running ...
	I0916 11:14:11.192229  274695 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:14:11.192273  274695 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:14:11.203385  274695 system_svc.go:56] duration metric: took 11.147951ms WaitForService to wait for kubelet
	I0916 11:14:11.203414  274695 kubeadm.go:582] duration metric: took 4m15.898249119s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:14:11.203436  274695 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:14:11.206397  274695 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:14:11.206434  274695 node_conditions.go:123] node cpu capacity is 8
	I0916 11:14:11.206448  274695 node_conditions.go:105] duration metric: took 3.006878ms to run NodePressure ...
	I0916 11:14:11.206460  274695 start.go:241] waiting for startup goroutines ...
	I0916 11:14:11.206467  274695 start.go:246] waiting for cluster config update ...
	I0916 11:14:11.206477  274695 start.go:255] writing updated cluster config ...
	I0916 11:14:11.206744  274695 ssh_runner.go:195] Run: rm -f paused
	I0916 11:14:11.213190  274695 out.go:177] * Done! kubectl is now configured to use "no-preload-349453" cluster and "default" namespace by default
	E0916 11:14:11.214885  274695 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
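
The trailing kubectl probe is the only failure in this otherwise clean start: "exec format error" from fork/exec means the file at /usr/local/bin/kubectl is not something the host kernel can execute (typically a binary built for the wrong architecture, or a non-binary payload at that path). minikube logs the error and carries on, since the kubeconfig is already written at that point. A minimal sketch, assuming only a throwaway temp file, that reproduces the same errno (ENOEXEC) on Linux:

    // Illustrative only: executing a file with no valid executable header
    // yields the same "exec format error" seen in the log above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        f, err := os.CreateTemp("", "not-a-binary-*")
        if err != nil {
            panic(err)
        }
        defer os.Remove(f.Name())
        f.WriteString("not an ELF header\n")
        f.Close()
        os.Chmod(f.Name(), 0o755)

        if err := exec.Command(f.Name()).Run(); err != nil {
            fmt.Println("exec failed:", err) // "fork/exec ...: exec format error"
        }
    }
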
	I0916 11:14:08.776532  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:11.277052  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:10.129356  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:12.628416  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:11.149069  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:13.149276  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:13.776954  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:15.776993  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:15.128974  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:17.629104  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:15.648465  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:17.648603  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:18.276441  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:14:20.277436  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	a3fec77855800       523cad1a4df73       57 seconds ago      Exited              dashboard-metrics-scraper   5                   37b8042cd7253       dashboard-metrics-scraper-7c96f5b85b-l7jwj
	aff7b51ea5d55       6e38f40d628db       3 minutes ago       Running             storage-provisioner         2                   a2a05b00d4414       storage-provisioner
	52380cf083b1a       07655ddf2eebe       4 minutes ago       Running             kubernetes-dashboard        0                   cf3d7f096bae7       kubernetes-dashboard-695b96c756-lmdbc
	d846e6651f0b5       c69fa2e9cbf5f       4 minutes ago       Running             coredns                     1                   8e5b599dc6d86       coredns-7c65d6cfc9-9zbwk
	89e6bdaa9e703       6e38f40d628db       4 minutes ago       Exited              storage-provisioner         1                   a2a05b00d4414       storage-provisioner
	9dd702b61dc80       12968670680f4       4 minutes ago       Running             kindnet-cni                 1                   dd0fc798fbeec       kindnet-qbh58
	e7f26c19553d9       60c005f310ff3       4 minutes ago       Running             kube-proxy                  1                   da2873e44e260       kube-proxy-n7m28
	0ce842d576ac9       175ffd71cce3d       4 minutes ago       Running             kube-controller-manager     1                   376247da34f2f       kube-controller-manager-no-preload-349453
	10ad4718e9721       6bab7719df100       4 minutes ago       Running             kube-apiserver              1                   7f79445428327       kube-apiserver-no-preload-349453
	e5e8d406dc25b       9aa1fad941575       4 minutes ago       Running             kube-scheduler              1                   e5e79ef88ddad       kube-scheduler-no-preload-349453
	88bafae096207       2e96e5913fc06       4 minutes ago       Running             etcd                        1                   6c514f96ef815       etcd-no-preload-349453
	30acbc7b45e29       c69fa2e9cbf5f       4 minutes ago       Exited              coredns                     0                   290db8b125607       coredns-7c65d6cfc9-9zbwk
	b30641ccb64e3       12968670680f4       5 minutes ago       Exited              kindnet-cni                 0                   06502caa119d4       kindnet-qbh58
	49542fa155836       60c005f310ff3       5 minutes ago       Exited              kube-proxy                  0                   0072787e29726       kube-proxy-n7m28
	a4b95a39232c2       175ffd71cce3d       5 minutes ago       Exited              kube-controller-manager     0                   8aeec0e766fdb       kube-controller-manager-no-preload-349453
	5c82d38a57c77       9aa1fad941575       5 minutes ago       Exited              kube-scheduler              0                   8200d83c8723c       kube-scheduler-no-preload-349453
	0b8b34459e371       2e96e5913fc06       5 minutes ago       Exited              etcd                        0                   151cda393a927       etcd-no-preload-349453
	5d35346ecb3ed       6bab7719df100       5 minutes ago       Exited              kube-apiserver              0                   4db1422602ab8       kube-apiserver-no-preload-349453
	
	
	==> containerd <==
	Sep 16 11:11:33 no-preload-349453 containerd[594]: time="2024-09-16T11:11:33.960976802Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" host=fake.domain
	Sep 16 11:11:33 no-preload-349453 containerd[594]: time="2024-09-16T11:11:33.962500706Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 16 11:11:33 no-preload-349453 containerd[594]: time="2024-09-16T11:11:33.962578325Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 16 11:11:57 no-preload-349453 containerd[594]: time="2024-09-16T11:11:57.942615788Z" level=info msg="CreateContainer within sandbox \"37b8042cd7253c934986b9a6288598145231a03692c1a9fd0ec95d1ac7913db5\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,}"
	Sep 16 11:11:57 no-preload-349453 containerd[594]: time="2024-09-16T11:11:57.954991403Z" level=info msg="CreateContainer within sandbox \"37b8042cd7253c934986b9a6288598145231a03692c1a9fd0ec95d1ac7913db5\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"cfdc0dfbd49939228dc94f531025a118e6c4e34863d4caac163a94206fe14cd2\""
	Sep 16 11:11:57 no-preload-349453 containerd[594]: time="2024-09-16T11:11:57.955636787Z" level=info msg="StartContainer for \"cfdc0dfbd49939228dc94f531025a118e6c4e34863d4caac163a94206fe14cd2\""
	Sep 16 11:11:57 no-preload-349453 containerd[594]: time="2024-09-16T11:11:57.998628162Z" level=info msg="StartContainer for \"cfdc0dfbd49939228dc94f531025a118e6c4e34863d4caac163a94206fe14cd2\" returns successfully"
	Sep 16 11:11:58 no-preload-349453 containerd[594]: time="2024-09-16T11:11:58.028415770Z" level=info msg="shim disconnected" id=cfdc0dfbd49939228dc94f531025a118e6c4e34863d4caac163a94206fe14cd2 namespace=k8s.io
	Sep 16 11:11:58 no-preload-349453 containerd[594]: time="2024-09-16T11:11:58.028488064Z" level=warning msg="cleaning up after shim disconnected" id=cfdc0dfbd49939228dc94f531025a118e6c4e34863d4caac163a94206fe14cd2 namespace=k8s.io
	Sep 16 11:11:58 no-preload-349453 containerd[594]: time="2024-09-16T11:11:58.028499081Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 11:11:58 no-preload-349453 containerd[594]: time="2024-09-16T11:11:58.612329213Z" level=info msg="RemoveContainer for \"f5dd539bd52a9eb4a7965278c274759bf1fc85f707efc362eaf5c8176b3ec4f5\""
	Sep 16 11:11:58 no-preload-349453 containerd[594]: time="2024-09-16T11:11:58.617441638Z" level=info msg="RemoveContainer for \"f5dd539bd52a9eb4a7965278c274759bf1fc85f707efc362eaf5c8176b3ec4f5\" returns successfully"
	Sep 16 11:13:03 no-preload-349453 containerd[594]: time="2024-09-16T11:13:03.940969536Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:13:03 no-preload-349453 containerd[594]: time="2024-09-16T11:13:03.972877906Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" host=fake.domain
	Sep 16 11:13:03 no-preload-349453 containerd[594]: time="2024-09-16T11:13:03.974520468Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 16 11:13:03 no-preload-349453 containerd[594]: time="2024-09-16T11:13:03.974585537Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 16 11:13:25 no-preload-349453 containerd[594]: time="2024-09-16T11:13:25.942776616Z" level=info msg="CreateContainer within sandbox \"37b8042cd7253c934986b9a6288598145231a03692c1a9fd0ec95d1ac7913db5\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Sep 16 11:13:25 no-preload-349453 containerd[594]: time="2024-09-16T11:13:25.954169213Z" level=info msg="CreateContainer within sandbox \"37b8042cd7253c934986b9a6288598145231a03692c1a9fd0ec95d1ac7913db5\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"a3fec778558009b7b3e63ee63f276f30f5aa75aa85894c26fd1cf4b98f9aeb23\""
	Sep 16 11:13:25 no-preload-349453 containerd[594]: time="2024-09-16T11:13:25.954806277Z" level=info msg="StartContainer for \"a3fec778558009b7b3e63ee63f276f30f5aa75aa85894c26fd1cf4b98f9aeb23\""
	Sep 16 11:13:25 no-preload-349453 containerd[594]: time="2024-09-16T11:13:25.998016746Z" level=info msg="StartContainer for \"a3fec778558009b7b3e63ee63f276f30f5aa75aa85894c26fd1cf4b98f9aeb23\" returns successfully"
	Sep 16 11:13:26 no-preload-349453 containerd[594]: time="2024-09-16T11:13:26.028473124Z" level=info msg="shim disconnected" id=a3fec778558009b7b3e63ee63f276f30f5aa75aa85894c26fd1cf4b98f9aeb23 namespace=k8s.io
	Sep 16 11:13:26 no-preload-349453 containerd[594]: time="2024-09-16T11:13:26.028539960Z" level=warning msg="cleaning up after shim disconnected" id=a3fec778558009b7b3e63ee63f276f30f5aa75aa85894c26fd1cf4b98f9aeb23 namespace=k8s.io
	Sep 16 11:13:26 no-preload-349453 containerd[594]: time="2024-09-16T11:13:26.028555164Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 11:13:26 no-preload-349453 containerd[594]: time="2024-09-16T11:13:26.812406896Z" level=info msg="RemoveContainer for \"cfdc0dfbd49939228dc94f531025a118e6c4e34863d4caac163a94206fe14cd2\""
	Sep 16 11:13:26 no-preload-349453 containerd[594]: time="2024-09-16T11:13:26.816859325Z" level=info msg="RemoveContainer for \"cfdc0dfbd49939228dc94f531025a118e6c4e34863d4caac163a94206fe14cd2\" returns successfully"
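
The repeated PullImage failures in this section are straightforward DNS failures: the metrics-server pod's image reference points at the registry host fake.domain, the node resolver (192.168.94.1:53) has no record for it, so every pull attempt ends in "no such host" and the pod stays Pending/ContainersNotReady, exactly as the system_pods output earlier shows. A minimal sketch (not from the test suite; "fake.domain" is copied from the log) that reproduces the underlying lookup error:

    // Illustrative only: the resolver failure behind the PullImage errors.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        if _, err := net.LookupHost("fake.domain"); err != nil {
            // Mirrors "dial tcp: lookup fake.domain ...: no such host".
            fmt.Println("lookup failed:", err)
            return
        }
        fmt.Println("unexpectedly resolved")
    }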
	
	
	==> coredns [30acbc7b45e29f44070b3c2cd9e349609594ebbaf39bfad9077eca4ac3da4b6d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:57592 - 13339 "HINFO IN 8962497822399797364.2477591037072266195. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011748401s
	
	
	==> coredns [d846e6651f0b5a0d4ae1295abfeddc533b22e95c3ba913c4fe997264fafb3790] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:39784 - 42754 "HINFO IN 4506389195264767820.5863092275442924321. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017483715s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2056989246]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:10:01.568) (total time: 30000ms):
	Trace[2056989246]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:10:31.568)
	Trace[2056989246]: [30.000678928s] [30.000678928s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[2067028540]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:10:01.568) (total time: 30000ms):
	Trace[2067028540]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:10:31.568)
	Trace[2067028540]: [30.000829045s] [30.000829045s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1324356539]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:10:01.568) (total time: 30000ms):
	Trace[1324356539]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:10:31.568)
	Trace[1324356539]: [30.000767767s] [30.000767767s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
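
The reflector errors in this CoreDNS instance all follow one pattern: the kubernetes plugin tries to list Namespaces, Services, and EndpointSlices from the in-cluster apiserver address 10.96.0.1:443, each call hangs for the full 30s (11:10:01 to 11:10:31 in the traces) and then fails with "i/o timeout". That is consistent with the Service VIP not yet being routable in the first seconds after the node restart, before kube-proxy (restarted around the same time, per the node events below) had reprogrammed its rules. A minimal sketch, assuming an address that silently drops packets, showing the same client-side timeout shape:

    // Illustrative only: a TCP dial that gets no response fails with the
    // same "i/o timeout" CoreDNS reports above. 10.96.0.1:443 is the
    // in-cluster apiserver Service address from the log; outside that
    // cluster this dial will simply fail fast or time out.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        fmt.Println("connected to", conn.RemoteAddr())
    }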
	
	
	==> describe nodes <==
	Name:               no-preload-349453
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-349453
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=no-preload-349453
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_09_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:08:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-349453
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:14:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:10:29 +0000   Mon, 16 Sep 2024 11:08:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:10:29 +0000   Mon, 16 Sep 2024 11:08:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:10:29 +0000   Mon, 16 Sep 2024 11:08:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:10:29 +0000   Mon, 16 Sep 2024 11:08:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-349453
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 2875f559925c49fab36caf96912fb16a
	  System UUID:                28dd4bdd-2700-4b67-8389-386a38b68a64
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-9zbwk                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m17s
	  kube-system                 etcd-no-preload-349453                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m22s
	  kube-system                 kindnet-qbh58                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m18s
	  kube-system                 kube-apiserver-no-preload-349453              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-controller-manager-no-preload-349453     200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-proxy-n7m28                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-scheduler-no-preload-349453              100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 metrics-server-6867b74b74-zw8sx               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m43s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-7c96f5b85b-l7jwj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-lmdbc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m16s                  kube-proxy       
	  Normal   Starting                 4m22s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  5m23s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 5m23s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  5m23s                  kubelet          Node no-preload-349453 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m23s                  kubelet          Node no-preload-349453 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m23s                  kubelet          Node no-preload-349453 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m23s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           5m19s                  node-controller  Node no-preload-349453 event: Registered Node no-preload-349453 in Controller
	  Normal   Starting                 4m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m29s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  4m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m28s (x8 over 4m29s)  kubelet          Node no-preload-349453 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m28s (x7 over 4m29s)  kubelet          Node no-preload-349453 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m28s (x7 over 4m29s)  kubelet          Node no-preload-349453 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m21s                  node-controller  Node no-preload-349453 event: Registered Node no-preload-349453 in Controller
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +1.024015] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000007] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000005] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000001] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +2.015813] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000006] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +4.063624] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000004] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +8.191266] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000006] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	
	
	==> etcd [0b8b34459e3719308549f98b1bdb7656a632bebe98e50614463b773f213a735d] <==
	{"level":"info","ts":"2024-09-16T11:08:56.341999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:08:56.342053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:08:56.342083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2024-09-16T11:08:56.342100Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.342109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.342122Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.342153Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2024-09-16T11:08:56.343021Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.343585Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:08:56.343582Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-349453 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:08:56.343760Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:08:56.343891Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:08:56.343954Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:08:56.344739Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.344836Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.344861Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:08:56.344977Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:08:56.345568Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:08:56.346072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:08:56.346688Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2024-09-16T11:08:59.251819Z","caller":"traceutil/trace.go:171","msg":"trace[909223504] linearizableReadLoop","detail":"{readStateIndex:78; appliedIndex:77; }","duration":"124.299534ms","start":"2024-09-16T11:08:59.127499Z","end":"2024-09-16T11:08:59.251798Z","steps":["trace[909223504] 'read index received'  (duration: 61.163504ms)","trace[909223504] 'applied index is now lower than readState.Index'  (duration: 63.13541ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-16T11:08:59.251872Z","caller":"traceutil/trace.go:171","msg":"trace[1280881910] transaction","detail":"{read_only:false; response_revision:74; number_of_response:1; }","duration":"128.600617ms","start":"2024-09-16T11:08:59.123247Z","end":"2024-09-16T11:08:59.251847Z","steps":["trace[1280881910] 'process raft request'  (duration: 65.397729ms)","trace[1280881910] 'compare'  (duration: 63.021346ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-16T11:08:59.251948Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.433124ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2024-09-16T11:08:59.252009Z","caller":"traceutil/trace.go:171","msg":"trace[1202054448] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:74; }","duration":"124.508287ms","start":"2024-09-16T11:08:59.127491Z","end":"2024-09-16T11:08:59.251999Z","steps":["trace[1202054448] 'agreement among raft nodes before linearized reading'  (duration: 124.386955ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:08:59.439373Z","caller":"traceutil/trace.go:171","msg":"trace[1221868137] transaction","detail":"{read_only:false; response_revision:75; number_of_response:1; }","duration":"183.565022ms","start":"2024-09-16T11:08:59.255790Z","end":"2024-09-16T11:08:59.439355Z","steps":["trace[1221868137] 'process raft request'  (duration: 120.890221ms)","trace[1221868137] 'compare'  (duration: 62.56898ms)"],"step_count":2}
	
	
	==> etcd [88bafae096207a5768a4f8dd93ca28361db384719eb695eac05b48c800386a05] <==
	{"level":"info","ts":"2024-09-16T11:09:56.246064Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2024-09-16T11:09:56.246141Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:09:56.246161Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:09:56.248037Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:09:56.248246Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:09:56.248268Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:09:56.248379Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2024-09-16T11:09:56.248388Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2024-09-16T11:09:58.035954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T11:09:58.035999Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:09:58.036046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2024-09-16T11:09:58.036065Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T11:09:58.036077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2024-09-16T11:09:58.036088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T11:09:58.036109Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2024-09-16T11:09:58.037396Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:no-preload-349453 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:09:58.037470Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:09:58.037483Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:09:58.037912Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:09:58.038042Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:09:58.039496Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:09:58.039517Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:09:58.040369Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:09:58.040384Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2024-09-16T11:11:07.598904Z","caller":"traceutil/trace.go:171","msg":"trace[483765360] transaction","detail":"{read_only:false; response_revision:724; number_of_response:1; }","duration":"106.82985ms","start":"2024-09-16T11:11:07.492053Z","end":"2024-09-16T11:11:07.598882Z","steps":["trace[483765360] 'process raft request'  (duration: 106.665604ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:14:23 up 56 min,  0 users,  load average: 2.60, 2.92, 2.26
	Linux no-preload-349453 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [9dd702b61dc808d16191c089c084e5383f48e949a2c2e29a816b32865282edc5] <==
	I0916 11:12:21.947875       1 main.go:299] handling current node
	I0916 11:12:31.949713       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:12:31.949747       1 main.go:299] handling current node
	I0916 11:12:41.943920       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:12:41.943973       1 main.go:299] handling current node
	I0916 11:12:51.947844       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:12:51.947889       1 main.go:299] handling current node
	I0916 11:13:01.940916       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:13:01.940952       1 main.go:299] handling current node
	I0916 11:13:11.943844       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:13:11.943886       1 main.go:299] handling current node
	I0916 11:13:21.945025       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:13:21.945088       1 main.go:299] handling current node
	I0916 11:13:31.943850       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:13:31.943891       1 main.go:299] handling current node
	I0916 11:13:41.943894       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:13:41.943936       1 main.go:299] handling current node
	I0916 11:13:51.947998       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:13:51.948037       1 main.go:299] handling current node
	I0916 11:14:01.941336       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:14:01.941369       1 main.go:299] handling current node
	I0916 11:14:11.943842       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:14:11.943891       1 main.go:299] handling current node
	I0916 11:14:21.941773       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:14:21.941808       1 main.go:299] handling current node
	
	
	==> kindnet [b30641ccb64e362b81a950f5292970ba402b0ac24d54dd1f6caba97fe10efe1a] <==
	I0916 11:09:10.022282       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:09:10.022538       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0916 11:09:10.022724       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:09:10.022743       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:09:10.022773       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:09:10.420723       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:09:10.421181       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:09:10.421189       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:09:10.721709       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:09:10.721737       1 metrics.go:61] Registering metrics
	I0916 11:09:10.721785       1 controller.go:374] Syncing nftables rules
	I0916 11:09:20.425801       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:09:20.425835       1 main.go:299] handling current node
	I0916 11:09:30.427819       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:09:30.427851       1 main.go:299] handling current node
	I0916 11:09:40.423828       1 main.go:295] Handling node with IPs: map[192.168.94.2:{}]
	I0916 11:09:40.423873       1 main.go:299] handling current node
	
	
	==> kube-apiserver [10ad4718e97218c1ce95ab1f27ef0f499520ec8391a382c341a385aafc4e5027] <==
	I0916 11:10:01.228825       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:10:01.241013       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:10:01.443337       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.161.82"}
	I0916 11:10:01.531995       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.95.67"}
	I0916 11:10:02.630602       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:10:02.847565       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 11:10:02.897204       1 controller.go:615] quota admission added evaluator for: endpoints
	W0916 11:11:00.125803       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 11:11:00.125886       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:11:00.125893       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 11:11:00.125918       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0916 11:11:00.126985       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:11:00.127004       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 11:13:00.127380       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 11:13:00.127405       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:13:00.127450       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 11:13:00.127552       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 11:13:00.128611       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:13:00.128640       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [5d35346ecb3ed281435941a815ae449eb1e54a96a4798b3c0a33aa91f7f98817] <==
	E0916 11:09:40.814483       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 11:09:40.815799       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0916 11:09:40.880166       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.104.72.125"}
	W0916 11:09:40.924744       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:09:40.924829       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:09:40.929525       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:09:40.929582       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:09:41.809066       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 11:09:41.809101       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:09:41.809117       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 11:09:41.809188       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 11:09:41.810270       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:09:41.810282       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [0ce842d576ac95e9a9812149dc6915d43b35aa6a7456fb7b3e051bceb7f8c4fc] <==
	I0916 11:11:07.687367       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="86.68069ms"
	I0916 11:11:07.687464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="52.287µs"
	I0916 11:11:10.517597       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="5.788602ms"
	I0916 11:11:10.517704       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="58.574µs"
	I0916 11:11:17.906916       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="67.158µs"
	E0916 11:11:32.624504       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:11:33.046497       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:11:45.958641       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="89.333µs"
	I0916 11:11:56.951111       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="88.97µs"
	I0916 11:11:58.622978       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="64.229µs"
	E0916 11:12:02.630464       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:12:03.054869       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:12:07.907018       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="75.99µs"
	E0916 11:12:32.636420       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:12:33.063430       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0916 11:13:02.642255       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:13:03.069985       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:13:14.953164       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="88.938µs"
	I0916 11:13:26.822279       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="71.871µs"
	I0916 11:13:27.906561       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="80.211µs"
	I0916 11:13:27.949548       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="79.09µs"
	E0916 11:13:32.647996       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:13:33.077585       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0916 11:14:02.653944       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:14:03.084934       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [a4b95a39232c279fb958cb4f48f6828232143f571fa31629227ec0c0bfd79969] <==
	I0916 11:09:05.501553       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:09:05.582619       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:09:05.582651       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:09:05.590465       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-349453"
	I0916 11:09:06.045501       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="255.463843ms"
	I0916 11:09:06.052468       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.901285ms"
	I0916 11:09:06.052558       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="51.667µs"
	I0916 11:09:06.053697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.481µs"
	I0916 11:09:06.131407       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="100.516µs"
	I0916 11:09:06.647300       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="17.526542ms"
	I0916 11:09:06.654990       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.635755ms"
	I0916 11:09:06.655120       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="81.851µs"
	I0916 11:09:07.881805       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="70.434µs"
	I0916 11:09:07.887535       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.293µs"
	I0916 11:09:07.891032       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.532µs"
	I0916 11:09:11.264980       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-349453"
	I0916 11:09:31.598112       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-349453"
	I0916 11:09:34.905630       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="79.673µs"
	I0916 11:09:34.923877       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.97092ms"
	I0916 11:09:34.923984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="71.271µs"
	I0916 11:09:40.840605       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="13.714865ms"
	I0916 11:09:40.857675       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="17.012662ms"
	I0916 11:09:40.857775       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="54.761µs"
	I0916 11:09:40.857822       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="23.308µs"
	I0916 11:09:41.938017       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="114.313µs"
	
	
	==> kube-proxy [49542fa155836c3fde0549a1cb7c27e6491fe3d03f704b1dd6194e52b361cf04] <==
	I0916 11:09:06.867943       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:09:06.995156       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.94.2"]
	E0916 11:09:06.995228       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:09:07.016693       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:09:07.016755       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:09:07.018577       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:09:07.018989       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:09:07.019027       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:09:07.020423       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:09:07.020505       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:09:07.020533       1 config.go:328] "Starting node config controller"
	I0916 11:09:07.020679       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:09:07.020603       1 config.go:199] "Starting service config controller"
	I0916 11:09:07.020757       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:09:07.121453       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:09:07.121498       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:09:07.121503       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e7f26c19553d969b1a9b5681d3f324438c37c93225102701c76043b7e78ba74e] <==
	I0916 11:10:01.549117       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:10:01.674977       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.94.2"]
	E0916 11:10:01.675039       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:10:01.693613       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:10:01.693672       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:10:01.695508       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:10:01.696007       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:10:01.696049       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:10:01.697306       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:10:01.697344       1 config.go:199] "Starting service config controller"
	I0916 11:10:01.697358       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:10:01.697364       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:10:01.697413       1 config.go:328] "Starting node config controller"
	I0916 11:10:01.697423       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:10:01.797528       1 shared_informer.go:320] Caches are synced for node config
	I0916 11:10:01.797551       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:10:01.797568       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [5c82d38a57c77763e3457d1b66ad9e5b564c5c6137cef3dc16a48cd5d44dcf69] <==
	W0916 11:08:59.221741       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:08:59.221790       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.261959       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:59.262001       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.265606       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:08:59.265658       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.490611       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:08:59.490652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.579438       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:08:59.579489       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.585912       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:08:59.585982       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.629574       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:59.629617       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.663059       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:08:59.663100       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 11:08:59.685631       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:08:59.685685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.695015       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:59.695064       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.697126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:08:59.697157       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:08:59.699134       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:08:59.699171       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0916 11:09:02.728201       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [e5e8d406dc25bdcb4fd4237dfc1f5e1fa2926422f84d0261628f272925fd1cd3] <==
	I0916 11:09:57.220124       1 serving.go:386] Generated self-signed cert in-memory
	W0916 11:09:59.124184       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:09:59.124217       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:09:59.124243       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:09:59.124252       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:09:59.141215       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 11:09:59.141473       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:09:59.224990       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 11:09:59.225912       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 11:09:59.225987       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 11:09:59.228719       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:09:59.329596       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:13:03 no-preload-349453 kubelet[711]: E0916 11:13:03.974937     711 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 16 11:13:03 no-preload-349453 kubelet[711]: E0916 11:13:03.975130     711 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwm25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-zw8sx_kube-system(ac34c3d4-46cd-404d-8aa8-7d28840fa4d0): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host" logger="UnhandledError"
	Sep 16 11:13:03 no-preload-349453 kubelet[711]: E0916 11:13:03.976370     711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-6867b74b74-zw8sx" podUID="ac34c3d4-46cd-404d-8aa8-7d28840fa4d0"
	Sep 16 11:13:14 no-preload-349453 kubelet[711]: I0916 11:13:14.940348     711 scope.go:117] "RemoveContainer" containerID="cfdc0dfbd49939228dc94f531025a118e6c4e34863d4caac163a94206fe14cd2"
	Sep 16 11:13:14 no-preload-349453 kubelet[711]: E0916 11:13:14.940570     711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-l7jwj_kubernetes-dashboard(9476003c-bf57-41ff-9f44-9412f89c7c16)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-l7jwj" podUID="9476003c-bf57-41ff-9f44-9412f89c7c16"
	Sep 16 11:13:14 no-preload-349453 kubelet[711]: E0916 11:13:14.941349     711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zw8sx" podUID="ac34c3d4-46cd-404d-8aa8-7d28840fa4d0"
	Sep 16 11:13:25 no-preload-349453 kubelet[711]: I0916 11:13:25.940642     711 scope.go:117] "RemoveContainer" containerID="cfdc0dfbd49939228dc94f531025a118e6c4e34863d4caac163a94206fe14cd2"
	Sep 16 11:13:26 no-preload-349453 kubelet[711]: I0916 11:13:26.811170     711 scope.go:117] "RemoveContainer" containerID="cfdc0dfbd49939228dc94f531025a118e6c4e34863d4caac163a94206fe14cd2"
	Sep 16 11:13:26 no-preload-349453 kubelet[711]: I0916 11:13:26.811520     711 scope.go:117] "RemoveContainer" containerID="a3fec778558009b7b3e63ee63f276f30f5aa75aa85894c26fd1cf4b98f9aeb23"
	Sep 16 11:13:26 no-preload-349453 kubelet[711]: E0916 11:13:26.811694     711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-l7jwj_kubernetes-dashboard(9476003c-bf57-41ff-9f44-9412f89c7c16)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-l7jwj" podUID="9476003c-bf57-41ff-9f44-9412f89c7c16"
	Sep 16 11:13:27 no-preload-349453 kubelet[711]: I0916 11:13:27.897031     711 scope.go:117] "RemoveContainer" containerID="a3fec778558009b7b3e63ee63f276f30f5aa75aa85894c26fd1cf4b98f9aeb23"
	Sep 16 11:13:27 no-preload-349453 kubelet[711]: E0916 11:13:27.897267     711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-l7jwj_kubernetes-dashboard(9476003c-bf57-41ff-9f44-9412f89c7c16)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-l7jwj" podUID="9476003c-bf57-41ff-9f44-9412f89c7c16"
	Sep 16 11:13:27 no-preload-349453 kubelet[711]: E0916 11:13:27.940665     711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zw8sx" podUID="ac34c3d4-46cd-404d-8aa8-7d28840fa4d0"
	Sep 16 11:13:39 no-preload-349453 kubelet[711]: I0916 11:13:39.940281     711 scope.go:117] "RemoveContainer" containerID="a3fec778558009b7b3e63ee63f276f30f5aa75aa85894c26fd1cf4b98f9aeb23"
	Sep 16 11:13:39 no-preload-349453 kubelet[711]: E0916 11:13:39.941022     711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zw8sx" podUID="ac34c3d4-46cd-404d-8aa8-7d28840fa4d0"
	Sep 16 11:13:39 no-preload-349453 kubelet[711]: E0916 11:13:39.941194     711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-l7jwj_kubernetes-dashboard(9476003c-bf57-41ff-9f44-9412f89c7c16)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-l7jwj" podUID="9476003c-bf57-41ff-9f44-9412f89c7c16"
	Sep 16 11:13:50 no-preload-349453 kubelet[711]: I0916 11:13:50.939916     711 scope.go:117] "RemoveContainer" containerID="a3fec778558009b7b3e63ee63f276f30f5aa75aa85894c26fd1cf4b98f9aeb23"
	Sep 16 11:13:50 no-preload-349453 kubelet[711]: E0916 11:13:50.940154     711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-l7jwj_kubernetes-dashboard(9476003c-bf57-41ff-9f44-9412f89c7c16)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-l7jwj" podUID="9476003c-bf57-41ff-9f44-9412f89c7c16"
	Sep 16 11:13:54 no-preload-349453 kubelet[711]: E0916 11:13:54.941795     711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zw8sx" podUID="ac34c3d4-46cd-404d-8aa8-7d28840fa4d0"
	Sep 16 11:14:04 no-preload-349453 kubelet[711]: I0916 11:14:04.940519     711 scope.go:117] "RemoveContainer" containerID="a3fec778558009b7b3e63ee63f276f30f5aa75aa85894c26fd1cf4b98f9aeb23"
	Sep 16 11:14:04 no-preload-349453 kubelet[711]: E0916 11:14:04.940771     711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-l7jwj_kubernetes-dashboard(9476003c-bf57-41ff-9f44-9412f89c7c16)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-l7jwj" podUID="9476003c-bf57-41ff-9f44-9412f89c7c16"
	Sep 16 11:14:08 no-preload-349453 kubelet[711]: E0916 11:14:08.940775     711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zw8sx" podUID="ac34c3d4-46cd-404d-8aa8-7d28840fa4d0"
	Sep 16 11:14:15 no-preload-349453 kubelet[711]: I0916 11:14:15.939627     711 scope.go:117] "RemoveContainer" containerID="a3fec778558009b7b3e63ee63f276f30f5aa75aa85894c26fd1cf4b98f9aeb23"
	Sep 16 11:14:15 no-preload-349453 kubelet[711]: E0916 11:14:15.939868     711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-l7jwj_kubernetes-dashboard(9476003c-bf57-41ff-9f44-9412f89c7c16)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-l7jwj" podUID="9476003c-bf57-41ff-9f44-9412f89c7c16"
	Sep 16 11:14:22 no-preload-349453 kubelet[711]: E0916 11:14:22.940789     711 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-zw8sx" podUID="ac34c3d4-46cd-404d-8aa8-7d28840fa4d0"
	
	
	==> kubernetes-dashboard [52380cf083b1ab1dce8f2f0bdf9ff84e39b7a8e2ab7f38a2acbc07f61860618c] <==
	2024/09/16 11:10:07 Starting overwatch
	2024/09/16 11:10:07 Using namespace: kubernetes-dashboard
	2024/09/16 11:10:07 Using in-cluster config to connect to apiserver
	2024/09/16 11:10:07 Using secret token for csrf signing
	2024/09/16 11:10:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 11:10:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 11:10:07 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 11:10:07 Generating JWE encryption key
	2024/09/16 11:10:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 11:10:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 11:10:08 Initializing JWE encryption key from synchronized object
	2024/09/16 11:10:08 Creating in-cluster Sidecar client
	2024/09/16 11:10:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:10:08 Serving insecurely on HTTP port: 9090
	2024/09/16 11:10:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:11:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:11:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:12:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:12:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:13:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:13:38 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:14:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [89e6bdaa9e7031f477cb7b7d7be9bac9229ff2b0f07eae0e9b2219de39fad023] <==
	I0916 11:10:01.442350       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 11:10:31.448193       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [aff7b51ea5d554f201a1d75f00a8b5836f2faea86fffc4675990367ad090f812] <==
	I0916 11:10:47.017725       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:10:47.025109       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:10:47.025150       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:11:04.421218       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:11:04.421419       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-349453_0deca3c9-9c88-4b38-b175-421ccad840e0!
	I0916 11:11:04.421846       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6ddd7c41-8f63-47a8-9650-2ec5bbdf92e6", APIVersion:"v1", ResourceVersion:"715", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-349453_0deca3c9-9c88-4b38-b175-421ccad840e0 became leader
	I0916 11:11:04.522615       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-349453_0deca3c9-9c88-4b38-b175-421ccad840e0!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-349453 -n no-preload-349453
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-349453 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context no-preload-349453 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (580.066µs)
helpers_test.go:263: kubectl --context no-preload-349453 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (7.23s)
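Note that the failing step here is not the addon itself but the kubectl invocation: a fork/exec "exec format error" typically means the binary at /usr/local/bin/kubectl was built for a different architecture than the test host. A minimal sketch of how one might confirm this on the runner (the commands are illustrative; the expected outputs are assumptions, not taken from this run):

	# Inspect the binary's target architecture; a mismatch with `uname -m`
	# (e.g. an arm64 kubectl on this x86_64 host) would produce exactly this error.
	file /usr/local/bin/kubectl
	uname -m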

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (1800.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-771611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Non-zero exit: kubectl --context auto-771611 replace --force -f testdata/netcat-deployment.yaml: fork/exec /usr/local/bin/kubectl: exec format error (522.855µs)
net_test.go:151: failed to apply netcat manifest: fork/exec /usr/local/bin/kubectl: exec format error
net_test.go:160: failed waiting for netcat deployment to stabilize: timed out waiting for the condition
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0916 11:32:03.924396   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:32:08.257111   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:32:12.844277   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:32:29.777008   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestNetworkPlugins/group/auto/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: client rate limiter Wait returned an error: context deadline exceeded
net_test.go:163: ***** TestNetworkPlugins/group/auto/NetCatPod: pod "app=netcat" failed to start within 15m0s: context deadline exceeded ****
net_test.go:163: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p auto-771611 -n auto-771611
net_test.go:163: TestNetworkPlugins/group/auto/NetCatPod: showing logs for failed pods as of 2024-09-16 11:46:01.35394727 +0000 UTC m=+5028.336204332
net_test.go:164: failed waiting for netcat pod: app=netcat within 15m0s: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/auto/NetCatPod (1800.31s)
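With a kubectl binary that matches the host architecture, the step this test performs could be replayed by hand to separate the binary problem from any genuine CNI problem (manifest path as referenced in net_test.go; this is a sketch of the manual equivalent, not output from this run):

	# Re-apply the netcat deployment the test uses, then watch for the pod to appear.
	kubectl --context auto-771611 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-771611 get pods -n default -l app=netcat --watch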

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (7.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-tzbgn" [71da0a2f-2db3-4f64-8f1b-090efc2a5371] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004221589s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-679624 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-679624 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: fork/exec /usr/local/bin/kubectl: exec format error (584.485µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-679624 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
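The assertion at start_stop_delete_test.go:297 expects the dashboard-metrics-scraper deployment to reference registry.k8s.io/echoserver:1.4; because kubectl could not execute, the deployment info above is empty rather than genuinely wrong. The equivalent manual check might look like this (the jsonpath expression is an illustrative assumption, not part of the test):

	kubectl --context embed-certs-679624 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'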
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-679624
helpers_test.go:235: (dbg) docker inspect embed-certs-679624:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01",
	        "Created": "2024-09-16T11:11:24.339291508Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 298814,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:12:15.409886581Z",
	            "FinishedAt": "2024-09-16T11:12:14.558630798Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/hostname",
	        "HostsPath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/hosts",
	        "LogPath": "/var/lib/docker/containers/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01/8a143ceb3281ed1e96f6f01866c194c6d8fa87e7cc776258c58582b8b277ac01-json.log",
	        "Name": "/embed-certs-679624",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-679624:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-679624",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb7af4404e9e88f242c67c60187c08f2160de8b651471dce3186948c08f3dc76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-679624",
	                "Source": "/var/lib/docker/volumes/embed-certs-679624/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-679624",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-679624",
	                "name.minikube.sigs.k8s.io": "embed-certs-679624",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "02a25a75bf868a5880da4abaf2ebf8d6b80a19d63da93aab48b8a616eba179be",
	            "SandboxKey": "/var/run/docker/netns/02a25a75bf86",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-679624": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "5c8d67185b352feb5e2b0195e3f409fe6cf79bd750730cb6897291fef1a3c3d7",
	                    "EndpointID": "3856910e6c0fa6871cb85c208ab8e4a5efd50e38328d5441425889657fed9c54",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-679624",
	                        "8a143ceb3281"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
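The empty "HostPort" values under "PortBindings" in the HostConfig section above are ephemeral bindings: each port was published as 127.0.0.1:: with the host port left blank, so Docker chose free loopback ports at container start, and the concrete assignments (33083-33087) appear only under "NetworkSettings.Ports". A minimal Go sketch of resolving such a binding, assuming only a local docker CLI on PATH; it reuses the same --format template that minikube itself runs later in this log:

    // hostPort returns the ephemeral host port Docker assigned to a
    // published container port, e.g. "8443/tcp" -> "33086" for the
    // inspect output shown above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func hostPort(container, port string) (string, error) {
    	tmpl := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", port)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	fmt.Println(hostPort("embed-certs-679624", "8443/tcp"))
    }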
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-679624 -n embed-certs-679624
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-679624 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-679624 logs -n 25: (1.625819557s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:16 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-311911                           | kubernetes-upgrade-311911    | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	| delete  | -p                                                     | disable-driver-mounts-852440 | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:12 UTC |
	|         | disable-driver-mounts-852440                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:12 UTC | 16 Sep 24 11:13 UTC |
	|         | default-k8s-diff-port-006978                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-006978  | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:13 UTC | 16 Sep 24 11:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:13 UTC | 16 Sep 24 11:13 UTC |
	|         | default-k8s-diff-port-006978                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-006978       | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:13 UTC | 16 Sep 24 11:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:13 UTC |                     |
	|         | default-k8s-diff-port-006978                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | no-preload-349453 image list                           | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	| delete  | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	| start   | -p newest-cni-802652 --memory=2200 --alsologtostderr   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-802652             | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-802652                                   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-802652                  | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-802652 --memory=2200 --alsologtostderr   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:15 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-802652 image list                           | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-802652                                   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-802652                                   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-802652                                   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	| delete  | -p newest-cni-802652                                   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	| start   | -p auto-771611 --memory=3072                           | auto-771611                  | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:16 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | -p auto-771611 pgrep -a                                | auto-771611                  | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:15:16
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:15:16.491202  324211 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:15:16.491307  324211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:15:16.491315  324211 out.go:358] Setting ErrFile to fd 2...
	I0916 11:15:16.491319  324211 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:15:16.491505  324211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:15:16.492170  324211 out.go:352] Setting JSON to false
	I0916 11:15:16.493490  324211 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3460,"bootTime":1726481856,"procs":372,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:15:16.493587  324211 start.go:139] virtualization: kvm guest
	I0916 11:15:16.496049  324211 out.go:177] * [auto-771611] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:15:16.497684  324211 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:15:16.497683  324211 notify.go:220] Checking for updates...
	I0916 11:15:16.500618  324211 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:15:16.502291  324211 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:15:16.503799  324211 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:15:16.505097  324211 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:15:16.506455  324211 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:15:16.508026  324211 config.go:182] Loaded profile config "default-k8s-diff-port-006978": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:15:16.508127  324211 config.go:182] Loaded profile config "embed-certs-679624": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:15:16.508216  324211 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:15:16.508306  324211 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:15:16.531821  324211 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:15:16.531940  324211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:15:16.579899  324211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:15:16.570131482 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:15:16.580004  324211 docker.go:318] overlay module found
	I0916 11:15:16.582193  324211 out.go:177] * Using the docker driver based on user configuration
	I0916 11:15:16.583893  324211 start.go:297] selected driver: docker
	I0916 11:15:16.583908  324211 start.go:901] validating driver "docker" against <nil>
	I0916 11:15:16.583919  324211 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:15:16.584755  324211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:15:16.638519  324211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:86 SystemTime:2024-09-16 11:15:16.629539288 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:15:16.638683  324211 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:15:16.638920  324211 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:15:16.640914  324211 out.go:177] * Using Docker driver with root privileges
	I0916 11:15:16.642269  324211 cni.go:84] Creating CNI manager for ""
	I0916 11:15:16.642338  324211 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:15:16.642352  324211 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:15:16.642444  324211 start.go:340] cluster config:
	{Name:auto-771611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:15:16.643880  324211 out.go:177] * Starting "auto-771611" primary control-plane node in "auto-771611" cluster
	I0916 11:15:16.645152  324211 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:15:16.646614  324211 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:15:16.647924  324211 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:15:16.647955  324211 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:15:16.647988  324211 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 11:15:16.647999  324211 cache.go:56] Caching tarball of preloaded images
	I0916 11:15:16.648106  324211 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 11:15:16.648118  324211 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 11:15:16.648218  324211 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/config.json ...
	I0916 11:15:16.648235  324211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/config.json: {Name:mk89fefd2d210bb41a3dc406d8f222ddbeea70e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:15:16.669869  324211 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:15:16.669890  324211 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:15:16.669982  324211 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:15:16.670023  324211 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:15:16.670034  324211 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:15:16.670045  324211 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:15:16.670054  324211 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:15:16.725034  324211 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:15:16.725073  324211 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:15:16.725112  324211 start.go:360] acquireMachinesLock for auto-771611: {Name:mkce82cb3acc34d2456982cda19c354e6c6c50c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:15:16.725223  324211 start.go:364] duration metric: took 88.365µs to acquireMachinesLock for "auto-771611"
	I0916 11:15:16.725248  324211 start.go:93] Provisioning new machine with config: &{Name:auto-771611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:15:16.725352  324211 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:15:13.778008  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:16.276669  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:16.128971  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:18.129143  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:15.649177  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:17.650170  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:16.727217  324211 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 11:15:16.727452  324211 start.go:159] libmachine.API.Create for "auto-771611" (driver="docker")
	I0916 11:15:16.727483  324211 client.go:168] LocalClient.Create starting
	I0916 11:15:16.727562  324211 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 11:15:16.727595  324211 main.go:141] libmachine: Decoding PEM data...
	I0916 11:15:16.727611  324211 main.go:141] libmachine: Parsing certificate...
	I0916 11:15:16.727661  324211 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 11:15:16.727678  324211 main.go:141] libmachine: Decoding PEM data...
	I0916 11:15:16.727688  324211 main.go:141] libmachine: Parsing certificate...
	I0916 11:15:16.728057  324211 cli_runner.go:164] Run: docker network inspect auto-771611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:15:16.744520  324211 cli_runner.go:211] docker network inspect auto-771611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:15:16.744590  324211 network_create.go:284] running [docker network inspect auto-771611] to gather additional debugging logs...
	I0916 11:15:16.744609  324211 cli_runner.go:164] Run: docker network inspect auto-771611
	W0916 11:15:16.760960  324211 cli_runner.go:211] docker network inspect auto-771611 returned with exit code 1
	I0916 11:15:16.760987  324211 network_create.go:287] error running [docker network inspect auto-771611]: docker network inspect auto-771611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-771611 not found
	I0916 11:15:16.760999  324211 network_create.go:289] output of [docker network inspect auto-771611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-771611 not found
	
	** /stderr **
	I0916 11:15:16.761094  324211 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:15:16.781198  324211 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c95c64bb41bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bd:76:ab:c4} reservation:<nil>}
	I0916 11:15:16.782294  324211 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fad43aa9929b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:3f:61:fd} reservation:<nil>}
	I0916 11:15:16.783277  324211 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-49585fce923a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ad:45:94:54} reservation:<nil>}
	I0916 11:15:16.784220  324211 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-77357235afce IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:c7:d5:e1:f1} reservation:<nil>}
	I0916 11:15:16.784994  324211 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-5c8d67185b35 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:d3:b2:14:79} reservation:<nil>}
	I0916 11:15:16.786010  324211 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d05770}
	I0916 11:15:16.786037  324211 network_create.go:124] attempt to create docker network auto-771611 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0916 11:15:16.786092  324211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-771611 auto-771611
	I0916 11:15:16.849999  324211 network_create.go:108] docker network auto-771611 192.168.94.0/24 created
	I0916 11:15:16.850029  324211 kic.go:121] calculated static IP "192.168.94.2" for the "auto-771611" container
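The probe visible above explains how the profile's network was chosen: minikube walked the candidate 192.168.x.0/24 blocks already claimed by other profiles' bridges (49, 58, 67, 76, 85), took the first free one (192.168.94.0/24), created a bridge network for it, and reserved the first client address (.2) as the node's static IP. A hedged sketch of that scan follows; the fixed step of 9 in the third octet is inferred from this log, not taken from minikube's source:

    // firstFreeSubnet returns the first candidate /24 not occupied by
    // an existing bridge, mimicking the skipping/using lines above.
    package main

    import "fmt"

    func firstFreeSubnet(taken map[string]bool) string {
    	for octet := 49; octet <= 254; octet += 9 { // 49, 58, 67, ... as in the log
    		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken[cidr] {
    			return cidr
    		}
    	}
    	return ""
    }

    func main() {
    	taken := map[string]bool{ // subnets the log reports as taken
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    		"192.168.85.0/24": true,
    	}
    	fmt.Println(firstFreeSubnet(taken)) // prints 192.168.94.0/24
    }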
	I0916 11:15:16.850095  324211 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:15:16.867895  324211 cli_runner.go:164] Run: docker volume create auto-771611 --label name.minikube.sigs.k8s.io=auto-771611 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:15:16.885870  324211 oci.go:103] Successfully created a docker volume auto-771611
	I0916 11:15:16.885960  324211 cli_runner.go:164] Run: docker run --rm --name auto-771611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-771611 --entrypoint /usr/bin/test -v auto-771611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:15:17.409712  324211 oci.go:107] Successfully prepared a docker volume auto-771611
	I0916 11:15:17.409759  324211 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:15:17.409781  324211 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:15:17.409887  324211 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-771611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 11:15:18.277253  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:20.777953  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:20.129415  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:22.155991  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:24.628140  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:20.149911  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:22.650313  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:22.905632  324211 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-771611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.495691107s)
	I0916 11:15:22.905662  324211 kic.go:203] duration metric: took 5.495878439s to extract preloaded images to volume ...
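The two docker run commands above are the preload shortcut: a throwaway container first checks the freshly created named volume (--entrypoint /usr/bin/test -d /var/lib), then a second one mounts the preloaded-images tarball read-only and untars it into the volume, so the node container boots with containerd's image store already populated. A minimal sketch of the extraction step, using the paths from this log; the helper name is hypothetical:

    // extractPreload untars a preloaded-images tarball into a named
    // docker volume via a disposable container, as in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func extractPreload(volume, tarball, image string) error {
    	return exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
    }

    func main() {
    	err := extractPreload("auto-771611",
    		"/home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0")
    	fmt.Println(err)
    }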
	W0916 11:15:22.905808  324211 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:15:22.905935  324211 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:15:22.956944  324211 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-771611 --name auto-771611 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-771611 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-771611 --network auto-771611 --ip 192.168.94.2 --volume auto-771611:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:15:23.249949  324211 cli_runner.go:164] Run: docker container inspect auto-771611 --format={{.State.Running}}
	I0916 11:15:23.270918  324211 cli_runner.go:164] Run: docker container inspect auto-771611 --format={{.State.Status}}
	I0916 11:15:23.290352  324211 cli_runner.go:164] Run: docker exec auto-771611 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:15:23.334029  324211 oci.go:144] the created container "auto-771611" has a running status.
	I0916 11:15:23.334084  324211 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/auto-771611/id_rsa...
	I0916 11:15:23.656356  324211 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/auto-771611/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:15:23.679547  324211 cli_runner.go:164] Run: docker container inspect auto-771611 --format={{.State.Status}}
	I0916 11:15:23.698336  324211 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:15:23.698358  324211 kic_runner.go:114] Args: [docker exec --privileged auto-771611 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:15:23.752890  324211 cli_runner.go:164] Run: docker container inspect auto-771611 --format={{.State.Status}}
	I0916 11:15:23.770404  324211 machine.go:93] provisionDockerMachine start ...
	I0916 11:15:23.770510  324211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-771611
	I0916 11:15:23.791298  324211 main.go:141] libmachine: Using SSH client type: native
	I0916 11:15:23.791514  324211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0916 11:15:23.791531  324211 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:15:23.967555  324211 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-771611
	
	I0916 11:15:23.967583  324211 ubuntu.go:169] provisioning hostname "auto-771611"
	I0916 11:15:23.967633  324211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-771611
	I0916 11:15:23.987234  324211 main.go:141] libmachine: Using SSH client type: native
	I0916 11:15:23.987417  324211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0916 11:15:23.987429  324211 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-771611 && echo "auto-771611" | sudo tee /etc/hostname
	I0916 11:15:24.132192  324211 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-771611
	
	I0916 11:15:24.132281  324211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-771611
	I0916 11:15:24.151037  324211 main.go:141] libmachine: Using SSH client type: native
	I0916 11:15:24.151209  324211 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0916 11:15:24.151227  324211 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-771611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-771611/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-771611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:15:24.283777  324211 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:15:24.283807  324211 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:15:24.283853  324211 ubuntu.go:177] setting up certificates
	I0916 11:15:24.283867  324211 provision.go:84] configureAuth start
	I0916 11:15:24.283918  324211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-771611
	I0916 11:15:24.301620  324211 provision.go:143] copyHostCerts
	I0916 11:15:24.301679  324211 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:15:24.301690  324211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:15:24.301755  324211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:15:24.301848  324211 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:15:24.301856  324211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:15:24.301880  324211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:15:24.301933  324211 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:15:24.301940  324211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:15:24.301962  324211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:15:24.302011  324211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.auto-771611 san=[127.0.0.1 192.168.94.2 auto-771611 localhost minikube]
	I0916 11:15:24.466331  324211 provision.go:177] copyRemoteCerts
	I0916 11:15:24.466415  324211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:15:24.466464  324211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-771611
	I0916 11:15:24.484176  324211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/auto-771611/id_rsa Username:docker}
	I0916 11:15:24.580783  324211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0916 11:15:24.603783  324211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:15:24.626549  324211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:15:24.650682  324211 provision.go:87] duration metric: took 366.798722ms to configureAuth
	I0916 11:15:24.650711  324211 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:15:24.650915  324211 config.go:182] Loaded profile config "auto-771611": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:15:24.650931  324211 machine.go:96] duration metric: took 880.50142ms to provisionDockerMachine
	I0916 11:15:24.650939  324211 client.go:171] duration metric: took 7.923447915s to LocalClient.Create
	I0916 11:15:24.650959  324211 start.go:167] duration metric: took 7.923507549s to libmachine.API.Create "auto-771611"
	I0916 11:15:24.650971  324211 start.go:293] postStartSetup for "auto-771611" (driver="docker")
	I0916 11:15:24.650982  324211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:15:24.651034  324211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:15:24.651091  324211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-771611
	I0916 11:15:24.668501  324211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/auto-771611/id_rsa Username:docker}
	I0916 11:15:24.764973  324211 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:15:24.768442  324211 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:15:24.768492  324211 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:15:24.768506  324211 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:15:24.768517  324211 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:15:24.768535  324211 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:15:24.768599  324211 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:15:24.768692  324211 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:15:24.768815  324211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:15:24.778266  324211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:15:24.801699  324211 start.go:296] duration metric: took 150.710307ms for postStartSetup
	I0916 11:15:24.802117  324211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-771611
	I0916 11:15:24.819045  324211 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/config.json ...
	I0916 11:15:24.819340  324211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:15:24.819393  324211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-771611
	I0916 11:15:24.836188  324211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/auto-771611/id_rsa Username:docker}
	I0916 11:15:24.928745  324211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:15:24.933230  324211 start.go:128] duration metric: took 8.207862066s to createHost
	I0916 11:15:24.933260  324211 start.go:83] releasing machines lock for "auto-771611", held for 8.208026193s
	I0916 11:15:24.933374  324211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-771611
	I0916 11:15:24.951002  324211 ssh_runner.go:195] Run: cat /version.json
	I0916 11:15:24.951053  324211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-771611
	I0916 11:15:24.951094  324211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:15:24.951157  324211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-771611
	I0916 11:15:24.969765  324211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/auto-771611/id_rsa Username:docker}
	I0916 11:15:24.970032  324211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/auto-771611/id_rsa Username:docker}
	I0916 11:15:25.140517  324211 ssh_runner.go:195] Run: systemctl --version
	I0916 11:15:25.145257  324211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:15:25.150141  324211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:15:25.173930  324211 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
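(Aside: after the patch above, the loopback config under /etc/cni/net.d should look roughly like the JSON below; "name" is the key the sed injects and "cniVersion" the one it rewrites. The file shipped in the base image may carry extra fields.)

	{
	  "cniVersion": "1.0.0",
	  "name": "loopback",
	  "type": "loopback"
	}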
	I0916 11:15:25.174016  324211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:15:25.201565  324211 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 11:15:25.201599  324211 start.go:495] detecting cgroup driver to use...
	I0916 11:15:25.201646  324211 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:15:25.201694  324211 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:15:25.213859  324211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:15:25.225297  324211 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:15:25.225364  324211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:15:25.238892  324211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:15:25.253971  324211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:15:25.340483  324211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:15:25.427144  324211 docker.go:233] disabling docker service ...
	I0916 11:15:25.427204  324211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:15:25.446270  324211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:15:25.457971  324211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:15:25.545481  324211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:15:25.631926  324211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:15:25.643185  324211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:15:25.659951  324211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:15:25.670222  324211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:15:25.680078  324211 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:15:25.680160  324211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:15:25.690271  324211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:15:25.700781  324211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:15:25.711686  324211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:15:25.721893  324211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:15:25.732082  324211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:15:25.742821  324211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:15:25.752537  324211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
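(Aside: reconstructed from the sed edits above, the CRI section of /etc/containerd/config.toml ends up roughly as follows; the real file has many more sections, and this sketch only shows the keys the edits touch.)

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false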
	I0916 11:15:25.762485  324211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:15:25.771210  324211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:15:25.780391  324211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:15:25.852602  324211 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0916 11:15:25.954254  324211 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:15:25.954321  324211 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:15:25.957862  324211 start.go:563] Will wait 60s for crictl version
	I0916 11:15:25.957924  324211 ssh_runner.go:195] Run: which crictl
	I0916 11:15:25.961437  324211 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:15:25.996492  324211 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:15:25.996548  324211 ssh_runner.go:195] Run: containerd --version
	I0916 11:15:26.019277  324211 ssh_runner.go:195] Run: containerd --version
	I0916 11:15:26.044515  324211 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:15:26.045808  324211 cli_runner.go:164] Run: docker network inspect auto-771611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:15:26.063382  324211 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0916 11:15:26.067052  324211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:15:26.078272  324211 kubeadm.go:883] updating cluster {Name:auto-771611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:15:26.078404  324211 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:15:26.078476  324211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:15:26.110993  324211 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:15:26.111014  324211 containerd.go:534] Images already preloaded, skipping extraction
	I0916 11:15:26.111067  324211 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:15:26.144394  324211 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:15:26.144416  324211 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:15:26.144426  324211 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.31.1 containerd true true} ...
	I0916 11:15:26.144538  324211 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-771611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:auto-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
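(Aside: the unit and drop-in rendered above are written a few lines below to /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the merged view can be inspected on the node with:)

	systemctl cat kubelet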
	I0916 11:15:26.144600  324211 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:15:26.178570  324211 cni.go:84] Creating CNI manager for ""
	I0916 11:15:26.178592  324211 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:15:26.178601  324211 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:15:26.178622  324211 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-771611 NodeName:auto-771611 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:15:26.178791  324211 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "auto-771611"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
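(Aside: a config like the one rendered above can be sanity-checked before use; kubeadm ships a validator, and --dry-run exercises init without touching the node. Both are manual steps, not part of this run, and both would surface the deprecated-v1beta3 warnings seen later in the log.)

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run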
	I0916 11:15:26.178862  324211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:15:26.187287  324211 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:15:26.187348  324211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:15:26.196829  324211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0916 11:15:26.216354  324211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:15:26.234024  324211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2165 bytes)
	I0916 11:15:26.251132  324211 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:15:26.254845  324211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:15:26.265612  324211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:15:26.343006  324211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:15:26.357092  324211 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611 for IP: 192.168.94.2
	I0916 11:15:26.357116  324211 certs.go:194] generating shared ca certs ...
	I0916 11:15:26.357145  324211 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:15:26.357315  324211 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:15:26.357375  324211 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:15:26.357387  324211 certs.go:256] generating profile certs ...
	I0916 11:15:26.357467  324211 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.key
	I0916 11:15:26.357490  324211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt with IP's: []
	I0916 11:15:23.277719  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:25.776832  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:26.720200  324211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt ...
	I0916 11:15:26.720241  324211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: {Name:mk7c25cde382970a41fea0d8bcabca6fa9174f9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:15:26.720442  324211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.key ...
	I0916 11:15:26.720453  324211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.key: {Name:mk83ce9186fbe031e2465fd09efe2dcd60de8001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:15:26.720541  324211 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/apiserver.key.0959e437
	I0916 11:15:26.720558  324211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/apiserver.crt.0959e437 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I0916 11:15:26.809958  324211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/apiserver.crt.0959e437 ...
	I0916 11:15:26.809985  324211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/apiserver.crt.0959e437: {Name:mk5081b6c6869e86807e5b5a22f1e4c7ae1fe345 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:15:26.810142  324211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/apiserver.key.0959e437 ...
	I0916 11:15:26.810155  324211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/apiserver.key.0959e437: {Name:mk1745f73c41b50e577a660c65dadc10bb51c498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:15:26.810293  324211 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/apiserver.crt.0959e437 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/apiserver.crt
	I0916 11:15:26.810409  324211 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/apiserver.key.0959e437 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/apiserver.key
	I0916 11:15:26.810488  324211 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/proxy-client.key
	I0916 11:15:26.810503  324211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/proxy-client.crt with IP's: []
	I0916 11:15:27.126686  324211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/proxy-client.crt ...
	I0916 11:15:27.126714  324211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/proxy-client.crt: {Name:mk3e936e9e5c88d35f04fa63b2e753476bc8fba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:15:27.126894  324211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/proxy-client.key ...
	I0916 11:15:27.126909  324211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/proxy-client.key: {Name:mk0cdaaf9832f7afc634b3129919209a3fc7564e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
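(Aside: each profile cert written above chains to the shared minikubeCA; a manual consistency check, using paths from this run, would be:)

	openssl verify \
	  -CAfile /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt \
	  /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt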
	I0916 11:15:27.127131  324211 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:15:27.127177  324211 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:15:27.127190  324211 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:15:27.127224  324211 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:15:27.127262  324211 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:15:27.127300  324211 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:15:27.127361  324211 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:15:27.128184  324211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:15:27.151976  324211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:15:27.175488  324211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:15:27.198238  324211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:15:27.220564  324211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0916 11:15:27.243492  324211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:15:27.269154  324211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:15:27.294030  324211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:15:27.315631  324211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:15:27.338243  324211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:15:27.361476  324211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:15:27.384594  324211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:15:27.401252  324211 ssh_runner.go:195] Run: openssl version
	I0916 11:15:27.406357  324211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:15:27.415393  324211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:15:27.418716  324211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:15:27.418787  324211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:15:27.425214  324211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:15:27.434104  324211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:15:27.442519  324211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:15:27.445722  324211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:15:27.445770  324211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:15:27.452098  324211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 11:15:27.461458  324211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:15:27.474426  324211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:15:27.478240  324211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:15:27.478302  324211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:15:27.484988  324211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
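(Aside: the hash-named links created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention; the same name can be derived by hand, e.g. for minikubeCA:)

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"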
	I0916 11:15:27.494285  324211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:15:27.497795  324211 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:15:27.497854  324211 kubeadm.go:392] StartCluster: {Name:auto-771611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:15:27.497923  324211 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:15:27.497983  324211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:15:27.533325  324211 cri.go:89] found id: ""
	I0916 11:15:27.533392  324211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:15:27.543113  324211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:15:27.552686  324211 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:15:27.552740  324211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:15:27.561316  324211 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:15:27.561336  324211 kubeadm.go:157] found existing configuration files:
	
	I0916 11:15:27.561378  324211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:15:27.569572  324211 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:15:27.569629  324211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:15:27.577816  324211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:15:27.586229  324211 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:15:27.586283  324211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:15:27.594251  324211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:15:27.602647  324211 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:15:27.602705  324211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:15:27.610734  324211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:15:27.619418  324211 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:15:27.619466  324211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:15:27.629769  324211 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:15:27.671400  324211 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:15:27.671492  324211 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:15:27.690443  324211 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:15:27.690550  324211 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:15:27.690631  324211 kubeadm.go:310] OS: Linux
	I0916 11:15:27.690682  324211 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:15:27.690727  324211 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:15:27.690779  324211 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:15:27.690849  324211 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:15:27.690931  324211 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:15:27.691006  324211 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:15:27.691064  324211 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:15:27.691126  324211 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:15:27.691194  324211 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:15:27.749410  324211 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:15:27.749592  324211 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:15:27.749775  324211 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:15:27.754980  324211 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:15:26.628980  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:29.128313  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:25.149195  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:27.648990  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:27.757296  324211 out.go:235]   - Generating certificates and keys ...
	I0916 11:15:27.757407  324211 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:15:27.757500  324211 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:15:27.887465  324211 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:15:27.950098  324211 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:15:28.088254  324211 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:15:28.292975  324211 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:15:28.550091  324211 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:15:28.550228  324211 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [auto-771611 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0916 11:15:28.669115  324211 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:15:28.669258  324211 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [auto-771611 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0916 11:15:29.144396  324211 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:15:29.280918  324211 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:15:29.651406  324211 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:15:29.651607  324211 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:15:29.734051  324211 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:15:29.955481  324211 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:15:30.338658  324211 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:15:30.523295  324211 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:15:30.593256  324211 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:15:30.593702  324211 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:15:30.596213  324211 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:15:30.598419  324211 out.go:235]   - Booting up control plane ...
	I0916 11:15:30.598560  324211 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:15:30.598676  324211 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:15:30.598767  324211 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:15:30.607927  324211 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:15:30.614944  324211 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:15:30.615041  324211 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:15:30.698493  324211 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:15:30.698654  324211 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:15:28.276972  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:30.277002  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:31.129017  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:33.129218  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:30.148852  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:32.149559  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:34.149781  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:31.699933  324211 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001593847s
	I0916 11:15:31.700067  324211 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:15:36.201348  324211 kubeadm.go:310] [api-check] The API server is healthy after 4.50139434s
	I0916 11:15:36.212563  324211 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:15:36.227179  324211 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:15:36.247845  324211 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:15:36.248056  324211 kubeadm.go:310] [mark-control-plane] Marking the node auto-771611 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:15:36.255822  324211 kubeadm.go:310] [bootstrap-token] Using token: pqx44c.sserarsl6ql0fvb6
	I0916 11:15:36.257345  324211 out.go:235]   - Configuring RBAC rules ...
	I0916 11:15:36.257494  324211 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:15:36.263534  324211 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:15:36.270253  324211 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:15:36.273263  324211 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:15:36.276543  324211 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:15:36.279399  324211 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:15:32.776951  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:34.777040  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:36.607124  324211 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:15:37.046557  324211 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:15:37.607155  324211 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:15:37.607999  324211 kubeadm.go:310] 
	I0916 11:15:37.608103  324211 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:15:37.608122  324211 kubeadm.go:310] 
	I0916 11:15:37.608219  324211 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:15:37.608228  324211 kubeadm.go:310] 
	I0916 11:15:37.608258  324211 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:15:37.608360  324211 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:15:37.608450  324211 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:15:37.608465  324211 kubeadm.go:310] 
	I0916 11:15:37.608535  324211 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:15:37.608543  324211 kubeadm.go:310] 
	I0916 11:15:37.608608  324211 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:15:37.608616  324211 kubeadm.go:310] 
	I0916 11:15:37.608682  324211 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:15:37.608778  324211 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:15:37.608867  324211 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:15:37.608879  324211 kubeadm.go:310] 
	I0916 11:15:37.608993  324211 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:15:37.609102  324211 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:15:37.609111  324211 kubeadm.go:310] 
	I0916 11:15:37.609213  324211 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pqx44c.sserarsl6ql0fvb6 \
	I0916 11:15:37.609353  324211 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:15:37.609384  324211 kubeadm.go:310] 	--control-plane 
	I0916 11:15:37.609389  324211 kubeadm.go:310] 
	I0916 11:15:37.609493  324211 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:15:37.609501  324211 kubeadm.go:310] 
	I0916 11:15:37.609597  324211 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pqx44c.sserarsl6ql0fvb6 \
	I0916 11:15:37.609719  324211 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
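(Aside: the bootstrap token in the join command above has a 24h TTL per the InitConfiguration rendered earlier; once it expires, a fresh join command can be printed on the control plane with:)

	kubeadm token create --print-join-command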
	I0916 11:15:37.613164  324211 kubeadm.go:310] W0916 11:15:27.668290    1142 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:15:37.613578  324211 kubeadm.go:310] W0916 11:15:27.668949    1142 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:15:37.613908  324211 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:15:37.614004  324211 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:15:37.614035  324211 cni.go:84] Creating CNI manager for ""
	I0916 11:15:37.614044  324211 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 11:15:37.615628  324211 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0916 11:15:35.629008  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:37.629545  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:36.649344  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:38.651044  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:37.617033  324211 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0916 11:15:37.620983  324211 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:15:37.621005  324211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0916 11:15:37.641177  324211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:15:37.849729  324211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:15:37.849824  324211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:15:37.849825  324211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-771611 minikube.k8s.io/updated_at=2024_09_16T11_15_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=auto-771611 minikube.k8s.io/primary=true
	I0916 11:15:37.857607  324211 ops.go:34] apiserver oom_adj: -16
	I0916 11:15:37.944662  324211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:15:38.444786  324211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:15:38.945194  324211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:15:39.445709  324211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:15:39.945691  324211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:15:40.445687  324211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:15:40.944820  324211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:15:41.445023  324211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:15:37.276482  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:39.277204  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:41.277274  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:41.945149  324211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:15:42.445171  324211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:15:42.515265  324211 kubeadm.go:1113] duration metric: took 4.665508029s to wait for elevateKubeSystemPrivileges
	I0916 11:15:42.515305  324211 kubeadm.go:394] duration metric: took 15.017453519s to StartCluster
	I0916 11:15:42.515326  324211 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:15:42.515401  324211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:15:42.518276  324211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:15:42.518585  324211 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:15:42.518606  324211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:15:42.518640  324211 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:15:42.518743  324211 addons.go:69] Setting storage-provisioner=true in profile "auto-771611"
	I0916 11:15:42.518754  324211 addons.go:69] Setting default-storageclass=true in profile "auto-771611"
	I0916 11:15:42.518764  324211 addons.go:234] Setting addon storage-provisioner=true in "auto-771611"
	I0916 11:15:42.518769  324211 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-771611"
	I0916 11:15:42.518801  324211 host.go:66] Checking if "auto-771611" exists ...
	I0916 11:15:42.518801  324211 config.go:182] Loaded profile config "auto-771611": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:15:42.519152  324211 cli_runner.go:164] Run: docker container inspect auto-771611 --format={{.State.Status}}
	I0916 11:15:42.519322  324211 cli_runner.go:164] Run: docker container inspect auto-771611 --format={{.State.Status}}
	I0916 11:15:42.520670  324211 out.go:177] * Verifying Kubernetes components...
	I0916 11:15:42.522612  324211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:15:42.543283  324211 addons.go:234] Setting addon default-storageclass=true in "auto-771611"
	I0916 11:15:42.543332  324211 host.go:66] Checking if "auto-771611" exists ...
	I0916 11:15:42.543805  324211 cli_runner.go:164] Run: docker container inspect auto-771611 --format={{.State.Status}}
	I0916 11:15:42.545699  324211 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:15:42.547125  324211 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:15:42.547142  324211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:15:42.547183  324211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-771611
	I0916 11:15:42.567405  324211 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:15:42.567428  324211 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:15:42.567488  324211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-771611
	I0916 11:15:42.579605  324211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/auto-771611/id_rsa Username:docker}
	I0916 11:15:42.588596  324211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/auto-771611/id_rsa Username:docker}
	I0916 11:15:42.721525  324211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:15:42.728087  324211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:15:42.847602  324211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:15:42.850575  324211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:15:43.330876  324211 start.go:971] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
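Note on the step above: minikube rewrites the CoreDNS Corefile in place by streaming the coredns ConfigMap through sed to splice in a hosts{} stanza mapping host.minikube.internal to the host gateway (192.168.94.1 here), then writing it back with kubectl replace. A minimal client-go sketch of the same edit follows (assumed code, not minikube's implementation; the kubeconfig path and IP are taken from the log):

// corednshosts.go -- a minimal sketch (assumed, not minikube's actual code) of the
// ConfigMap edit above: insert a hosts{} stanza before the forward directive so
// host.minikube.internal resolves inside the cluster.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path from the log
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Splice the hosts block in just above the "forward ." line, as the sed script does.
	hosts := []string{
		"        hosts {",
		"           192.168.94.1 host.minikube.internal", // gateway IP from the log
		"           fallthrough",
		"        }",
	}
	var out []string
	for _, line := range strings.Split(cm.Data["Corefile"], "\n") {
		if strings.Contains(line, "forward .") {
			out = append(out, hosts...)
		}
		out = append(out, line)
	}
	cm.Data["Corefile"] = strings.Join(out, "\n")

	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("host record injected into CoreDNS's ConfigMap")
}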
	I0916 11:15:43.333608  324211 node_ready.go:35] waiting up to 15m0s for node "auto-771611" to be "Ready" ...
	I0916 11:15:43.345193  324211 node_ready.go:49] node "auto-771611" has status "Ready":"True"
	I0916 11:15:43.345217  324211 node_ready.go:38] duration metric: took 11.580111ms for node "auto-771611" to be "Ready" ...
	I0916 11:15:43.345228  324211 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:15:43.356399  324211 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-58s64" in "kube-system" namespace to be "Ready" ...
	I0916 11:15:43.593177  324211 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 11:15:40.129551  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:42.130956  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:44.149156  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:41.149269  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:43.150913  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:43.594473  324211 addons.go:510] duration metric: took 1.075840274s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:15:43.836390  324211 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-771611" context rescaled to 1 replicas
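The pod_ready.go waits that dominate the rest of this section are condition polls: fetch the pod, inspect its PodReady condition, sleep, repeat until it is True or the budget expires. A sketch of that loop using client-go and apimachinery's wait helper (an assumed reconstruction, not the harness's own code; the pod name and 15m budget are taken from the log):

// podready.go -- sketch of the "waiting for pod ... to be Ready" loop seen above:
// poll the PodReady condition until true or the timeout expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	name := "coredns-7c65d6cfc9-58s64" // pod name from the log
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 15*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			p, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			return podIsReady(p), nil
		})
	if err != nil {
		panic(err) // context deadline exceeded: the pod never went Ready
	}
	fmt.Printf("pod %q is Ready\n", name)
}

The same loop, pointed at different pods, accounts for the metrics-server waits from the other three test processes interleaved through this section.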
	I0916 11:15:45.363957  324211 pod_ready.go:103] pod "coredns-7c65d6cfc9-58s64" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:43.776591  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:45.776744  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:46.628606  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:48.628692  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:45.648983  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:48.148613  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:47.862116  324211 pod_ready.go:103] pod "coredns-7c65d6cfc9-58s64" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:49.863088  324211 pod_ready.go:103] pod "coredns-7c65d6cfc9-58s64" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:47.777915  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:50.276765  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:50.628900  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:53.129545  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:50.149815  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:52.648437  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:54.648940  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:51.863196  324211 pod_ready.go:103] pod "coredns-7c65d6cfc9-58s64" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:54.362498  324211 pod_ready.go:103] pod "coredns-7c65d6cfc9-58s64" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:52.776358  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:54.777156  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:55.629100  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:58.129073  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:56.649183  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:58.649250  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:56.862945  324211 pod_ready.go:103] pod "coredns-7c65d6cfc9-58s64" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:59.362477  324211 pod_ready.go:93] pod "coredns-7c65d6cfc9-58s64" in "kube-system" namespace has status "Ready":"True"
	I0916 11:15:59.362498  324211 pod_ready.go:82] duration metric: took 16.006060675s for pod "coredns-7c65d6cfc9-58s64" in "kube-system" namespace to be "Ready" ...
	I0916 11:15:59.362509  324211 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-cgc9b" in "kube-system" namespace to be "Ready" ...
	I0916 11:15:59.364273  324211 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-cgc9b" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-cgc9b" not found
	I0916 11:15:59.364305  324211 pod_ready.go:82] duration metric: took 1.789744ms for pod "coredns-7c65d6cfc9-cgc9b" in "kube-system" namespace to be "Ready" ...
	E0916 11:15:59.364315  324211 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-cgc9b" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-cgc9b" not found
	I0916 11:15:59.364324  324211 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:15:59.368413  324211 pod_ready.go:93] pod "etcd-auto-771611" in "kube-system" namespace has status "Ready":"True"
	I0916 11:15:59.368435  324211 pod_ready.go:82] duration metric: took 4.10499ms for pod "etcd-auto-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:15:59.368451  324211 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:15:59.372545  324211 pod_ready.go:93] pod "kube-apiserver-auto-771611" in "kube-system" namespace has status "Ready":"True"
	I0916 11:15:59.372566  324211 pod_ready.go:82] duration metric: took 4.107605ms for pod "kube-apiserver-auto-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:15:59.372577  324211 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:15:59.376604  324211 pod_ready.go:93] pod "kube-controller-manager-auto-771611" in "kube-system" namespace has status "Ready":"True"
	I0916 11:15:59.376624  324211 pod_ready.go:82] duration metric: took 4.040179ms for pod "kube-controller-manager-auto-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:15:59.376633  324211 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-r7t7j" in "kube-system" namespace to be "Ready" ...
	I0916 11:15:59.559518  324211 pod_ready.go:93] pod "kube-proxy-r7t7j" in "kube-system" namespace has status "Ready":"True"
	I0916 11:15:59.559544  324211 pod_ready.go:82] duration metric: took 182.904883ms for pod "kube-proxy-r7t7j" in "kube-system" namespace to be "Ready" ...
	I0916 11:15:59.559558  324211 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:15:59.960119  324211 pod_ready.go:93] pod "kube-scheduler-auto-771611" in "kube-system" namespace has status "Ready":"True"
	I0916 11:15:59.960144  324211 pod_ready.go:82] duration metric: took 400.579946ms for pod "kube-scheduler-auto-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:15:59.960153  324211 pod_ready.go:39] duration metric: took 16.614913906s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:15:59.960167  324211 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:15:59.960220  324211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:15:59.972836  324211 api_server.go:72] duration metric: took 17.45420681s to wait for apiserver process to appear ...
	I0916 11:15:59.972859  324211 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:15:59.972879  324211 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0916 11:15:59.977873  324211 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0916 11:15:59.978915  324211 api_server.go:141] control plane version: v1.31.1
	I0916 11:15:59.978943  324211 api_server.go:131] duration metric: took 6.07637ms to wait for apiserver health ...
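The healthz probe above is a plain HTTPS GET against the apiserver that expects a 200 status and a body of "ok". A short sketch (InsecureSkipVerify is used only to keep the sketch self-contained; a real client should trust the cluster CA instead):

// healthz.go -- sketch of the apiserver healthz probe from the log
// (GET /healthz, expect 200 with body "ok").
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.94.2:8443/healthz") // endpoint from the log
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("https://192.168.94.2:8443/healthz returned %d: %s\n", resp.StatusCode, body)
}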
	I0916 11:15:59.978954  324211 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:16:00.163423  324211 system_pods.go:59] 8 kube-system pods found
	I0916 11:16:00.163459  324211 system_pods.go:61] "coredns-7c65d6cfc9-58s64" [91eeb8dc-ca91-4e09-9fb7-641b4c46b15c] Running
	I0916 11:16:00.163466  324211 system_pods.go:61] "etcd-auto-771611" [b375c42f-7dfe-4845-b5ac-31a1159056b2] Running
	I0916 11:16:00.163474  324211 system_pods.go:61] "kindnet-cbhv2" [ba1c0aa1-28c3-412a-bfc0-133cab2e58f6] Running
	I0916 11:16:00.163480  324211 system_pods.go:61] "kube-apiserver-auto-771611" [ae18b5e3-cc64-4e9e-9be6-076235c04b6a] Running
	I0916 11:16:00.163485  324211 system_pods.go:61] "kube-controller-manager-auto-771611" [4819b287-970c-4b95-85fb-61db63b46421] Running
	I0916 11:16:00.163491  324211 system_pods.go:61] "kube-proxy-r7t7j" [597eae7d-f5b5-4fc0-acf3-05c8336e7a5f] Running
	I0916 11:16:00.163496  324211 system_pods.go:61] "kube-scheduler-auto-771611" [47580deb-abcf-48d0-aeac-718cc612e340] Running
	I0916 11:16:00.163501  324211 system_pods.go:61] "storage-provisioner" [36a51927-99a3-4b6f-8c24-39dcda64a664] Running
	I0916 11:16:00.163509  324211 system_pods.go:74] duration metric: took 184.548239ms to wait for pod list to return data ...
	I0916 11:16:00.163522  324211 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:16:00.359637  324211 default_sa.go:45] found service account: "default"
	I0916 11:16:00.359664  324211 default_sa.go:55] duration metric: took 196.134894ms for default service account to be created ...
	I0916 11:16:00.359676  324211 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:16:00.562344  324211 system_pods.go:86] 8 kube-system pods found
	I0916 11:16:00.562375  324211 system_pods.go:89] "coredns-7c65d6cfc9-58s64" [91eeb8dc-ca91-4e09-9fb7-641b4c46b15c] Running
	I0916 11:16:00.562380  324211 system_pods.go:89] "etcd-auto-771611" [b375c42f-7dfe-4845-b5ac-31a1159056b2] Running
	I0916 11:16:00.562384  324211 system_pods.go:89] "kindnet-cbhv2" [ba1c0aa1-28c3-412a-bfc0-133cab2e58f6] Running
	I0916 11:16:00.562388  324211 system_pods.go:89] "kube-apiserver-auto-771611" [ae18b5e3-cc64-4e9e-9be6-076235c04b6a] Running
	I0916 11:16:00.562392  324211 system_pods.go:89] "kube-controller-manager-auto-771611" [4819b287-970c-4b95-85fb-61db63b46421] Running
	I0916 11:16:00.562395  324211 system_pods.go:89] "kube-proxy-r7t7j" [597eae7d-f5b5-4fc0-acf3-05c8336e7a5f] Running
	I0916 11:16:00.562399  324211 system_pods.go:89] "kube-scheduler-auto-771611" [47580deb-abcf-48d0-aeac-718cc612e340] Running
	I0916 11:16:00.562402  324211 system_pods.go:89] "storage-provisioner" [36a51927-99a3-4b6f-8c24-39dcda64a664] Running
	I0916 11:16:00.562409  324211 system_pods.go:126] duration metric: took 202.727406ms to wait for k8s-apps to be running ...
	I0916 11:16:00.562416  324211 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:16:00.562471  324211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:16:00.573572  324211 system_svc.go:56] duration metric: took 11.144528ms WaitForService to wait for kubelet
	I0916 11:16:00.573607  324211 kubeadm.go:582] duration metric: took 18.054981525s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:16:00.573629  324211 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:16:00.759938  324211 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:16:00.759975  324211 node_conditions.go:123] node cpu capacity is 8
	I0916 11:16:00.759995  324211 node_conditions.go:105] duration metric: took 186.361771ms to run NodePressure ...
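The NodePressure verification reads the node's reported capacity (cpu 8 and ephemeral storage 304681132Ki above) and would fail if the node flagged memory or disk pressure. A sketch of that check (an assumed reconstruction, not node_conditions.go itself):

// nodepressure.go -- sketch of the NodePressure verification: print node capacity
// and fail if MemoryPressure or DiskPressure is reported True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name,
			n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status == corev1.ConditionTrue {
				panic(fmt.Sprintf("node %s reports %s", n.Name, c.Type))
			}
		}
	}
}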
	I0916 11:16:00.760006  324211 start.go:241] waiting for startup goroutines ...
	I0916 11:16:00.760013  324211 start.go:246] waiting for cluster config update ...
	I0916 11:16:00.760023  324211 start.go:255] writing updated cluster config ...
	I0916 11:16:00.760293  324211 ssh_runner.go:195] Run: rm -f paused
	I0916 11:16:00.766716  324211 out.go:177] * Done! kubectl is now configured to use "auto-771611" cluster and "default" namespace by default
	E0916 11:16:00.768428  324211 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
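The trailing E-line deserves a note: "fork/exec /usr/local/bin/kubectl: exec format error" is the kernel's ENOEXEC, which typically means the kubectl binary on the test host is built for a different architecture or is truncated/corrupt; the cluster itself came up cleanly just above. A sketch of how that error surfaces from os/exec and can be told apart from an ordinary command failure (a hypothetical check, not part of minikube):

// kubectlcheck.go -- sketch: run kubectl and distinguish "exec format error"
// (ENOEXEC: wrong-architecture or corrupt binary) from an ordinary failure.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"syscall"
)

func main() {
	out, err := exec.Command("/usr/local/bin/kubectl", "version", "--client").CombinedOutput()
	if errors.Is(err, syscall.ENOEXEC) {
		fmt.Println("kubectl is not runnable on this platform (exec format error); reinstall for the right arch")
		return
	}
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Print(string(out))
}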
	I0916 11:15:57.276967  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:15:59.277179  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:01.277243  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:00.129568  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:02.628850  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:01.148702  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:03.649414  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:03.776353  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:06.277177  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:05.128710  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:07.630816  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:06.148841  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:08.649080  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:08.776034  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:10.777020  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:10.128647  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:12.628595  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:11.148637  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:13.648761  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:13.276155  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:15.276719  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:15.129279  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:17.129557  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:19.628776  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:15.648931  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:17.649106  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:17.776593  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:20.277077  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:22.129214  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:24.629099  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:20.149440  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:22.649002  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:22.776926  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:24.777009  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:26.629360  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:29.129253  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:25.149169  298514 pod_ready.go:103] pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:26.149050  298514 pod_ready.go:82] duration metric: took 4m0.005907881s for pod "metrics-server-6867b74b74-qgvl9" in "kube-system" namespace to be "Ready" ...
	E0916 11:16:26.149076  298514 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0916 11:16:26.149105  298514 pod_ready.go:39] duration metric: took 4m0.60982607s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
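The give-up above is a budget expiring, not a crash: after exactly 4m0s the per-pod wait's context is cancelled and WaitExtra records "context deadline exceeded" for the never-Ready metrics-server pod, then the harness moves on to log collection. The underlying pattern is a ticker loop under context.WithTimeout; a generic sketch (the 4-minute budget mirrors the log, and checkReady is a stand-in for the real pod lookup):

// waitbudget.go -- sketch of a bounded wait: poll under context.WithTimeout and
// surface "context deadline exceeded" when the condition never becomes true.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// checkReady stands in for a client-go pod lookup; always false here to force the timeout.
func checkReady(ctx context.Context) bool { return false }

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute) // budget from the log
	defer cancel()

	tick := time.NewTicker(2 * time.Second)
	defer tick.Stop()
	for {
		select {
		case <-ctx.Done():
			if errors.Is(ctx.Err(), context.DeadlineExceeded) {
				fmt.Println("WaitExtra: waitPodCondition: context deadline exceeded")
			}
			return
		case <-tick.C:
			if checkReady(ctx) {
				fmt.Println("pod is Ready")
				return
			}
		}
	}
}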
	I0916 11:16:26.149124  298514 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:16:26.149159  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:16:26.149218  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:16:26.183635  298514 cri.go:89] found id: "cdad97219867df2ded19d219654472a41674b342b9961e00c7f6e4451fd8d72c"
	I0916 11:16:26.183662  298514 cri.go:89] found id: "debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a"
	I0916 11:16:26.183668  298514 cri.go:89] found id: ""
	I0916 11:16:26.183676  298514 logs.go:276] 2 containers: [cdad97219867df2ded19d219654472a41674b342b9961e00c7f6e4451fd8d72c debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a]
	I0916 11:16:26.183797  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.187079  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.190212  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:16:26.190273  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:16:26.225457  298514 cri.go:89] found id: "96fb65ed9d834089ed0425c6be8cb6a8e9fa0f9dd4787e4b0980d1581a756b54"
	I0916 11:16:26.225489  298514 cri.go:89] found id: "e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0"
	I0916 11:16:26.225495  298514 cri.go:89] found id: ""
	I0916 11:16:26.225504  298514 logs.go:276] 2 containers: [96fb65ed9d834089ed0425c6be8cb6a8e9fa0f9dd4787e4b0980d1581a756b54 e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0]
	I0916 11:16:26.225561  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.230074  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.233225  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:16:26.233283  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:16:26.266419  298514 cri.go:89] found id: "5d29018d4a6205f4a5647facca4407ea2acaa36c5a885457f47b7a31bdca8de0"
	I0916 11:16:26.266447  298514 cri.go:89] found id: "3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d"
	I0916 11:16:26.266451  298514 cri.go:89] found id: ""
	I0916 11:16:26.266457  298514 logs.go:276] 2 containers: [5d29018d4a6205f4a5647facca4407ea2acaa36c5a885457f47b7a31bdca8de0 3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d]
	I0916 11:16:26.266511  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.270171  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.274070  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:16:26.274149  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:16:26.309903  298514 cri.go:89] found id: "ac979bde4e227043cfb8483a4d54a734373a62012be1ef212239ef8b959912c1"
	I0916 11:16:26.309932  298514 cri.go:89] found id: "7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10"
	I0916 11:16:26.309937  298514 cri.go:89] found id: ""
	I0916 11:16:26.309946  298514 logs.go:276] 2 containers: [ac979bde4e227043cfb8483a4d54a734373a62012be1ef212239ef8b959912c1 7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10]
	I0916 11:16:26.309994  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.313448  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.316524  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:16:26.316587  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:16:26.352329  298514 cri.go:89] found id: "7993c3f14244f58c4d9163c2b8d1fa3eb2846ccde92cdbb33932dc072a1106c3"
	I0916 11:16:26.352355  298514 cri.go:89] found id: "c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae"
	I0916 11:16:26.352362  298514 cri.go:89] found id: ""
	I0916 11:16:26.352370  298514 logs.go:276] 2 containers: [7993c3f14244f58c4d9163c2b8d1fa3eb2846ccde92cdbb33932dc072a1106c3 c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae]
	I0916 11:16:26.352427  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.357174  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.361478  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:16:26.361573  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:16:26.401338  298514 cri.go:89] found id: "e263620e71d778a5dfa8791096a3df362173cc1ea97067ed4faf292a6c5581ba"
	I0916 11:16:26.401364  298514 cri.go:89] found id: "98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32"
	I0916 11:16:26.401493  298514 cri.go:89] found id: ""
	I0916 11:16:26.401514  298514 logs.go:276] 2 containers: [e263620e71d778a5dfa8791096a3df362173cc1ea97067ed4faf292a6c5581ba 98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32]
	I0916 11:16:26.401572  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.405224  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.408533  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:16:26.408606  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:16:26.443415  298514 cri.go:89] found id: "bce0bf0824d9328eb0646e3110f5ca1b80318b72666f29555d3ed24d7aa08d1c"
	I0916 11:16:26.443434  298514 cri.go:89] found id: "2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6"
	I0916 11:16:26.443438  298514 cri.go:89] found id: ""
	I0916 11:16:26.443444  298514 logs.go:276] 2 containers: [bce0bf0824d9328eb0646e3110f5ca1b80318b72666f29555d3ed24d7aa08d1c 2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6]
	I0916 11:16:26.443488  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.447259  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.450535  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:16:26.450593  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:16:26.487310  298514 cri.go:89] found id: "a004cb2b980056e835fc7f24bc7be1594c5efffc763e0e1af45bb0862aa43203"
	I0916 11:16:26.487332  298514 cri.go:89] found id: "b8cfe956c68340395c79fe3267d07b586671c86715a48a65e559747f45140168"
	I0916 11:16:26.487337  298514 cri.go:89] found id: ""
	I0916 11:16:26.487343  298514 logs.go:276] 2 containers: [a004cb2b980056e835fc7f24bc7be1594c5efffc763e0e1af45bb0862aa43203 b8cfe956c68340395c79fe3267d07b586671c86715a48a65e559747f45140168]
	I0916 11:16:26.487385  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.490910  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.494376  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:16:26.494443  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:16:26.530219  298514 cri.go:89] found id: "3c7eb2cb404402dd137d29ee9ac327b9c01414947d1986518ce39867d5a0d233"
	I0916 11:16:26.530244  298514 cri.go:89] found id: ""
	I0916 11:16:26.530252  298514 logs.go:276] 1 containers: [3c7eb2cb404402dd137d29ee9ac327b9c01414947d1986518ce39867d5a0d233]
	I0916 11:16:26.530312  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:26.533776  298514 logs.go:123] Gathering logs for dmesg ...
	I0916 11:16:26.533802  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:16:26.557489  298514 logs.go:123] Gathering logs for etcd [96fb65ed9d834089ed0425c6be8cb6a8e9fa0f9dd4787e4b0980d1581a756b54] ...
	I0916 11:16:26.557525  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96fb65ed9d834089ed0425c6be8cb6a8e9fa0f9dd4787e4b0980d1581a756b54"
	I0916 11:16:26.599504  298514 logs.go:123] Gathering logs for kube-scheduler [7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10] ...
	I0916 11:16:26.599537  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10"
	I0916 11:16:26.642595  298514 logs.go:123] Gathering logs for kindnet [bce0bf0824d9328eb0646e3110f5ca1b80318b72666f29555d3ed24d7aa08d1c] ...
	I0916 11:16:26.642628  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bce0bf0824d9328eb0646e3110f5ca1b80318b72666f29555d3ed24d7aa08d1c"
	I0916 11:16:26.679293  298514 logs.go:123] Gathering logs for kindnet [2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6] ...
	I0916 11:16:26.679321  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6"
	I0916 11:16:26.713971  298514 logs.go:123] Gathering logs for storage-provisioner [a004cb2b980056e835fc7f24bc7be1594c5efffc763e0e1af45bb0862aa43203] ...
	I0916 11:16:26.714001  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a004cb2b980056e835fc7f24bc7be1594c5efffc763e0e1af45bb0862aa43203"
	I0916 11:16:26.749437  298514 logs.go:123] Gathering logs for containerd ...
	I0916 11:16:26.749466  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:16:26.805447  298514 logs.go:123] Gathering logs for coredns [5d29018d4a6205f4a5647facca4407ea2acaa36c5a885457f47b7a31bdca8de0] ...
	I0916 11:16:26.805480  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d29018d4a6205f4a5647facca4407ea2acaa36c5a885457f47b7a31bdca8de0"
	I0916 11:16:26.842656  298514 logs.go:123] Gathering logs for kube-scheduler [ac979bde4e227043cfb8483a4d54a734373a62012be1ef212239ef8b959912c1] ...
	I0916 11:16:26.842684  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac979bde4e227043cfb8483a4d54a734373a62012be1ef212239ef8b959912c1"
	I0916 11:16:26.877598  298514 logs.go:123] Gathering logs for kube-proxy [c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae] ...
	I0916 11:16:26.877625  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae"
	I0916 11:16:26.912216  298514 logs.go:123] Gathering logs for kube-controller-manager [e263620e71d778a5dfa8791096a3df362173cc1ea97067ed4faf292a6c5581ba] ...
	I0916 11:16:26.912250  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e263620e71d778a5dfa8791096a3df362173cc1ea97067ed4faf292a6c5581ba"
	I0916 11:16:26.966506  298514 logs.go:123] Gathering logs for storage-provisioner [b8cfe956c68340395c79fe3267d07b586671c86715a48a65e559747f45140168] ...
	I0916 11:16:26.966551  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8cfe956c68340395c79fe3267d07b586671c86715a48a65e559747f45140168"
	I0916 11:16:27.001861  298514 logs.go:123] Gathering logs for container status ...
	I0916 11:16:27.001887  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:16:27.041915  298514 logs.go:123] Gathering logs for kubelet ...
	I0916 11:16:27.041942  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:16:27.102994  298514 logs.go:123] Gathering logs for kube-apiserver [debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a] ...
	I0916 11:16:27.103038  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a"
	I0916 11:16:27.149499  298514 logs.go:123] Gathering logs for etcd [e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0] ...
	I0916 11:16:27.149538  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0"
	I0916 11:16:27.188713  298514 logs.go:123] Gathering logs for coredns [3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d] ...
	I0916 11:16:27.188743  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d"
	I0916 11:16:27.226713  298514 logs.go:123] Gathering logs for kube-proxy [7993c3f14244f58c4d9163c2b8d1fa3eb2846ccde92cdbb33932dc072a1106c3] ...
	I0916 11:16:27.226753  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7993c3f14244f58c4d9163c2b8d1fa3eb2846ccde92cdbb33932dc072a1106c3"
	I0916 11:16:27.262708  298514 logs.go:123] Gathering logs for kube-controller-manager [98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32] ...
	I0916 11:16:27.262734  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32"
	I0916 11:16:27.313170  298514 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:16:27.313213  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:16:27.408418  298514 logs.go:123] Gathering logs for kube-apiserver [cdad97219867df2ded19d219654472a41674b342b9961e00c7f6e4451fd8d72c] ...
	I0916 11:16:27.408449  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdad97219867df2ded19d219654472a41674b342b9961e00c7f6e4451fd8d72c"
	I0916 11:16:27.450793  298514 logs.go:123] Gathering logs for kubernetes-dashboard [3c7eb2cb404402dd137d29ee9ac327b9c01414947d1986518ce39867d5a0d233] ...
	I0916 11:16:27.450827  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7eb2cb404402dd137d29ee9ac327b9c01414947d1986518ce39867d5a0d233"
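Each "Gathering logs for ..." pair above boils down to two crictl calls per component: `crictl ps -a --quiet --name=<component>` to enumerate container IDs (two IDs per component here, because the restart test leaves an exited instance beside the running one), then `crictl logs --tail 400 <id>` for each ID found. A local sketch of that collection loop (minikube runs the same commands over SSH via ssh_runner):

// gatherlogs.go -- sketch of the crictl-based log collection in this section:
// enumerate container IDs per control-plane component, then tail each one's logs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, name := range components {
		// -a (State:all) picks up exited instances too -- hence two IDs after a restart.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", name, err)
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Printf("==> %s [%s] <==\n", name, id)
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Print(string(logs))
		}
	}
}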
	I0916 11:16:29.988111  298514 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:16:30.000271  298514 api_server.go:72] duration metric: took 4m8.265710851s to wait for apiserver process to appear ...
	I0916 11:16:30.000296  298514 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:16:30.000338  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:16:30.000380  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:16:30.034505  298514 cri.go:89] found id: "cdad97219867df2ded19d219654472a41674b342b9961e00c7f6e4451fd8d72c"
	I0916 11:16:30.034530  298514 cri.go:89] found id: "debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a"
	I0916 11:16:30.034535  298514 cri.go:89] found id: ""
	I0916 11:16:30.034543  298514 logs.go:276] 2 containers: [cdad97219867df2ded19d219654472a41674b342b9961e00c7f6e4451fd8d72c debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a]
	I0916 11:16:30.034590  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.038165  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.041488  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:16:30.041555  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:16:27.277860  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:29.776485  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:31.777071  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:31.129617  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:33.628636  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:30.075096  298514 cri.go:89] found id: "96fb65ed9d834089ed0425c6be8cb6a8e9fa0f9dd4787e4b0980d1581a756b54"
	I0916 11:16:30.075119  298514 cri.go:89] found id: "e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0"
	I0916 11:16:30.075125  298514 cri.go:89] found id: ""
	I0916 11:16:30.075133  298514 logs.go:276] 2 containers: [96fb65ed9d834089ed0425c6be8cb6a8e9fa0f9dd4787e4b0980d1581a756b54 e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0]
	I0916 11:16:30.075186  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.078881  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.082205  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:16:30.082259  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:16:30.115916  298514 cri.go:89] found id: "5d29018d4a6205f4a5647facca4407ea2acaa36c5a885457f47b7a31bdca8de0"
	I0916 11:16:30.115939  298514 cri.go:89] found id: "3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d"
	I0916 11:16:30.115943  298514 cri.go:89] found id: ""
	I0916 11:16:30.115950  298514 logs.go:276] 2 containers: [5d29018d4a6205f4a5647facca4407ea2acaa36c5a885457f47b7a31bdca8de0 3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d]
	I0916 11:16:30.115992  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.119417  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.122694  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:16:30.122754  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:16:30.155234  298514 cri.go:89] found id: "ac979bde4e227043cfb8483a4d54a734373a62012be1ef212239ef8b959912c1"
	I0916 11:16:30.155264  298514 cri.go:89] found id: "7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10"
	I0916 11:16:30.155269  298514 cri.go:89] found id: ""
	I0916 11:16:30.155277  298514 logs.go:276] 2 containers: [ac979bde4e227043cfb8483a4d54a734373a62012be1ef212239ef8b959912c1 7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10]
	I0916 11:16:30.155323  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.158914  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.162356  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:16:30.162417  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:16:30.195512  298514 cri.go:89] found id: "7993c3f14244f58c4d9163c2b8d1fa3eb2846ccde92cdbb33932dc072a1106c3"
	I0916 11:16:30.195532  298514 cri.go:89] found id: "c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae"
	I0916 11:16:30.195541  298514 cri.go:89] found id: ""
	I0916 11:16:30.195547  298514 logs.go:276] 2 containers: [7993c3f14244f58c4d9163c2b8d1fa3eb2846ccde92cdbb33932dc072a1106c3 c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae]
	I0916 11:16:30.195589  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.199010  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.202238  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:16:30.202294  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:16:30.236715  298514 cri.go:89] found id: "e263620e71d778a5dfa8791096a3df362173cc1ea97067ed4faf292a6c5581ba"
	I0916 11:16:30.236741  298514 cri.go:89] found id: "98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32"
	I0916 11:16:30.236747  298514 cri.go:89] found id: ""
	I0916 11:16:30.236755  298514 logs.go:276] 2 containers: [e263620e71d778a5dfa8791096a3df362173cc1ea97067ed4faf292a6c5581ba 98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32]
	I0916 11:16:30.236812  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.240313  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.243437  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:16:30.243493  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:16:30.278160  298514 cri.go:89] found id: "bce0bf0824d9328eb0646e3110f5ca1b80318b72666f29555d3ed24d7aa08d1c"
	I0916 11:16:30.278186  298514 cri.go:89] found id: "2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6"
	I0916 11:16:30.278191  298514 cri.go:89] found id: ""
	I0916 11:16:30.278200  298514 logs.go:276] 2 containers: [bce0bf0824d9328eb0646e3110f5ca1b80318b72666f29555d3ed24d7aa08d1c 2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6]
	I0916 11:16:30.278283  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.281817  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.285240  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:16:30.285301  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:16:30.320028  298514 cri.go:89] found id: "a004cb2b980056e835fc7f24bc7be1594c5efffc763e0e1af45bb0862aa43203"
	I0916 11:16:30.320051  298514 cri.go:89] found id: "b8cfe956c68340395c79fe3267d07b586671c86715a48a65e559747f45140168"
	I0916 11:16:30.320054  298514 cri.go:89] found id: ""
	I0916 11:16:30.320062  298514 logs.go:276] 2 containers: [a004cb2b980056e835fc7f24bc7be1594c5efffc763e0e1af45bb0862aa43203 b8cfe956c68340395c79fe3267d07b586671c86715a48a65e559747f45140168]
	I0916 11:16:30.320104  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.323558  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.326958  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:16:30.327019  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:16:30.360954  298514 cri.go:89] found id: "3c7eb2cb404402dd137d29ee9ac327b9c01414947d1986518ce39867d5a0d233"
	I0916 11:16:30.360974  298514 cri.go:89] found id: ""
	I0916 11:16:30.360981  298514 logs.go:276] 1 containers: [3c7eb2cb404402dd137d29ee9ac327b9c01414947d1986518ce39867d5a0d233]
	I0916 11:16:30.361029  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:30.364833  298514 logs.go:123] Gathering logs for coredns [3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d] ...
	I0916 11:16:30.364862  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d"
	I0916 11:16:30.399153  298514 logs.go:123] Gathering logs for kube-scheduler [ac979bde4e227043cfb8483a4d54a734373a62012be1ef212239ef8b959912c1] ...
	I0916 11:16:30.399185  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac979bde4e227043cfb8483a4d54a734373a62012be1ef212239ef8b959912c1"
	I0916 11:16:30.434133  298514 logs.go:123] Gathering logs for kube-proxy [c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae] ...
	I0916 11:16:30.434163  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae"
	I0916 11:16:30.470397  298514 logs.go:123] Gathering logs for kube-controller-manager [e263620e71d778a5dfa8791096a3df362173cc1ea97067ed4faf292a6c5581ba] ...
	I0916 11:16:30.470424  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e263620e71d778a5dfa8791096a3df362173cc1ea97067ed4faf292a6c5581ba"
	I0916 11:16:30.527288  298514 logs.go:123] Gathering logs for kindnet [2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6] ...
	I0916 11:16:30.527328  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6"
	I0916 11:16:30.563297  298514 logs.go:123] Gathering logs for kubelet ...
	I0916 11:16:30.563329  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:16:30.620871  298514 logs.go:123] Gathering logs for dmesg ...
	I0916 11:16:30.620911  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:16:30.647118  298514 logs.go:123] Gathering logs for etcd [96fb65ed9d834089ed0425c6be8cb6a8e9fa0f9dd4787e4b0980d1581a756b54] ...
	I0916 11:16:30.647151  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96fb65ed9d834089ed0425c6be8cb6a8e9fa0f9dd4787e4b0980d1581a756b54"
	I0916 11:16:30.686150  298514 logs.go:123] Gathering logs for etcd [e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0] ...
	I0916 11:16:30.686182  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0"
	I0916 11:16:30.727586  298514 logs.go:123] Gathering logs for coredns [5d29018d4a6205f4a5647facca4407ea2acaa36c5a885457f47b7a31bdca8de0] ...
	I0916 11:16:30.727620  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d29018d4a6205f4a5647facca4407ea2acaa36c5a885457f47b7a31bdca8de0"
	I0916 11:16:30.763676  298514 logs.go:123] Gathering logs for kindnet [bce0bf0824d9328eb0646e3110f5ca1b80318b72666f29555d3ed24d7aa08d1c] ...
	I0916 11:16:30.763705  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bce0bf0824d9328eb0646e3110f5ca1b80318b72666f29555d3ed24d7aa08d1c"
	I0916 11:16:30.801234  298514 logs.go:123] Gathering logs for kubernetes-dashboard [3c7eb2cb404402dd137d29ee9ac327b9c01414947d1986518ce39867d5a0d233] ...
	I0916 11:16:30.801271  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7eb2cb404402dd137d29ee9ac327b9c01414947d1986518ce39867d5a0d233"
	I0916 11:16:30.837420  298514 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:16:30.837457  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:16:30.935124  298514 logs.go:123] Gathering logs for kube-apiserver [debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a] ...
	I0916 11:16:30.935156  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a"
	I0916 11:16:30.979024  298514 logs.go:123] Gathering logs for kube-proxy [7993c3f14244f58c4d9163c2b8d1fa3eb2846ccde92cdbb33932dc072a1106c3] ...
	I0916 11:16:30.979057  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7993c3f14244f58c4d9163c2b8d1fa3eb2846ccde92cdbb33932dc072a1106c3"
	I0916 11:16:31.014932  298514 logs.go:123] Gathering logs for kube-controller-manager [98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32] ...
	I0916 11:16:31.014960  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32"
	I0916 11:16:31.065233  298514 logs.go:123] Gathering logs for storage-provisioner [a004cb2b980056e835fc7f24bc7be1594c5efffc763e0e1af45bb0862aa43203] ...
	I0916 11:16:31.065269  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a004cb2b980056e835fc7f24bc7be1594c5efffc763e0e1af45bb0862aa43203"
	I0916 11:16:31.101366  298514 logs.go:123] Gathering logs for containerd ...
	I0916 11:16:31.101397  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:16:31.156125  298514 logs.go:123] Gathering logs for kube-apiserver [cdad97219867df2ded19d219654472a41674b342b9961e00c7f6e4451fd8d72c] ...
	I0916 11:16:31.156167  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdad97219867df2ded19d219654472a41674b342b9961e00c7f6e4451fd8d72c"
	I0916 11:16:31.199054  298514 logs.go:123] Gathering logs for kube-scheduler [7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10] ...
	I0916 11:16:31.199092  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10"
	I0916 11:16:31.242057  298514 logs.go:123] Gathering logs for storage-provisioner [b8cfe956c68340395c79fe3267d07b586671c86715a48a65e559747f45140168] ...
	I0916 11:16:31.242094  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8cfe956c68340395c79fe3267d07b586671c86715a48a65e559747f45140168"
	I0916 11:16:31.276728  298514 logs.go:123] Gathering logs for container status ...
	I0916 11:16:31.276755  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:16:33.816882  298514 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0916 11:16:33.821611  298514 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0916 11:16:33.822767  298514 api_server.go:141] control plane version: v1.31.1
	I0916 11:16:33.822813  298514 api_server.go:131] duration metric: took 3.822508665s to wait for apiserver health ...
	I0916 11:16:33.822824  298514 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:16:33.822857  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:16:33.822915  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:16:33.857709  298514 cri.go:89] found id: "cdad97219867df2ded19d219654472a41674b342b9961e00c7f6e4451fd8d72c"
	I0916 11:16:33.857730  298514 cri.go:89] found id: "debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a"
	I0916 11:16:33.857734  298514 cri.go:89] found id: ""
	I0916 11:16:33.857741  298514 logs.go:276] 2 containers: [cdad97219867df2ded19d219654472a41674b342b9961e00c7f6e4451fd8d72c debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a]
	I0916 11:16:33.857786  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:33.861596  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:33.865033  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:16:33.865098  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:16:33.902573  298514 cri.go:89] found id: "96fb65ed9d834089ed0425c6be8cb6a8e9fa0f9dd4787e4b0980d1581a756b54"
	I0916 11:16:33.902598  298514 cri.go:89] found id: "e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0"
	I0916 11:16:33.902604  298514 cri.go:89] found id: ""
	I0916 11:16:33.902613  298514 logs.go:276] 2 containers: [96fb65ed9d834089ed0425c6be8cb6a8e9fa0f9dd4787e4b0980d1581a756b54 e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0]
	I0916 11:16:33.902675  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:33.906377  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:33.909806  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:16:33.909872  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:16:33.944825  298514 cri.go:89] found id: "5d29018d4a6205f4a5647facca4407ea2acaa36c5a885457f47b7a31bdca8de0"
	I0916 11:16:33.944846  298514 cri.go:89] found id: "3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d"
	I0916 11:16:33.944851  298514 cri.go:89] found id: ""
	I0916 11:16:33.944859  298514 logs.go:276] 2 containers: [5d29018d4a6205f4a5647facca4407ea2acaa36c5a885457f47b7a31bdca8de0 3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d]
	I0916 11:16:33.944915  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:33.948509  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:33.951726  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:16:33.951857  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:16:33.987162  298514 cri.go:89] found id: "ac979bde4e227043cfb8483a4d54a734373a62012be1ef212239ef8b959912c1"
	I0916 11:16:33.987185  298514 cri.go:89] found id: "7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10"
	I0916 11:16:33.987189  298514 cri.go:89] found id: ""
	I0916 11:16:33.987196  298514 logs.go:276] 2 containers: [ac979bde4e227043cfb8483a4d54a734373a62012be1ef212239ef8b959912c1 7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10]
	I0916 11:16:33.987252  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:33.991483  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:33.995577  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:16:33.995646  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:16:34.029480  298514 cri.go:89] found id: "7993c3f14244f58c4d9163c2b8d1fa3eb2846ccde92cdbb33932dc072a1106c3"
	I0916 11:16:34.029505  298514 cri.go:89] found id: "c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae"
	I0916 11:16:34.029511  298514 cri.go:89] found id: ""
	I0916 11:16:34.029522  298514 logs.go:276] 2 containers: [7993c3f14244f58c4d9163c2b8d1fa3eb2846ccde92cdbb33932dc072a1106c3 c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae]
	I0916 11:16:34.029580  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:34.033356  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:34.036884  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:16:34.036966  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:16:34.071008  298514 cri.go:89] found id: "e263620e71d778a5dfa8791096a3df362173cc1ea97067ed4faf292a6c5581ba"
	I0916 11:16:34.071029  298514 cri.go:89] found id: "98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32"
	I0916 11:16:34.071032  298514 cri.go:89] found id: ""
	I0916 11:16:34.071038  298514 logs.go:276] 2 containers: [e263620e71d778a5dfa8791096a3df362173cc1ea97067ed4faf292a6c5581ba 98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32]
	I0916 11:16:34.071093  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:34.074636  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:34.078626  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:16:34.078700  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:16:34.112159  298514 cri.go:89] found id: "bce0bf0824d9328eb0646e3110f5ca1b80318b72666f29555d3ed24d7aa08d1c"
	I0916 11:16:34.112186  298514 cri.go:89] found id: "2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6"
	I0916 11:16:34.112193  298514 cri.go:89] found id: ""
	I0916 11:16:34.112202  298514 logs.go:276] 2 containers: [bce0bf0824d9328eb0646e3110f5ca1b80318b72666f29555d3ed24d7aa08d1c 2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6]
	I0916 11:16:34.112252  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:34.115829  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:34.119101  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:16:34.119189  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:16:34.154559  298514 cri.go:89] found id: "3c7eb2cb404402dd137d29ee9ac327b9c01414947d1986518ce39867d5a0d233"
	I0916 11:16:34.154586  298514 cri.go:89] found id: ""
	I0916 11:16:34.154603  298514 logs.go:276] 1 containers: [3c7eb2cb404402dd137d29ee9ac327b9c01414947d1986518ce39867d5a0d233]
	I0916 11:16:34.154655  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:34.158128  298514 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:16:34.158200  298514 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:16:34.192357  298514 cri.go:89] found id: "a004cb2b980056e835fc7f24bc7be1594c5efffc763e0e1af45bb0862aa43203"
	I0916 11:16:34.192384  298514 cri.go:89] found id: "b8cfe956c68340395c79fe3267d07b586671c86715a48a65e559747f45140168"
	I0916 11:16:34.192392  298514 cri.go:89] found id: ""
	I0916 11:16:34.192402  298514 logs.go:276] 2 containers: [a004cb2b980056e835fc7f24bc7be1594c5efffc763e0e1af45bb0862aa43203 b8cfe956c68340395c79fe3267d07b586671c86715a48a65e559747f45140168]
	I0916 11:16:34.192450  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:34.195844  298514 ssh_runner.go:195] Run: which crictl
	I0916 11:16:34.198982  298514 logs.go:123] Gathering logs for kubelet ...
	I0916 11:16:34.199003  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:16:34.254338  298514 logs.go:123] Gathering logs for etcd [e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0] ...
	I0916 11:16:34.254376  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0"
	I0916 11:16:34.294709  298514 logs.go:123] Gathering logs for kube-proxy [c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae] ...
	I0916 11:16:34.294741  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae"
	I0916 11:16:34.331687  298514 logs.go:123] Gathering logs for kube-controller-manager [e263620e71d778a5dfa8791096a3df362173cc1ea97067ed4faf292a6c5581ba] ...
	I0916 11:16:34.331714  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e263620e71d778a5dfa8791096a3df362173cc1ea97067ed4faf292a6c5581ba"
	I0916 11:16:34.383519  298514 logs.go:123] Gathering logs for kindnet [bce0bf0824d9328eb0646e3110f5ca1b80318b72666f29555d3ed24d7aa08d1c] ...
	I0916 11:16:34.383552  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bce0bf0824d9328eb0646e3110f5ca1b80318b72666f29555d3ed24d7aa08d1c"
	I0916 11:16:34.419900  298514 logs.go:123] Gathering logs for kubernetes-dashboard [3c7eb2cb404402dd137d29ee9ac327b9c01414947d1986518ce39867d5a0d233] ...
	I0916 11:16:34.419928  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c7eb2cb404402dd137d29ee9ac327b9c01414947d1986518ce39867d5a0d233"
	I0916 11:16:34.454310  298514 logs.go:123] Gathering logs for kube-apiserver [debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a] ...
	I0916 11:16:34.454336  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a"
	I0916 11:16:34.498499  298514 logs.go:123] Gathering logs for etcd [96fb65ed9d834089ed0425c6be8cb6a8e9fa0f9dd4787e4b0980d1581a756b54] ...
	I0916 11:16:34.498534  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 96fb65ed9d834089ed0425c6be8cb6a8e9fa0f9dd4787e4b0980d1581a756b54"
	I0916 11:16:34.540118  298514 logs.go:123] Gathering logs for coredns [3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d] ...
	I0916 11:16:34.540146  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d"
	I0916 11:16:34.574427  298514 logs.go:123] Gathering logs for storage-provisioner [a004cb2b980056e835fc7f24bc7be1594c5efffc763e0e1af45bb0862aa43203] ...
	I0916 11:16:34.574454  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a004cb2b980056e835fc7f24bc7be1594c5efffc763e0e1af45bb0862aa43203"
	I0916 11:16:34.608414  298514 logs.go:123] Gathering logs for storage-provisioner [b8cfe956c68340395c79fe3267d07b586671c86715a48a65e559747f45140168] ...
	I0916 11:16:34.608444  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8cfe956c68340395c79fe3267d07b586671c86715a48a65e559747f45140168"
	I0916 11:16:34.645104  298514 logs.go:123] Gathering logs for containerd ...
	I0916 11:16:34.645131  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:16:34.695681  298514 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:16:34.695716  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:16:34.793257  298514 logs.go:123] Gathering logs for coredns [5d29018d4a6205f4a5647facca4407ea2acaa36c5a885457f47b7a31bdca8de0] ...
	I0916 11:16:34.793296  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5d29018d4a6205f4a5647facca4407ea2acaa36c5a885457f47b7a31bdca8de0"
	I0916 11:16:34.829034  298514 logs.go:123] Gathering logs for kube-scheduler [ac979bde4e227043cfb8483a4d54a734373a62012be1ef212239ef8b959912c1] ...
	I0916 11:16:34.829063  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac979bde4e227043cfb8483a4d54a734373a62012be1ef212239ef8b959912c1"
	I0916 11:16:34.864119  298514 logs.go:123] Gathering logs for kindnet [2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6] ...
	I0916 11:16:34.864148  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6"
	I0916 11:16:34.898040  298514 logs.go:123] Gathering logs for dmesg ...
	I0916 11:16:34.898065  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:16:34.921324  298514 logs.go:123] Gathering logs for kube-apiserver [cdad97219867df2ded19d219654472a41674b342b9961e00c7f6e4451fd8d72c] ...
	I0916 11:16:34.921374  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdad97219867df2ded19d219654472a41674b342b9961e00c7f6e4451fd8d72c"
	I0916 11:16:34.965269  298514 logs.go:123] Gathering logs for kube-scheduler [7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10] ...
	I0916 11:16:34.965307  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10"
	I0916 11:16:35.007052  298514 logs.go:123] Gathering logs for kube-proxy [7993c3f14244f58c4d9163c2b8d1fa3eb2846ccde92cdbb33932dc072a1106c3] ...
	I0916 11:16:35.007083  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7993c3f14244f58c4d9163c2b8d1fa3eb2846ccde92cdbb33932dc072a1106c3"
	I0916 11:16:35.042678  298514 logs.go:123] Gathering logs for kube-controller-manager [98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32] ...
	I0916 11:16:35.042712  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32"
	I0916 11:16:34.277127  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:36.777191  283294 pod_ready.go:103] pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:35.093891  298514 logs.go:123] Gathering logs for container status ...
	I0916 11:16:35.093928  298514 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
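
	Editor's note: the block above is minikube's log-gathering loop. For each control-plane component it lists matching container IDs with crictl, then tails each container's log; kubelet and containerd logs come from journald, kernel warnings from a filtered dmesg, and the final container-status step falls back to docker ps -a when crictl is unavailable. A minimal shell sketch of the same pattern, using only the commands visible in the log above (the component name kube-apiserver is just an example):

	    # List all containers (any state) whose name matches the component,
	    # then tail the last 400 lines of each one's log -- the same calls
	    # the test driver issues over SSH above.
	    for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
	        sudo crictl logs --tail 400 "$id"
	    done

	    # Unit logs come from journald, kernel warnings from dmesg
	    # (-P no pager, -H human-readable, -L=never no color):
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
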
	I0916 11:16:37.645750  298514 system_pods.go:59] 9 kube-system pods found
	I0916 11:16:37.645785  298514 system_pods.go:61] "coredns-7c65d6cfc9-dmv6t" [95a9589e-1385-4fb0-8b68-fb26098daf01] Running
	I0916 11:16:37.645792  298514 system_pods.go:61] "etcd-embed-certs-679624" [b351fd38-6c30-4fa6-b5da-159580232c88] Running
	I0916 11:16:37.645798  298514 system_pods.go:61] "kindnet-78kp5" [795aad3a-f96f-4477-9bd5-49a233890f1e] Running
	I0916 11:16:37.645803  298514 system_pods.go:61] "kube-apiserver-embed-certs-679624" [858cb7c8-8e66-4952-bf89-228525cffafb] Running
	I0916 11:16:37.645809  298514 system_pods.go:61] "kube-controller-manager-embed-certs-679624" [f6aaf262-2f80-4875-bbbf-1b5918a23787] Running
	I0916 11:16:37.645814  298514 system_pods.go:61] "kube-proxy-bt6k2" [cae0a1e2-c041-4c6c-8772-978a2c544879] Running
	I0916 11:16:37.645820  298514 system_pods.go:61] "kube-scheduler-embed-certs-679624" [3eb8a130-09a9-4ae6-b12a-6dff3309cbae] Running
	I0916 11:16:37.645834  298514 system_pods.go:61] "metrics-server-6867b74b74-qgvl9" [b0d684f3-ff91-4996-8d9d-23936b12c814] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 11:16:37.645841  298514 system_pods.go:61] "storage-provisioner" [3b5477b8-ac39-4acc-9e16-a13a7b1d3e10] Running
	I0916 11:16:37.645854  298514 system_pods.go:74] duration metric: took 3.823021734s to wait for pod list to return data ...
	I0916 11:16:37.645868  298514 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:16:37.649067  298514 default_sa.go:45] found service account: "default"
	I0916 11:16:37.649101  298514 default_sa.go:55] duration metric: took 3.225173ms for default service account to be created ...
	I0916 11:16:37.649112  298514 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:16:37.653867  298514 system_pods.go:86] 9 kube-system pods found
	I0916 11:16:37.653894  298514 system_pods.go:89] "coredns-7c65d6cfc9-dmv6t" [95a9589e-1385-4fb0-8b68-fb26098daf01] Running
	I0916 11:16:37.653900  298514 system_pods.go:89] "etcd-embed-certs-679624" [b351fd38-6c30-4fa6-b5da-159580232c88] Running
	I0916 11:16:37.653904  298514 system_pods.go:89] "kindnet-78kp5" [795aad3a-f96f-4477-9bd5-49a233890f1e] Running
	I0916 11:16:37.653908  298514 system_pods.go:89] "kube-apiserver-embed-certs-679624" [858cb7c8-8e66-4952-bf89-228525cffafb] Running
	I0916 11:16:37.653911  298514 system_pods.go:89] "kube-controller-manager-embed-certs-679624" [f6aaf262-2f80-4875-bbbf-1b5918a23787] Running
	I0916 11:16:37.653914  298514 system_pods.go:89] "kube-proxy-bt6k2" [cae0a1e2-c041-4c6c-8772-978a2c544879] Running
	I0916 11:16:37.653918  298514 system_pods.go:89] "kube-scheduler-embed-certs-679624" [3eb8a130-09a9-4ae6-b12a-6dff3309cbae] Running
	I0916 11:16:37.653924  298514 system_pods.go:89] "metrics-server-6867b74b74-qgvl9" [b0d684f3-ff91-4996-8d9d-23936b12c814] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 11:16:37.653928  298514 system_pods.go:89] "storage-provisioner" [3b5477b8-ac39-4acc-9e16-a13a7b1d3e10] Running
	I0916 11:16:37.653935  298514 system_pods.go:126] duration metric: took 4.818579ms to wait for k8s-apps to be running ...
	I0916 11:16:37.653943  298514 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:16:37.653988  298514 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:16:37.665862  298514 system_svc.go:56] duration metric: took 11.909294ms WaitForService to wait for kubelet
	I0916 11:16:37.665892  298514 kubeadm.go:582] duration metric: took 4m15.931339337s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
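
	Editor's note: the readiness gates summarized in the wait map above (apiserver, apps_running, default_sa, kubelet, node_ready, system_pods) can be reproduced by hand. A hedged sketch, assuming kubectl is already pointed at the embed-certs-679624 cluster; note that, as the log shows, the Pending metrics-server pod did not block these checks:

	    # Roughly the same checks the driver performs before declaring the cluster ready:
	    kubectl get pods -n kube-system            # system pods should be Running
	    kubectl get serviceaccount default         # default service account must exist
	    sudo systemctl is-active --quiet kubelet && echo kubelet running
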
	I0916 11:16:37.665912  298514 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:16:37.669073  298514 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:16:37.669112  298514 node_conditions.go:123] node cpu capacity is 8
	I0916 11:16:37.669124  298514 node_conditions.go:105] duration metric: took 3.208293ms to run NodePressure ...
	I0916 11:16:37.669134  298514 start.go:241] waiting for startup goroutines ...
	I0916 11:16:37.669140  298514 start.go:246] waiting for cluster config update ...
	I0916 11:16:37.669151  298514 start.go:255] writing updated cluster config ...
	I0916 11:16:37.669404  298514 ssh_runner.go:195] Run: rm -f paused
	I0916 11:16:37.675701  298514 out.go:177] * Done! kubectl is now configured to use "embed-certs-679624" cluster and "default" namespace by default
	E0916 11:16:37.677144  298514 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
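
	Editor's note: the single error in this otherwise successful start is the "exec format error" above, raised when the driver shells out to the host's /usr/local/bin/kubectl after the cluster is already configured, so it is non-fatal here. That errno generally means the binary's architecture (or a truncated download) does not match the host. A quick diagnostic sketch using standard tools (not part of the test run):

	    # Compare the binary's target architecture with the host's:
	    file /usr/local/bin/kubectl    # e.g. "ELF 64-bit LSB executable, x86-64"
	    uname -m                       # host architecture, e.g. x86_64
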
	I0916 11:16:36.128843  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:38.129599  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:37.777631  283294 pod_ready.go:82] duration metric: took 4m0.006929261s for pod "metrics-server-9975d5f86-4f2jl" in "kube-system" namespace to be "Ready" ...
	E0916 11:16:37.777659  283294 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0916 11:16:37.777669  283294 pod_ready.go:39] duration metric: took 5m29.041786645s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:16:37.777689  283294 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:16:37.777725  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:16:37.777777  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:16:37.813044  283294 cri.go:89] found id: "b8b7a2952008364403b8b3a7f0ec5cca63e9cd244427d95edd2b992a7dff815d"
	I0916 11:16:37.813063  283294 cri.go:89] found id: ""
	I0916 11:16:37.813072  283294 logs.go:276] 1 containers: [b8b7a2952008364403b8b3a7f0ec5cca63e9cd244427d95edd2b992a7dff815d]
	I0916 11:16:37.813135  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:37.816744  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:16:37.816813  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:16:37.850919  283294 cri.go:89] found id: "6f10dd2ab1448676f85f2e14b088b767bcca2ef3807aaa62f0370cbf81e594dc"
	I0916 11:16:37.850942  283294 cri.go:89] found id: ""
	I0916 11:16:37.850950  283294 logs.go:276] 1 containers: [6f10dd2ab1448676f85f2e14b088b767bcca2ef3807aaa62f0370cbf81e594dc]
	I0916 11:16:37.850994  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:37.854457  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:16:37.854527  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:16:37.888938  283294 cri.go:89] found id: "7e01e437eafa1ace1f5ee4f4bfd2759ea78df9316412a17e9a0e8ae750c20523"
	I0916 11:16:37.888964  283294 cri.go:89] found id: ""
	I0916 11:16:37.888974  283294 logs.go:276] 1 containers: [7e01e437eafa1ace1f5ee4f4bfd2759ea78df9316412a17e9a0e8ae750c20523]
	I0916 11:16:37.889027  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:37.892481  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:16:37.892565  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:16:37.929995  283294 cri.go:89] found id: "d4e91e6acd99eff1a1f03c740be536efde73dc3441d0ce21da107f92e07c7aa8"
	I0916 11:16:37.930019  283294 cri.go:89] found id: ""
	I0916 11:16:37.930027  283294 logs.go:276] 1 containers: [d4e91e6acd99eff1a1f03c740be536efde73dc3441d0ce21da107f92e07c7aa8]
	I0916 11:16:37.930073  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:37.935700  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:16:37.935855  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:16:37.976752  283294 cri.go:89] found id: "1145c23e87dee884c7c908196bd5c7de96e94947ae439d9c145f0c4dba1630fc"
	I0916 11:16:37.976791  283294 cri.go:89] found id: ""
	I0916 11:16:37.976801  283294 logs.go:276] 1 containers: [1145c23e87dee884c7c908196bd5c7de96e94947ae439d9c145f0c4dba1630fc]
	I0916 11:16:37.976851  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:37.980760  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:16:37.980824  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:16:38.018626  283294 cri.go:89] found id: "8dedfda17aef64d94e616f000f9e226b29f8f0ee8b642400d83681bf9b9f8330"
	I0916 11:16:38.018652  283294 cri.go:89] found id: ""
	I0916 11:16:38.018663  283294 logs.go:276] 1 containers: [8dedfda17aef64d94e616f000f9e226b29f8f0ee8b642400d83681bf9b9f8330]
	I0916 11:16:38.018722  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:38.022766  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:16:38.022840  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:16:38.056878  283294 cri.go:89] found id: "e812c7a8976381cd9cfc69cfc3ec18d19286650ecfda38b6e08f3545c8c27c65"
	I0916 11:16:38.056897  283294 cri.go:89] found id: ""
	I0916 11:16:38.056904  283294 logs.go:276] 1 containers: [e812c7a8976381cd9cfc69cfc3ec18d19286650ecfda38b6e08f3545c8c27c65]
	I0916 11:16:38.056953  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:38.060382  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:16:38.060442  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:16:38.095340  283294 cri.go:89] found id: "4b7e57072db41f8a563f7872a8c96af29447848dc1694db9b8392a1b688b1de4"
	I0916 11:16:38.095365  283294 cri.go:89] found id: "b6e65b347883fafb9798059fa58b334346a8b29c61fe40422f2cf04c355fcd71"
	I0916 11:16:38.095372  283294 cri.go:89] found id: ""
	I0916 11:16:38.095380  283294 logs.go:276] 2 containers: [4b7e57072db41f8a563f7872a8c96af29447848dc1694db9b8392a1b688b1de4 b6e65b347883fafb9798059fa58b334346a8b29c61fe40422f2cf04c355fcd71]
	I0916 11:16:38.095447  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:38.099232  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:38.102484  283294 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:16:38.102551  283294 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:16:38.136765  283294 cri.go:89] found id: "b8894a1f49c45a030260a48d3dd12ddc81b2ba27c652688d77576697c81124e8"
	I0916 11:16:38.136790  283294 cri.go:89] found id: ""
	I0916 11:16:38.136799  283294 logs.go:276] 1 containers: [b8894a1f49c45a030260a48d3dd12ddc81b2ba27c652688d77576697c81124e8]
	I0916 11:16:38.136858  283294 ssh_runner.go:195] Run: which crictl
	I0916 11:16:38.140437  283294 logs.go:123] Gathering logs for kube-apiserver [b8b7a2952008364403b8b3a7f0ec5cca63e9cd244427d95edd2b992a7dff815d] ...
	I0916 11:16:38.140461  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8b7a2952008364403b8b3a7f0ec5cca63e9cd244427d95edd2b992a7dff815d"
	I0916 11:16:38.198162  283294 logs.go:123] Gathering logs for kube-proxy [1145c23e87dee884c7c908196bd5c7de96e94947ae439d9c145f0c4dba1630fc] ...
	I0916 11:16:38.198196  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1145c23e87dee884c7c908196bd5c7de96e94947ae439d9c145f0c4dba1630fc"
	I0916 11:16:38.232396  283294 logs.go:123] Gathering logs for kubernetes-dashboard [b8894a1f49c45a030260a48d3dd12ddc81b2ba27c652688d77576697c81124e8] ...
	I0916 11:16:38.232431  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b8894a1f49c45a030260a48d3dd12ddc81b2ba27c652688d77576697c81124e8"
	I0916 11:16:38.269217  283294 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:16:38.269273  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:16:38.372168  283294 logs.go:123] Gathering logs for coredns [7e01e437eafa1ace1f5ee4f4bfd2759ea78df9316412a17e9a0e8ae750c20523] ...
	I0916 11:16:38.372197  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e01e437eafa1ace1f5ee4f4bfd2759ea78df9316412a17e9a0e8ae750c20523"
	I0916 11:16:38.405501  283294 logs.go:123] Gathering logs for kube-controller-manager [8dedfda17aef64d94e616f000f9e226b29f8f0ee8b642400d83681bf9b9f8330] ...
	I0916 11:16:38.405534  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8dedfda17aef64d94e616f000f9e226b29f8f0ee8b642400d83681bf9b9f8330"
	I0916 11:16:38.460683  283294 logs.go:123] Gathering logs for containerd ...
	I0916 11:16:38.460721  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:16:38.521937  283294 logs.go:123] Gathering logs for dmesg ...
	I0916 11:16:38.521975  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:16:38.545968  283294 logs.go:123] Gathering logs for storage-provisioner [4b7e57072db41f8a563f7872a8c96af29447848dc1694db9b8392a1b688b1de4] ...
	I0916 11:16:38.546005  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b7e57072db41f8a563f7872a8c96af29447848dc1694db9b8392a1b688b1de4"
	I0916 11:16:38.580893  283294 logs.go:123] Gathering logs for storage-provisioner [b6e65b347883fafb9798059fa58b334346a8b29c61fe40422f2cf04c355fcd71] ...
	I0916 11:16:38.580918  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b6e65b347883fafb9798059fa58b334346a8b29c61fe40422f2cf04c355fcd71"
	I0916 11:16:38.614428  283294 logs.go:123] Gathering logs for kindnet [e812c7a8976381cd9cfc69cfc3ec18d19286650ecfda38b6e08f3545c8c27c65] ...
	I0916 11:16:38.614453  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e812c7a8976381cd9cfc69cfc3ec18d19286650ecfda38b6e08f3545c8c27c65"
	I0916 11:16:38.654390  283294 logs.go:123] Gathering logs for container status ...
	I0916 11:16:38.654427  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:16:38.692272  283294 logs.go:123] Gathering logs for kubelet ...
	I0916 11:16:38.692302  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0916 11:16:38.731349  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:08 old-k8s-version-371039 kubelet[1070]: E0916 11:11:08.526119    1070 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-371039" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-371039' and this object
	W0916 11:16:38.731520  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:08 old-k8s-version-371039 kubelet[1070]: E0916 11:11:08.526479    1070 reflector.go:138] object-"kube-system"/"kindnet-token-xjzl9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-xjzl9" is forbidden: User "system:node:old-k8s-version-371039" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-371039' and this object
	W0916 11:16:38.731678  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:08 old-k8s-version-371039 kubelet[1070]: E0916 11:11:08.526594    1070 reflector.go:138] object-"kube-system"/"coredns-token-vcrsr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-vcrsr" is forbidden: User "system:node:old-k8s-version-371039" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-371039' and this object
	W0916 11:16:38.736629  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:10 old-k8s-version-371039 kubelet[1070]: E0916 11:11:10.329499    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	W0916 11:16:38.736777  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:10 old-k8s-version-371039 kubelet[1070]: E0916 11:11:10.534036    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.738562  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:29 old-k8s-version-371039 kubelet[1070]: E0916 11:11:29.636809    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.738818  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:30 old-k8s-version-371039 kubelet[1070]: E0916 11:11:30.640739    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.740585  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:33 old-k8s-version-371039 kubelet[1070]: E0916 11:11:33.064066    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	W0916 11:16:38.741255  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:40 old-k8s-version-371039 kubelet[1070]: E0916 11:11:40.667028    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.741482  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:43 old-k8s-version-371039 kubelet[1070]: E0916 11:11:43.355910    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.741956  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:49 old-k8s-version-371039 kubelet[1070]: E0916 11:11:49.245550    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.743944  283294 logs.go:138] Found kubelet problem: Sep 16 11:11:58 old-k8s-version-371039 kubelet[1070]: E0916 11:11:58.392988    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	W0916 11:16:38.744367  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:03 old-k8s-version-371039 kubelet[1070]: E0916 11:12:03.723963    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.744606  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:09 old-k8s-version-371039 kubelet[1070]: E0916 11:12:09.246379    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.744750  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:11 old-k8s-version-371039 kubelet[1070]: E0916 11:12:11.355765    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.744986  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:20 old-k8s-version-371039 kubelet[1070]: E0916 11:12:20.355410    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.745118  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:25 old-k8s-version-371039 kubelet[1070]: E0916 11:12:25.356081    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.745366  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:32 old-k8s-version-371039 kubelet[1070]: E0916 11:12:32.355402    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.745517  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:38 old-k8s-version-371039 kubelet[1070]: E0916 11:12:38.355913    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.745945  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:44 old-k8s-version-371039 kubelet[1070]: E0916 11:12:44.821826    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.746182  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:49 old-k8s-version-371039 kubelet[1070]: E0916 11:12:49.245692    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.748175  283294 logs.go:138] Found kubelet problem: Sep 16 11:12:50 old-k8s-version-371039 kubelet[1070]: E0916 11:12:50.378557    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	W0916 11:16:38.748435  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:01 old-k8s-version-371039 kubelet[1070]: E0916 11:13:01.355538    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.748572  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:05 old-k8s-version-371039 kubelet[1070]: E0916 11:13:05.355797    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.748825  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:16 old-k8s-version-371039 kubelet[1070]: E0916 11:13:16.355393    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.748981  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:20 old-k8s-version-371039 kubelet[1070]: E0916 11:13:20.355904    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.749251  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:27 old-k8s-version-371039 kubelet[1070]: E0916 11:13:27.355256    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.749389  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:32 old-k8s-version-371039 kubelet[1070]: E0916 11:13:32.355649    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.749628  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:41 old-k8s-version-371039 kubelet[1070]: E0916 11:13:41.355315    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.749762  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:46 old-k8s-version-371039 kubelet[1070]: E0916 11:13:46.355690    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.750052  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:55 old-k8s-version-371039 kubelet[1070]: E0916 11:13:55.355528    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.750249  283294 logs.go:138] Found kubelet problem: Sep 16 11:13:59 old-k8s-version-371039 kubelet[1070]: E0916 11:13:59.355804    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.750706  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:08 old-k8s-version-371039 kubelet[1070]: E0916 11:14:08.004821    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.750944  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:09 old-k8s-version-371039 kubelet[1070]: E0916 11:14:09.245710    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.752864  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:14 old-k8s-version-371039 kubelet[1070]: E0916 11:14:14.388385    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	W0916 11:16:38.753258  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:21 old-k8s-version-371039 kubelet[1070]: E0916 11:14:21.355284    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.753424  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:25 old-k8s-version-371039 kubelet[1070]: E0916 11:14:25.355879    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.753662  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:34 old-k8s-version-371039 kubelet[1070]: E0916 11:14:34.355467    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.753794  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:40 old-k8s-version-371039 kubelet[1070]: E0916 11:14:40.355902    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.754077  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:49 old-k8s-version-371039 kubelet[1070]: E0916 11:14:49.355426    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.754266  283294 logs.go:138] Found kubelet problem: Sep 16 11:14:54 old-k8s-version-371039 kubelet[1070]: E0916 11:14:54.355668    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.754609  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:00 old-k8s-version-371039 kubelet[1070]: E0916 11:15:00.355224    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.754744  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:08 old-k8s-version-371039 kubelet[1070]: E0916 11:15:08.355679    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.754979  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:12 old-k8s-version-371039 kubelet[1070]: E0916 11:15:12.355397    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.755114  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:22 old-k8s-version-371039 kubelet[1070]: E0916 11:15:22.355653    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.755355  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:25 old-k8s-version-371039 kubelet[1070]: E0916 11:15:25.355528    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.755489  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:34 old-k8s-version-371039 kubelet[1070]: E0916 11:15:34.355954    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.755723  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:38 old-k8s-version-371039 kubelet[1070]: E0916 11:15:38.355218    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.755957  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:48 old-k8s-version-371039 kubelet[1070]: E0916 11:15:48.355603    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.756205  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:49 old-k8s-version-371039 kubelet[1070]: E0916 11:15:49.355370    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.756342  283294 logs.go:138] Found kubelet problem: Sep 16 11:15:59 old-k8s-version-371039 kubelet[1070]: E0916 11:15:59.355864    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.756582  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:03 old-k8s-version-371039 kubelet[1070]: E0916 11:16:03.355623    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.756737  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:12 old-k8s-version-371039 kubelet[1070]: E0916 11:16:12.355692    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.756988  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:14 old-k8s-version-371039 kubelet[1070]: E0916 11:16:14.355205    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.757133  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:26 old-k8s-version-371039 kubelet[1070]: E0916 11:16:26.355705    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.757384  283294 logs.go:138] Found kubelet problem: Sep 16 11:16:29 old-k8s-version-371039 kubelet[1070]: E0916 11:16:29.355289    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	I0916 11:16:38.757401  283294 logs.go:123] Gathering logs for etcd [6f10dd2ab1448676f85f2e14b088b767bcca2ef3807aaa62f0370cbf81e594dc] ...
	I0916 11:16:38.757423  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f10dd2ab1448676f85f2e14b088b767bcca2ef3807aaa62f0370cbf81e594dc"
	I0916 11:16:38.801272  283294 logs.go:123] Gathering logs for kube-scheduler [d4e91e6acd99eff1a1f03c740be536efde73dc3441d0ce21da107f92e07c7aa8] ...
	I0916 11:16:38.801305  283294 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4e91e6acd99eff1a1f03c740be536efde73dc3441d0ce21da107f92e07c7aa8"
	I0916 11:16:38.840644  283294 out.go:358] Setting ErrFile to fd 2...
	I0916 11:16:38.840680  283294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0916 11:16:38.840747  283294 out.go:270] X Problems detected in kubelet:
	W0916 11:16:38.840761  283294 out.go:270]   Sep 16 11:16:03 old-k8s-version-371039 kubelet[1070]: E0916 11:16:03.355623    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.840774  283294 out.go:270]   Sep 16 11:16:12 old-k8s-version-371039 kubelet[1070]: E0916 11:16:12.355692    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.840782  283294 out.go:270]   Sep 16 11:16:14 old-k8s-version-371039 kubelet[1070]: E0916 11:16:14.355205    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	W0916 11:16:38.840787  283294 out.go:270]   Sep 16 11:16:26 old-k8s-version-371039 kubelet[1070]: E0916 11:16:26.355705    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0916 11:16:38.840794  283294 out.go:270]   Sep 16 11:16:29 old-k8s-version-371039 kubelet[1070]: E0916 11:16:29.355289    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	I0916 11:16:38.840800  283294 out.go:358] Setting ErrFile to fd 2...
	I0916 11:16:38.840807  283294 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:16:40.628844  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:42.628910  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	b1971a0348b56       523cad1a4df73       About a minute ago   Exited              dashboard-metrics-scraper   5                   ccf51ecdf3b25       dashboard-metrics-scraper-7c96f5b85b-jdtz5
	a004cb2b98005       6e38f40d628db       3 minutes ago        Running             storage-provisioner         2                   5e51942090d7c       storage-provisioner
	3c7eb2cb40440       07655ddf2eebe       4 minutes ago        Running             kubernetes-dashboard        0                   bccd8eff00010       kubernetes-dashboard-695b96c756-tzbgn
	5d29018d4a620       c69fa2e9cbf5f       4 minutes ago        Running             coredns                     1                   dbb6cef09829e       coredns-7c65d6cfc9-dmv6t
	b8cfe956c6834       6e38f40d628db       4 minutes ago        Exited              storage-provisioner         1                   5e51942090d7c       storage-provisioner
	bce0bf0824d93       12968670680f4       4 minutes ago        Running             kindnet-cni                 1                   2e84cbe3dd149       kindnet-78kp5
	7993c3f14244f       60c005f310ff3       4 minutes ago        Running             kube-proxy                  1                   37092b7f94ae1       kube-proxy-bt6k2
	96fb65ed9d834       2e96e5913fc06       4 minutes ago        Running             etcd                        1                   ae6821997509e       etcd-embed-certs-679624
	cdad97219867d       6bab7719df100       4 minutes ago        Running             kube-apiserver              1                   798456ee3c66e       kube-apiserver-embed-certs-679624
	ac979bde4e227       9aa1fad941575       4 minutes ago        Running             kube-scheduler              1                   5c52118169f55       kube-scheduler-embed-certs-679624
	e263620e71d77       175ffd71cce3d       4 minutes ago        Running             kube-controller-manager     1                   4701b5a741060       kube-controller-manager-embed-certs-679624
	3dab298bfe5b5       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                     0                   c9b661400e384       coredns-7c65d6cfc9-dmv6t
	2dbb170a519e8       12968670680f4       5 minutes ago        Exited              kindnet-cni                 0                   06e595c1fc81f       kindnet-78kp5
	c182b9d7c07df       60c005f310ff3       5 minutes ago        Exited              kube-proxy                  0                   d47fcd0c3fa57       kube-proxy-bt6k2
	debbdc082cc9c       6bab7719df100       5 minutes ago        Exited              kube-apiserver              0                   9df038a9105dc       kube-apiserver-embed-certs-679624
	7637dc0ee3d4d       9aa1fad941575       5 minutes ago        Exited              kube-scheduler              0                   ba28ed2ba4c4a       kube-scheduler-embed-certs-679624
	98ba0135cf4f3       175ffd71cce3d       5 minutes ago        Exited              kube-controller-manager     0                   ab668cab99a4f       kube-controller-manager-embed-certs-679624
	e7db7be77ed78       2e96e5913fc06       5 minutes ago        Exited              etcd                        0                   c206875f93f94       etcd-embed-certs-679624
	
	
	==> containerd <==
	Sep 16 11:13:52 embed-certs-679624 containerd[594]: time="2024-09-16T11:13:52.284113079Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 16 11:13:52 embed-certs-679624 containerd[594]: time="2024-09-16T11:13:52.285707267Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 16 11:13:52 embed-certs-679624 containerd[594]: time="2024-09-16T11:13:52.285796514Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 16 11:14:05 embed-certs-679624 containerd[594]: time="2024-09-16T11:14:05.251175378Z" level=info msg="CreateContainer within sandbox \"ccf51ecdf3b25b86ad4f9b47bc401d0dbacbddb7baf3dcf79f36a8e97eb1c19f\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,}"
	Sep 16 11:14:05 embed-certs-679624 containerd[594]: time="2024-09-16T11:14:05.263899500Z" level=info msg="CreateContainer within sandbox \"ccf51ecdf3b25b86ad4f9b47bc401d0dbacbddb7baf3dcf79f36a8e97eb1c19f\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"2c81d3fc22eb803f567302ee83ead8e0763d32c2be9a3b104a30fc36a72e9418\""
	Sep 16 11:14:05 embed-certs-679624 containerd[594]: time="2024-09-16T11:14:05.264571916Z" level=info msg="StartContainer for \"2c81d3fc22eb803f567302ee83ead8e0763d32c2be9a3b104a30fc36a72e9418\""
	Sep 16 11:14:05 embed-certs-679624 containerd[594]: time="2024-09-16T11:14:05.309283071Z" level=info msg="StartContainer for \"2c81d3fc22eb803f567302ee83ead8e0763d32c2be9a3b104a30fc36a72e9418\" returns successfully"
	Sep 16 11:14:05 embed-certs-679624 containerd[594]: time="2024-09-16T11:14:05.342789041Z" level=info msg="shim disconnected" id=2c81d3fc22eb803f567302ee83ead8e0763d32c2be9a3b104a30fc36a72e9418 namespace=k8s.io
	Sep 16 11:14:05 embed-certs-679624 containerd[594]: time="2024-09-16T11:14:05.342861562Z" level=warning msg="cleaning up after shim disconnected" id=2c81d3fc22eb803f567302ee83ead8e0763d32c2be9a3b104a30fc36a72e9418 namespace=k8s.io
	Sep 16 11:14:05 embed-certs-679624 containerd[594]: time="2024-09-16T11:14:05.342875059Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 11:14:05 embed-certs-679624 containerd[594]: time="2024-09-16T11:14:05.887080385Z" level=info msg="RemoveContainer for \"5d1d5138fbe4f523e3306e84fc2b1c8c504711aa0258b2b32ac802c4a924e486\""
	Sep 16 11:14:05 embed-certs-679624 containerd[594]: time="2024-09-16T11:14:05.892872559Z" level=info msg="RemoveContainer for \"5d1d5138fbe4f523e3306e84fc2b1c8c504711aa0258b2b32ac802c4a924e486\" returns successfully"
	Sep 16 11:15:16 embed-certs-679624 containerd[594]: time="2024-09-16T11:15:16.249704798Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:15:16 embed-certs-679624 containerd[594]: time="2024-09-16T11:15:16.285078455Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Sep 16 11:15:16 embed-certs-679624 containerd[594]: time="2024-09-16T11:15:16.286242543Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 16 11:15:16 embed-certs-679624 containerd[594]: time="2024-09-16T11:15:16.286335515Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 16 11:15:34 embed-certs-679624 containerd[594]: time="2024-09-16T11:15:34.250963473Z" level=info msg="CreateContainer within sandbox \"ccf51ecdf3b25b86ad4f9b47bc401d0dbacbddb7baf3dcf79f36a8e97eb1c19f\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Sep 16 11:15:34 embed-certs-679624 containerd[594]: time="2024-09-16T11:15:34.267709400Z" level=info msg="CreateContainer within sandbox \"ccf51ecdf3b25b86ad4f9b47bc401d0dbacbddb7baf3dcf79f36a8e97eb1c19f\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"b1971a0348b56706de13cb8c5053f0ac67ec2f360b3015676684380fcacf5c2a\""
	Sep 16 11:15:34 embed-certs-679624 containerd[594]: time="2024-09-16T11:15:34.268502477Z" level=info msg="StartContainer for \"b1971a0348b56706de13cb8c5053f0ac67ec2f360b3015676684380fcacf5c2a\""
	Sep 16 11:15:34 embed-certs-679624 containerd[594]: time="2024-09-16T11:15:34.315030925Z" level=info msg="StartContainer for \"b1971a0348b56706de13cb8c5053f0ac67ec2f360b3015676684380fcacf5c2a\" returns successfully"
	Sep 16 11:15:34 embed-certs-679624 containerd[594]: time="2024-09-16T11:15:34.352781322Z" level=info msg="shim disconnected" id=b1971a0348b56706de13cb8c5053f0ac67ec2f360b3015676684380fcacf5c2a namespace=k8s.io
	Sep 16 11:15:34 embed-certs-679624 containerd[594]: time="2024-09-16T11:15:34.352852589Z" level=warning msg="cleaning up after shim disconnected" id=b1971a0348b56706de13cb8c5053f0ac67ec2f360b3015676684380fcacf5c2a namespace=k8s.io
	Sep 16 11:15:34 embed-certs-679624 containerd[594]: time="2024-09-16T11:15:34.352863014Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 11:15:35 embed-certs-679624 containerd[594]: time="2024-09-16T11:15:35.094887312Z" level=info msg="RemoveContainer for \"2c81d3fc22eb803f567302ee83ead8e0763d32c2be9a3b104a30fc36a72e9418\""
	Sep 16 11:15:35 embed-certs-679624 containerd[594]: time="2024-09-16T11:15:35.100503746Z" level=info msg="RemoveContainer for \"2c81d3fc22eb803f567302ee83ead8e0763d32c2be9a3b104a30fc36a72e9418\" returns successfully"
	
	
	==> coredns [3dab298bfe5b5f466c279d41236e5592bfcab2429f9c5b2ce3ac5bf7c01c183d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:55078 - 62834 "HINFO IN 5079472268666806265.2239314299196871410. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008456339s
	
	
	==> coredns [5d29018d4a6205f4a5647facca4407ea2acaa36c5a885457f47b7a31bdca8de0] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:39385 - 64169 "HINFO IN 651880180349265054.7339591395469713489. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010914698s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[377369149]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:12:28.127) (total time: 30000ms):
	Trace[377369149]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:12:58.128)
	Trace[377369149]: [30.000914334s] [30.000914334s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1137813045]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:12:28.127) (total time: 30001ms):
	Trace[1137813045]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:12:58.128)
	Trace[1137813045]: [30.001035772s] [30.001035772s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1668934216]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:12:28.128) (total time: 30000ms):
	Trace[1668934216]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:12:58.128)
	Trace[1668934216]: [30.000423151s] [30.000423151s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               embed-certs-679624
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-679624
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=embed-certs-679624
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_11_41_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:11:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-679624
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:16:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:12:56 +0000   Mon, 16 Sep 2024 11:11:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:12:56 +0000   Mon, 16 Sep 2024 11:11:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:12:56 +0000   Mon, 16 Sep 2024 11:11:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:12:56 +0000   Mon, 16 Sep 2024 11:11:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-679624
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 51d410484bdb4fb7b99b919264cd860d
	  System UUID:                cc7366e5-b963-44cb-99a5-daef6ab18709
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-dmv6t                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m5s
	  kube-system                 etcd-embed-certs-679624                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m9s
	  kube-system                 kindnet-78kp5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m5s
	  kube-system                 kube-apiserver-embed-certs-679624             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-controller-manager-embed-certs-679624    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 kube-proxy-bt6k2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kube-system                 kube-scheduler-embed-certs-679624             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m9s
	  kube-system                 metrics-server-6867b74b74-qgvl9               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m43s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m3s
	  kubernetes-dashboard        dashboard-metrics-scraper-7c96f5b85b-jdtz5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-tzbgn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m3s                   kube-proxy       
	  Normal   Starting                 4m22s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m16s (x8 over 5m16s)  kubelet          Node embed-certs-679624 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m16s (x7 over 5m16s)  kubelet          Node embed-certs-679624 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m16s (x7 over 5m16s)  kubelet          Node embed-certs-679624 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 5m10s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m10s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  5m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    5m9s                   kubelet          Node embed-certs-679624 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m9s                   kubelet          Node embed-certs-679624 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  5m9s                   kubelet          Node embed-certs-679624 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           5m6s                   node-controller  Node embed-certs-679624 event: Registered Node embed-certs-679624 in Controller
	  Normal   Starting                 4m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m29s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  4m29s (x8 over 4m29s)  kubelet          Node embed-certs-679624 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m29s (x7 over 4m29s)  kubelet          Node embed-certs-679624 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m29s (x7 over 4m29s)  kubelet          Node embed-certs-679624 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m21s                  node-controller  Node embed-certs-679624 event: Registered Node embed-certs-679624 in Controller
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +1.024015] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000007] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000005] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000001] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +2.015813] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000006] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +4.063624] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000004] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +8.191266] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000006] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	
	
	==> etcd [96fb65ed9d834089ed0425c6be8cb6a8e9fa0f9dd4787e4b0980d1581a756b54] <==
	{"level":"info","ts":"2024-09-16T11:12:22.827714Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:22.830199Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:12:22.830502Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:12:22.830537Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:12:22.830635Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-09-16T11:12:22.830646Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-09-16T11:12:23.928817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:23.928870Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:23.928919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:23.928940Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:23.928951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:23.928973Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:23.928986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2024-09-16T11:12:23.930692Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:23.930714Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:23.930693Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:embed-certs-679624 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:12:23.930953Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:23.931020Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:23.931898Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:23.932739Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2024-09-16T11:12:23.933439Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:23.934503Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:12:38.848452Z","caller":"traceutil/trace.go:171","msg":"trace[749882123] transaction","detail":"{read_only:false; response_revision:663; number_of_response:1; }","duration":"120.264103ms","start":"2024-09-16T11:12:38.727846Z","end":"2024-09-16T11:12:38.848110Z","steps":["trace[749882123] 'process raft request'  (duration: 120.051255ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:12:38.849547Z","caller":"traceutil/trace.go:171","msg":"trace[493553935] transaction","detail":"{read_only:false; response_revision:664; number_of_response:1; }","duration":"119.514412ms","start":"2024-09-16T11:12:38.729997Z","end":"2024-09-16T11:12:38.849512Z","steps":["trace[493553935] 'process raft request'  (duration: 118.471075ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-16T11:12:39.052294Z","caller":"traceutil/trace.go:171","msg":"trace[1386390585] transaction","detail":"{read_only:false; response_revision:667; number_of_response:1; }","duration":"146.327858ms","start":"2024-09-16T11:12:38.905943Z","end":"2024-09-16T11:12:39.052271Z","steps":["trace[1386390585] 'process raft request'  (duration: 89.540915ms)","trace[1386390585] 'compare'  (duration: 56.564622ms)"],"step_count":2}
	
	
	==> etcd [e7db7be77ed78b3a10c8d675a7803092991d0e4a8c3e81f93fe10d600f5fd3a0] <==
	{"level":"info","ts":"2024-09-16T11:11:35.660657Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:11:35.660927Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:11:35.660956Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:11:35.661023Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-09-16T11:11:35.661042Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2024-09-16T11:11:36.545011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:11:36.545063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:11:36.545117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2024-09-16T11:11:36.545137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.545150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.545165Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.545180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2024-09-16T11:11:36.546198Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.546663Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:11:36.546683Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:11:36.546665Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:embed-certs-679624 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:11:36.546933Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:11:36.546964Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:11:36.547066Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.547154Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.547183Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:11:36.548000Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:11:36.548092Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:11:36.548901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2024-09-16T11:11:36.549253Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:16:50 up 59 min,  0 users,  load average: 1.71, 2.45, 2.19
	Linux embed-certs-679624 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [2dbb170a519e89c32bdecca493bd2479c29f546b0a9ba488f7768c7e7cdd8cc6] <==
	I0916 11:11:47.021998       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:11:47.023989       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0916 11:11:47.024566       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:11:47.025534       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:11:47.025627       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:11:47.420585       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:11:47.421021       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:11:47.421117       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:11:47.627002       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:11:47.627034       1 metrics.go:61] Registering metrics
	I0916 11:11:47.627087       1 controller.go:374] Syncing nftables rules
	I0916 11:11:57.424285       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:11:57.424361       1 main.go:299] handling current node
	I0916 11:12:07.420876       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:12:07.420920       1 main.go:299] handling current node
	
	
	==> kindnet [bce0bf0824d9328eb0646e3110f5ca1b80318b72666f29555d3ed24d7aa08d1c] <==
	I0916 11:14:48.559926       1 main.go:299] handling current node
	I0916 11:14:58.563834       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:14:58.563878       1 main.go:299] handling current node
	I0916 11:15:08.559844       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:15:08.559878       1 main.go:299] handling current node
	I0916 11:15:18.563833       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:15:18.563874       1 main.go:299] handling current node
	I0916 11:15:28.554921       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:15:28.554961       1 main.go:299] handling current node
	I0916 11:15:38.555839       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:15:38.555884       1 main.go:299] handling current node
	I0916 11:15:48.559835       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:15:48.559879       1 main.go:299] handling current node
	I0916 11:15:58.554774       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:15:58.554814       1 main.go:299] handling current node
	I0916 11:16:08.563838       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:16:08.563871       1 main.go:299] handling current node
	I0916 11:16:18.560137       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:16:18.560177       1 main.go:299] handling current node
	I0916 11:16:28.554967       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:16:28.555006       1 main.go:299] handling current node
	I0916 11:16:38.555114       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:16:38.555178       1 main.go:299] handling current node
	I0916 11:16:48.554870       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0916 11:16:48.554930       1 main.go:299] handling current node
	
	
	==> kube-apiserver [cdad97219867df2ded19d219654472a41674b342b9961e00c7f6e4451fd8d72c] <==
	I0916 11:12:27.847424       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:12:28.036760       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.48.159"}
	I0916 11:12:28.120532       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.203.230"}
	I0916 11:12:29.000032       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:12:29.199887       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0916 11:12:29.500454       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	W0916 11:13:26.628557       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 11:13:26.628558       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:13:26.628639       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 11:13:26.628657       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 11:13:26.629762       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:13:26.629785       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 11:15:26.630123       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 11:15:26.630140       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:15:26.630189       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 11:15:26.630258       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 11:15:26.631335       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:15:26.631368       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [debbdc082cc9ca9a3fe53ba70655a9e18f2b6df022109c93f5db8c5e17b5c30a] <==
	E0916 11:12:07.209407       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 11:12:07.210538       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0916 11:12:07.270691       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.98.201.195"}
	W0916 11:12:07.320987       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:12:07.321061       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:12:07.327950       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:12:07.328018       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:12:08.207093       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:12:08.207163       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 11:12:08.207195       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 11:12:08.208211       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:12:08.208230       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [98ba0135cf4f3961fbf8ace4f9fb6ca27a2883eb5d31aa7115bb9c9828c71f32] <==
	I0916 11:11:44.931380       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:11:44.937542       1 shared_informer.go:320] Caches are synced for deployment
	I0916 11:11:44.943920       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0916 11:11:45.325629       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:11:45.408258       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:11:45.408287       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:11:45.828842       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="111.978923ms"
	I0916 11:11:45.842449       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="13.539417ms"
	I0916 11:11:45.842559       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="62.208µs"
	I0916 11:11:45.843676       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="53.216µs"
	I0916 11:11:46.851046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="15.841905ms"
	I0916 11:11:46.858766       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.657412ms"
	I0916 11:11:46.859483       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="165.208µs"
	I0916 11:11:47.957358       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.062µs"
	I0916 11:11:47.964349       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="64.093µs"
	I0916 11:11:47.965886       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.029µs"
	I0916 11:11:51.248649       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-679624"
	I0916 11:12:00.965845       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="117.386µs"
	I0916 11:12:00.983957       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.090341ms"
	I0916 11:12:00.984089       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="84.88µs"
	I0916 11:12:07.235725       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="15.628399ms"
	I0916 11:12:07.245675       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="9.846104ms"
	I0916 11:12:07.245772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="46.356µs"
	I0916 11:12:07.253072       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="49.079µs"
	I0916 11:12:07.979844       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="134.858µs"
	
	
	==> kube-controller-manager [e263620e71d778a5dfa8791096a3df362173cc1ea97067ed4faf292a6c5581ba] <==
	I0916 11:13:23.258977       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="69.965µs"
	I0916 11:13:26.908902       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="56.716µs"
	E0916 11:13:29.113859       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:13:29.537847       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:13:38.260495       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="89.15µs"
	E0916 11:13:59.119563       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:13:59.546077       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:14:05.897943       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="71.218µs"
	I0916 11:14:06.908375       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="78.363µs"
	I0916 11:14:07.260437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="69.705µs"
	I0916 11:14:20.271146       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="75.829µs"
	E0916 11:14:29.126428       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:14:29.553695       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0916 11:14:59.132239       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:14:59.561828       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:15:27.259279       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="95.802µs"
	E0916 11:15:29.137670       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:15:29.569471       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:15:35.105995       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="73.107µs"
	I0916 11:15:36.909025       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="68.843µs"
	I0916 11:15:42.258698       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="74.418µs"
	E0916 11:15:59.142846       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:15:59.577214       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0916 11:16:29.148177       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:16:29.583713       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7993c3f14244f58c4d9163c2b8d1fa3eb2846ccde92cdbb33932dc072a1106c3] <==
	I0916 11:12:27.845400       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:12:28.154324       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.85.2"]
	E0916 11:12:28.154387       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:12:28.175859       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:12:28.175933       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:12:28.177863       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:12:28.178338       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:12:28.178370       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:12:28.179677       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:12:28.180080       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:12:28.179711       1 config.go:328] "Starting node config controller"
	I0916 11:12:28.180119       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:12:28.180033       1 config.go:199] "Starting service config controller"
	I0916 11:12:28.180131       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:12:28.283450       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:12:28.283489       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:12:28.283503       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [c182b9d7c07dff5e07ae6d7a2a6ee44903b7d16b719cbe713e148ddee7a32eae] <==
	I0916 11:11:46.629316       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:11:46.830532       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.85.2"]
	E0916 11:11:46.830628       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:11:46.926994       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:11:46.927247       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:11:46.930151       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:11:46.930796       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:11:46.930829       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:11:46.932160       1 config.go:199] "Starting service config controller"
	I0916 11:11:46.932195       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:11:46.932254       1 config.go:328] "Starting node config controller"
	I0916 11:11:46.932264       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:11:46.932283       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:11:46.932300       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:11:47.033501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:11:47.033621       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:11:47.033942       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7637dc0ee3d4d0dad8e111abe371f5489a1304abc063851969b1c262fb4b2a10] <==
	W0916 11:11:38.120528       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:11:38.120569       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:38.120569       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0916 11:11:38.120569       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:11:38.120610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	E0916 11:11:38.120610       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:38.120674       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:11:38.120697       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:38.918573       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:38.918616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.040886       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:11:39.040945       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.113732       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:11:39.113779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.119266       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:39.119303       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.126330       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:11:39.126368       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.133675       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:11:39.133725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.158407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:11:39.158460       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:11:39.324525       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:11:39.324580       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:11:41.243501       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [ac979bde4e227043cfb8483a4d54a734373a62012be1ef212239ef8b959912c1] <==
	I0916 11:12:23.710179       1 serving.go:386] Generated self-signed cert in-memory
	I0916 11:12:25.630022       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 11:12:25.630059       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:12:25.637793       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0916 11:12:25.637858       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0916 11:12:25.637952       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 11:12:25.637972       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:12:25.638013       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0916 11:12:25.638028       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0916 11:12:25.638040       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 11:12:25.638143       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 11:12:25.738897       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0916 11:12:25.738941       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0916 11:12:25.738961       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:15:22 embed-certs-679624 kubelet[726]: I0916 11:15:22.248336     726 scope.go:117] "RemoveContainer" containerID="2c81d3fc22eb803f567302ee83ead8e0763d32c2be9a3b104a30fc36a72e9418"
	Sep 16 11:15:22 embed-certs-679624 kubelet[726]: E0916 11:15:22.248559     726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-jdtz5_kubernetes-dashboard(3727674c-29aa-4c09-9660-0a4f50cea168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-jdtz5" podUID="3727674c-29aa-4c09-9660-0a4f50cea168"
	Sep 16 11:15:27 embed-certs-679624 kubelet[726]: E0916 11:15:27.249019     726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qgvl9" podUID="b0d684f3-ff91-4996-8d9d-23936b12c814"
	Sep 16 11:15:34 embed-certs-679624 kubelet[726]: I0916 11:15:34.248331     726 scope.go:117] "RemoveContainer" containerID="2c81d3fc22eb803f567302ee83ead8e0763d32c2be9a3b104a30fc36a72e9418"
	Sep 16 11:15:35 embed-certs-679624 kubelet[726]: I0916 11:15:35.093705     726 scope.go:117] "RemoveContainer" containerID="2c81d3fc22eb803f567302ee83ead8e0763d32c2be9a3b104a30fc36a72e9418"
	Sep 16 11:15:35 embed-certs-679624 kubelet[726]: I0916 11:15:35.094092     726 scope.go:117] "RemoveContainer" containerID="b1971a0348b56706de13cb8c5053f0ac67ec2f360b3015676684380fcacf5c2a"
	Sep 16 11:15:35 embed-certs-679624 kubelet[726]: E0916 11:15:35.094292     726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-jdtz5_kubernetes-dashboard(3727674c-29aa-4c09-9660-0a4f50cea168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-jdtz5" podUID="3727674c-29aa-4c09-9660-0a4f50cea168"
	Sep 16 11:15:36 embed-certs-679624 kubelet[726]: I0916 11:15:36.898733     726 scope.go:117] "RemoveContainer" containerID="b1971a0348b56706de13cb8c5053f0ac67ec2f360b3015676684380fcacf5c2a"
	Sep 16 11:15:36 embed-certs-679624 kubelet[726]: E0916 11:15:36.898922     726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-jdtz5_kubernetes-dashboard(3727674c-29aa-4c09-9660-0a4f50cea168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-jdtz5" podUID="3727674c-29aa-4c09-9660-0a4f50cea168"
	Sep 16 11:15:42 embed-certs-679624 kubelet[726]: E0916 11:15:42.249430     726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qgvl9" podUID="b0d684f3-ff91-4996-8d9d-23936b12c814"
	Sep 16 11:15:48 embed-certs-679624 kubelet[726]: I0916 11:15:48.248941     726 scope.go:117] "RemoveContainer" containerID="b1971a0348b56706de13cb8c5053f0ac67ec2f360b3015676684380fcacf5c2a"
	Sep 16 11:15:48 embed-certs-679624 kubelet[726]: E0916 11:15:48.249158     726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-jdtz5_kubernetes-dashboard(3727674c-29aa-4c09-9660-0a4f50cea168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-jdtz5" podUID="3727674c-29aa-4c09-9660-0a4f50cea168"
	Sep 16 11:15:55 embed-certs-679624 kubelet[726]: E0916 11:15:55.249734     726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qgvl9" podUID="b0d684f3-ff91-4996-8d9d-23936b12c814"
	Sep 16 11:15:59 embed-certs-679624 kubelet[726]: I0916 11:15:59.248893     726 scope.go:117] "RemoveContainer" containerID="b1971a0348b56706de13cb8c5053f0ac67ec2f360b3015676684380fcacf5c2a"
	Sep 16 11:15:59 embed-certs-679624 kubelet[726]: E0916 11:15:59.249095     726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-jdtz5_kubernetes-dashboard(3727674c-29aa-4c09-9660-0a4f50cea168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-jdtz5" podUID="3727674c-29aa-4c09-9660-0a4f50cea168"
	Sep 16 11:16:07 embed-certs-679624 kubelet[726]: E0916 11:16:07.249461     726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qgvl9" podUID="b0d684f3-ff91-4996-8d9d-23936b12c814"
	Sep 16 11:16:12 embed-certs-679624 kubelet[726]: I0916 11:16:12.248344     726 scope.go:117] "RemoveContainer" containerID="b1971a0348b56706de13cb8c5053f0ac67ec2f360b3015676684380fcacf5c2a"
	Sep 16 11:16:12 embed-certs-679624 kubelet[726]: E0916 11:16:12.248544     726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-jdtz5_kubernetes-dashboard(3727674c-29aa-4c09-9660-0a4f50cea168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-jdtz5" podUID="3727674c-29aa-4c09-9660-0a4f50cea168"
	Sep 16 11:16:18 embed-certs-679624 kubelet[726]: E0916 11:16:18.249600     726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qgvl9" podUID="b0d684f3-ff91-4996-8d9d-23936b12c814"
	Sep 16 11:16:26 embed-certs-679624 kubelet[726]: I0916 11:16:26.247918     726 scope.go:117] "RemoveContainer" containerID="b1971a0348b56706de13cb8c5053f0ac67ec2f360b3015676684380fcacf5c2a"
	Sep 16 11:16:26 embed-certs-679624 kubelet[726]: E0916 11:16:26.248114     726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-jdtz5_kubernetes-dashboard(3727674c-29aa-4c09-9660-0a4f50cea168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-jdtz5" podUID="3727674c-29aa-4c09-9660-0a4f50cea168"
	Sep 16 11:16:31 embed-certs-679624 kubelet[726]: E0916 11:16:31.249732     726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qgvl9" podUID="b0d684f3-ff91-4996-8d9d-23936b12c814"
	Sep 16 11:16:40 embed-certs-679624 kubelet[726]: I0916 11:16:40.248828     726 scope.go:117] "RemoveContainer" containerID="b1971a0348b56706de13cb8c5053f0ac67ec2f360b3015676684380fcacf5c2a"
	Sep 16 11:16:40 embed-certs-679624 kubelet[726]: E0916 11:16:40.249022     726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-jdtz5_kubernetes-dashboard(3727674c-29aa-4c09-9660-0a4f50cea168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-jdtz5" podUID="3727674c-29aa-4c09-9660-0a4f50cea168"
	Sep 16 11:16:43 embed-certs-679624 kubelet[726]: E0916 11:16:43.249876     726 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-qgvl9" podUID="b0d684f3-ff91-4996-8d9d-23936b12c814"
	
	
	==> kubernetes-dashboard [3c7eb2cb404402dd137d29ee9ac327b9c01414947d1986518ce39867d5a0d233] <==
	2024/09/16 11:12:34 Using namespace: kubernetes-dashboard
	2024/09/16 11:12:34 Using in-cluster config to connect to apiserver
	2024/09/16 11:12:34 Using secret token for csrf signing
	2024/09/16 11:12:34 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 11:12:35 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 11:12:35 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 11:12:35 Generating JWE encryption key
	2024/09/16 11:12:35 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 11:12:35 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 11:12:35 Initializing JWE encryption key from synchronized object
	2024/09/16 11:12:35 Creating in-cluster Sidecar client
	2024/09/16 11:12:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:12:35 Serving insecurely on HTTP port: 9090
	2024/09/16 11:13:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:13:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:14:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:14:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:15:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:15:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:16:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:16:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:12:34 Starting overwatch
	
	
	==> storage-provisioner [a004cb2b980056e835fc7f24bc7be1594c5efffc763e0e1af45bb0862aa43203] <==
	I0916 11:13:10.314607       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:13:10.321934       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:13:10.321973       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:13:27.716835       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:13:27.717009       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-679624_e0be23f4-e2e6-451b-bfa1-04a265c80961!
	I0916 11:13:27.716970       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af47b140-7661-4805-8791-5af1e81aebf7", APIVersion:"v1", ResourceVersion:"738", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-679624_e0be23f4-e2e6-451b-bfa1-04a265c80961 became leader
	I0916 11:13:27.817248       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-679624_e0be23f4-e2e6-451b-bfa1-04a265c80961!
	
	
	==> storage-provisioner [b8cfe956c68340395c79fe3267d07b586671c86715a48a65e559747f45140168] <==
	I0916 11:12:28.033605       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 11:12:58.039242       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
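Note on the kubelet entries in the log above: dashboard-metrics-scraper is in CrashLoopBackOff, and the quoted delay doubles from "back-off 1m20s" to "back-off 2m40s" between restarts, while metrics-server sits in ImagePullBackOff because the addon was enabled against the unreachable registry "fake.domain" (see the --registries=MetricsServer=fake.domain entry in the audit table further down). A minimal Go sketch of that doubling restart back-off, assuming kubelet's default 10s base and 5m cap (the defaults are an assumption, not stated in this log):

package main

import (
	"fmt"
	"time"
)

// Sketch of a crash-restart back-off: start at a base delay and double on
// every restart until a cap is reached. With a 10s base, the 4th and 5th
// steps are the 1m20s and 2m40s values visible in the kubelet log above.
func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}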
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-679624 -n embed-certs-679624
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-679624 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context embed-certs-679624 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (506.542µs)
helpers_test.go:263: kubectl --context embed-certs-679624 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (7.39s)
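Note that the failure is in the test harness, not the cluster: every kubectl invocation exits with "fork/exec /usr/local/bin/kubectl: exec format error", which means the kernel refused to execute the binary at all, typically because it targets a different CPU architecture or is truncated. A minimal pre-flight check, sketched with Go's standard library (the path comes from the failing command; the GOARCH-to-ELF mapping is deliberately limited to the two architectures relevant here):

package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
)

// Open the binary as an ELF file and compare its target machine against the
// host architecture; a mismatch is exactly what produces "exec format error".
func main() {
	const path = "/usr/local/bin/kubectl"
	f, err := elf.Open(path)
	if err != nil {
		fmt.Fprintf(os.Stderr, "%s is not a readable ELF binary: %v\n", path, err)
		os.Exit(1)
	}
	defer f.Close()

	want := map[string]elf.Machine{"amd64": elf.EM_X86_64, "arm64": elf.EM_AARCH64}[runtime.GOARCH]
	if f.Machine != want {
		fmt.Printf("%s targets %v, but the host is %s\n", path, f.Machine, runtime.GOARCH)
		os.Exit(1)
	}
	fmt.Printf("%s matches host architecture %s\n", path, runtime.GOARCH)
}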

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (7.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9sr9v" [261ef398-46a5-41c5-bf4d-763c5bc263c3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004475625s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-371039 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-371039 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: fork/exec /usr/local/bin/kubectl: exec format error (560.512µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-371039 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
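For reference, the assertion at start_stop_delete_test.go:297 only needs the deployment's container images, which the harness normally scrapes out of "kubectl describe". A hedged client-go sketch of the same check (kubeconfig handling is simplified to the default file, and the --context selection done by the real command is omitted):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Fetch the dashboard-metrics-scraper deployment and print its container
// images; the test expects one to contain registry.k8s.io/echoserver:1.4.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	dep, err := cs.AppsV1().Deployments("kubernetes-dashboard").
		Get(context.Background(), "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range dep.Spec.Template.Spec.Containers {
		fmt.Println(c.Name, c.Image)
	}
}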
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-371039
helpers_test.go:235: (dbg) docker inspect old-k8s-version-371039:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23",
	        "Created": "2024-09-16T11:08:26.808717426Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283619,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:10:47.182625379Z",
	            "FinishedAt": "2024-09-16T11:10:46.3068422Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/hostname",
	        "HostsPath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/hosts",
	        "LogPath": "/var/lib/docker/containers/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23/9e01fb8ba8f9390279ece7fa8549e0985cb4ee5a2abca405a94db6651b123b23-json.log",
	        "Name": "/old-k8s-version-371039",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-371039:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-371039",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a4c36d1f47b4f755139012e0a43ea6f04287ebb716fa5c832223a9a5a773adaf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-371039",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-371039/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-371039",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-371039",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-371039",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edb89f5d0f1b14778bc6503c7122826ccde192142507f982d72042ac23f8d31f",
	            "SandboxKey": "/var/run/docker/netns/edb89f5d0f1b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-371039": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:67:02",
	                    "DriverOpts": null,
	                    "NetworkID": "617bc0338b3b0f6ed38b0b21b091e38e1d6c95398d3e053128c978435134833f",
	                    "EndpointID": "5143d6e6c759ce273e967be48af66c83d163fb8c953ab200f9e3b0c27528cf34",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-371039",
	                        "9e01fb8ba8f9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
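The inspect output above also documents how minikube reaches this node: every container port (22, 2376, 5000, 8443, 32443) is published on 127.0.0.1 under a random high host port, e.g. 8443/tcp -> 127.0.0.1:33076 for the API server. A stdlib-only Go sketch that shells out to docker inspect and decodes just the port map (the struct mirrors a deliberately minimal subset of the inspect schema):

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// Minimal slice of docker inspect's JSON: only the published-port map.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-371039").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
		fmt.Fprintln(os.Stderr, "unexpected inspect output:", err)
		os.Exit(1)
	}
	for _, b := range cs[0].NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
	}
}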
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-371039 -n old-k8s-version-371039
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-371039 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-371039 logs -n 25: (1.339717083s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p default-k8s-diff-port-006978       | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:13 UTC | 16 Sep 24 11:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-006978 | jenkins | v1.34.0 | 16 Sep 24 11:13 UTC |                     |
	|         | default-k8s-diff-port-006978                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | no-preload-349453 image list                           | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	| delete  | -p no-preload-349453                                   | no-preload-349453            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	| start   | -p newest-cni-802652 --memory=2200 --alsologtostderr   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-802652             | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-802652                                   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-802652                  | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-802652 --memory=2200 --alsologtostderr   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:15 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd        |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | newest-cni-802652 image list                           | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-802652                                   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-802652                                   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-802652                                   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	| delete  | -p newest-cni-802652                                   | newest-cni-802652            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	| start   | -p auto-771611 --memory=3072                           | auto-771611                  | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:16 UTC |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | -p auto-771611 pgrep -a                                | auto-771611                  | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	|         | kubelet                                                |                              |         |         |                     |                     |
	| image   | embed-certs-679624 image list                          | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	| delete  | -p embed-certs-679624                                  | embed-certs-679624           | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	| start   | -p kindnet-771611                                      | kindnet-771611               | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:16:56
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:16:56.933321  332275 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:16:56.933419  332275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:16:56.933425  332275 out.go:358] Setting ErrFile to fd 2...
	I0916 11:16:56.933429  332275 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:16:56.933652  332275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:16:56.934241  332275 out.go:352] Setting JSON to false
	I0916 11:16:56.935579  332275 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3561,"bootTime":1726481856,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:16:56.935690  332275 start.go:139] virtualization: kvm guest
	I0916 11:16:56.937954  332275 out.go:177] * [kindnet-771611] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:16:56.939413  332275 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:16:56.939464  332275 notify.go:220] Checking for updates...
	I0916 11:16:56.942255  332275 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:16:56.943874  332275 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:16:56.945387  332275 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:16:56.946730  332275 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:16:56.948073  332275 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:16:56.949882  332275 config.go:182] Loaded profile config "auto-771611": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:16:56.949998  332275 config.go:182] Loaded profile config "default-k8s-diff-port-006978": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:16:56.950102  332275 config.go:182] Loaded profile config "old-k8s-version-371039": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:16:56.950214  332275 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:16:56.974378  332275 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:16:56.974499  332275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:16:57.023146  332275 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:84 SystemTime:2024-09-16 11:16:57.012761712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:16:57.023252  332275 docker.go:318] overlay module found
	I0916 11:16:57.024846  332275 out.go:177] * Using the docker driver based on user configuration
	I0916 11:16:57.026028  332275 start.go:297] selected driver: docker
	I0916 11:16:57.026046  332275 start.go:901] validating driver "docker" against <nil>
	I0916 11:16:57.026060  332275 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:16:57.026962  332275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:16:57.077056  332275 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:true NGoroutines:84 SystemTime:2024-09-16 11:16:57.067870315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:16:57.077199  332275 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:16:57.077430  332275 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:16:57.079092  332275 out.go:177] * Using Docker driver with root privileges
	I0916 11:16:57.080508  332275 cni.go:84] Creating CNI manager for "kindnet"
	I0916 11:16:57.080531  332275 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 11:16:57.080611  332275 start.go:340] cluster config:
	{Name:kindnet-771611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:16:57.082117  332275 out.go:177] * Starting "kindnet-771611" primary control-plane node in "kindnet-771611" cluster
	I0916 11:16:57.083542  332275 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:16:57.084962  332275 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:16:57.086118  332275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:16:57.086161  332275 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 11:16:57.086171  332275 cache.go:56] Caching tarball of preloaded images
	I0916 11:16:57.086260  332275 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 11:16:57.086257  332275 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:16:57.086274  332275 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 11:16:57.086401  332275 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/config.json ...
	I0916 11:16:57.086426  332275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/config.json: {Name:mk85d1c52f772c780df10ed18ec6ee82497f4665 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
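	The profile settings dumped throughout this log are persisted as plain JSON at the config.json path shown above, so a failed run can be inspected after the fact. A minimal sketch (jq is only for readability and is not part of the harness):
	    # Show the Kubernetes-level settings saved for the kindnet profile.
	    cat /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/config.json | jq '.KubernetesConfig'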
	W0916 11:16:57.107611  332275 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:16:57.107628  332275 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:16:57.107698  332275 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:16:57.107713  332275 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:16:57.107717  332275 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:16:57.107724  332275 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:16:57.107730  332275 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:16:57.166277  332275 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:16:57.166320  332275 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:16:57.166353  332275 start.go:360] acquireMachinesLock for kindnet-771611: {Name:mk5409d440397cb7d3d0472cf5d14b2bfbc751d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:16:57.166453  332275 start.go:364] duration metric: took 80.88µs to acquireMachinesLock for "kindnet-771611"
	I0916 11:16:57.166477  332275 start.go:93] Provisioning new machine with config: &{Name:kindnet-771611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:16:57.166565  332275 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:16:56.129710  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:58.629215  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:16:59.995885  283294 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0916 11:17:00.001438  283294 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0916 11:17:00.004070  283294 out.go:201] 
	W0916 11:17:00.005556  283294 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0916 11:17:00.005591  283294 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0916 11:17:00.005611  283294 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0916 11:17:00.005618  283294 out.go:270] * 
	W0916 11:17:00.006550  283294 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0916 11:17:00.008068  283294 out.go:201] 
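	The block above is a different profile (pid 283294, old-k8s-version-371039) aborting: /healthz answers 200 but the control plane never reports v1.20.0, so minikube exits with K8S_UNHEALTHY_CONTROL_PLANE. The recovery its own suggestion points at, sketched as commands (the start flags shown are the standard ones matching this job's configuration):
	    # Remove all profiles plus cached state, then recreate the profile from scratch.
	    minikube delete --all --purge
	    minikube start -p old-k8s-version-371039 --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0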
	I0916 11:16:57.169873  332275 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 11:16:57.170122  332275 start.go:159] libmachine.API.Create for "kindnet-771611" (driver="docker")
	I0916 11:16:57.170155  332275 client.go:168] LocalClient.Create starting
	I0916 11:16:57.170251  332275 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 11:16:57.170284  332275 main.go:141] libmachine: Decoding PEM data...
	I0916 11:16:57.170301  332275 main.go:141] libmachine: Parsing certificate...
	I0916 11:16:57.170350  332275 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 11:16:57.170369  332275 main.go:141] libmachine: Decoding PEM data...
	I0916 11:16:57.170379  332275 main.go:141] libmachine: Parsing certificate...
	I0916 11:16:57.170706  332275 cli_runner.go:164] Run: docker network inspect kindnet-771611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:16:57.188023  332275 cli_runner.go:211] docker network inspect kindnet-771611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:16:57.188099  332275 network_create.go:284] running [docker network inspect kindnet-771611] to gather additional debugging logs...
	I0916 11:16:57.188120  332275 cli_runner.go:164] Run: docker network inspect kindnet-771611
	W0916 11:16:57.204438  332275 cli_runner.go:211] docker network inspect kindnet-771611 returned with exit code 1
	I0916 11:16:57.204469  332275 network_create.go:287] error running [docker network inspect kindnet-771611]: docker network inspect kindnet-771611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-771611 not found
	I0916 11:16:57.204494  332275 network_create.go:289] output of [docker network inspect kindnet-771611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-771611 not found
	
	** /stderr **
	I0916 11:16:57.204608  332275 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:16:57.222063  332275 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c95c64bb41bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bd:76:ab:c4} reservation:<nil>}
	I0916 11:16:57.222986  332275 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fad43aa9929b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:3f:61:fd} reservation:<nil>}
	I0916 11:16:57.223963  332275 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-49585fce923a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ad:45:94:54} reservation:<nil>}
	I0916 11:16:57.224796  332275 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-77357235afce IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:c7:d5:e1:f1} reservation:<nil>}
	I0916 11:16:57.225785  332275 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bfd620}
	I0916 11:16:57.225811  332275 network_create.go:124] attempt to create docker network kindnet-771611 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0916 11:16:57.225885  332275 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-771611 kindnet-771611
	I0916 11:16:57.290318  332275 network_create.go:108] docker network kindnet-771611 192.168.85.0/24 created
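	Subnet selection above simply walks the private 192.168.x.0/24 ranges already held by other profiles (49, 58, 67, 76) and takes the first free one, 192.168.85.0/24. The result can be double-checked with the same docker CLI the test drives:
	    # Confirm the new bridge network carries the subnet/gateway from the log.
	    docker network inspect kindnet-771611 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'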
	I0916 11:16:57.290374  332275 kic.go:121] calculated static IP "192.168.85.2" for the "kindnet-771611" container
	I0916 11:16:57.290455  332275 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:16:57.310777  332275 cli_runner.go:164] Run: docker volume create kindnet-771611 --label name.minikube.sigs.k8s.io=kindnet-771611 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:16:57.329338  332275 oci.go:103] Successfully created a docker volume kindnet-771611
	I0916 11:16:57.329447  332275 cli_runner.go:164] Run: docker run --rm --name kindnet-771611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-771611 --entrypoint /usr/bin/test -v kindnet-771611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:16:57.871703  332275 oci.go:107] Successfully prepared a docker volume kindnet-771611
	I0916 11:16:57.871811  332275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:16:57.871835  332275 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:16:57.871919  332275 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-771611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 11:17:01.129362  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:03.630166  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:03.524276  332275 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-771611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (5.652303275s)
	I0916 11:17:03.524333  332275 kic.go:203] duration metric: took 5.652492576s to extract preloaded images to volume ...
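	The extraction step reuses the kicbase image as a throwaway tar runner: the preload tarball is bind-mounted read-only and unpacked straight into the named volume that later becomes the node's /var, so the host needs no lz4 tooling. The same pattern in isolation (TARBALL and VOLUME are placeholders):
	    # Unpack a .tar.lz4 into a docker volume using only the image's own tar binary.
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$TARBALL":/preloaded.tar:ro -v "$VOLUME":/extractDir \
	      gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644 \
	      -I lz4 -xf /preloaded.tar -C /extractDir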
	W0916 11:17:03.524503  332275 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:17:03.524622  332275 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:17:03.586676  332275 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-771611 --name kindnet-771611 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-771611 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-771611 --network kindnet-771611 --ip 192.168.85.2 --volume kindnet-771611:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:17:03.911339  332275 cli_runner.go:164] Run: docker container inspect kindnet-771611 --format={{.State.Running}}
	I0916 11:17:03.933086  332275 cli_runner.go:164] Run: docker container inspect kindnet-771611 --format={{.State.Status}}
	I0916 11:17:03.955856  332275 cli_runner.go:164] Run: docker exec kindnet-771611 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:17:04.005947  332275 oci.go:144] the created container "kindnet-771611" has a running status.
	I0916 11:17:04.005989  332275 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/kindnet-771611/id_rsa...
	I0916 11:17:04.229002  332275 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/kindnet-771611/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:17:04.257613  332275 cli_runner.go:164] Run: docker container inspect kindnet-771611 --format={{.State.Status}}
	I0916 11:17:04.279082  332275 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:17:04.279106  332275 kic_runner.go:114] Args: [docker exec --privileged kindnet-771611 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:17:04.336799  332275 cli_runner.go:164] Run: docker container inspect kindnet-771611 --format={{.State.Status}}
	I0916 11:17:04.359550  332275 machine.go:93] provisionDockerMachine start ...
	I0916 11:17:04.359660  332275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-771611
	I0916 11:17:04.378273  332275 main.go:141] libmachine: Using SSH client type: native
	I0916 11:17:04.378528  332275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0916 11:17:04.378546  332275 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:17:04.615113  332275 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-771611
	
	I0916 11:17:04.615145  332275 ubuntu.go:169] provisioning hostname "kindnet-771611"
	I0916 11:17:04.615204  332275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-771611
	I0916 11:17:04.635816  332275 main.go:141] libmachine: Using SSH client type: native
	I0916 11:17:04.636041  332275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0916 11:17:04.636058  332275 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-771611 && echo "kindnet-771611" | sudo tee /etc/hostname
	I0916 11:17:04.780108  332275 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-771611
	
	I0916 11:17:04.780190  332275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-771611
	I0916 11:17:04.799864  332275 main.go:141] libmachine: Using SSH client type: native
	I0916 11:17:04.800067  332275 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I0916 11:17:04.800093  332275 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-771611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-771611/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-771611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:17:04.936019  332275 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:17:04.936052  332275 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:17:04.936087  332275 ubuntu.go:177] setting up certificates
	I0916 11:17:04.936107  332275 provision.go:84] configureAuth start
	I0916 11:17:04.936161  332275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-771611
	I0916 11:17:04.954677  332275 provision.go:143] copyHostCerts
	I0916 11:17:04.954740  332275 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:17:04.954751  332275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:17:04.954821  332275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:17:04.954902  332275 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:17:04.954910  332275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:17:04.954936  332275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:17:04.954988  332275 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:17:04.954995  332275 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:17:04.955023  332275 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:17:04.955070  332275 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.kindnet-771611 san=[127.0.0.1 192.168.85.2 kindnet-771611 localhost minikube]
	I0916 11:17:05.044161  332275 provision.go:177] copyRemoteCerts
	I0916 11:17:05.044229  332275 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:17:05.044266  332275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-771611
	I0916 11:17:05.062757  332275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/kindnet-771611/id_rsa Username:docker}
	I0916 11:17:05.164798  332275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:17:05.188318  332275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0916 11:17:05.211204  332275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0916 11:17:05.234054  332275 provision.go:87] duration metric: took 297.931863ms to configureAuth
	I0916 11:17:05.234080  332275 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:17:05.234232  332275 config.go:182] Loaded profile config "kindnet-771611": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:17:05.234241  332275 machine.go:96] duration metric: took 874.667307ms to provisionDockerMachine
	I0916 11:17:05.234247  332275 client.go:171] duration metric: took 8.064087278s to LocalClient.Create
	I0916 11:17:05.234264  332275 start.go:167] duration metric: took 8.064144501s to libmachine.API.Create "kindnet-771611"
	I0916 11:17:05.234274  332275 start.go:293] postStartSetup for "kindnet-771611" (driver="docker")
	I0916 11:17:05.234285  332275 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:17:05.234326  332275 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:17:05.234361  332275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-771611
	I0916 11:17:05.254514  332275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/kindnet-771611/id_rsa Username:docker}
	I0916 11:17:05.357278  332275 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:17:05.361035  332275 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:17:05.361078  332275 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:17:05.361091  332275 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:17:05.361098  332275 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:17:05.361114  332275 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:17:05.361181  332275 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:17:05.361284  332275 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:17:05.361432  332275 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:17:05.370036  332275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:17:05.393506  332275 start.go:296] duration metric: took 159.216703ms for postStartSetup
	I0916 11:17:05.393831  332275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-771611
	I0916 11:17:05.411683  332275 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/config.json ...
	I0916 11:17:05.412032  332275 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:17:05.412078  332275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-771611
	I0916 11:17:05.429956  332275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/kindnet-771611/id_rsa Username:docker}
	I0916 11:17:05.520583  332275 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:17:05.525306  332275 start.go:128] duration metric: took 8.35872555s to createHost
	I0916 11:17:05.525337  332275 start.go:83] releasing machines lock for "kindnet-771611", held for 8.358872703s
	I0916 11:17:05.525417  332275 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-771611
	I0916 11:17:05.542973  332275 ssh_runner.go:195] Run: cat /version.json
	I0916 11:17:05.543018  332275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-771611
	I0916 11:17:05.543050  332275 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:17:05.543115  332275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-771611
	I0916 11:17:05.562116  332275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/kindnet-771611/id_rsa Username:docker}
	I0916 11:17:05.562127  332275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/kindnet-771611/id_rsa Username:docker}
	I0916 11:17:05.734116  332275 ssh_runner.go:195] Run: systemctl --version
	I0916 11:17:05.738430  332275 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:17:05.742545  332275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:17:05.766049  332275 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:17:05.766116  332275 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:17:05.793776  332275 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
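	The two find/-exec passes above make exactly two kinds of edit under /etc/cni/net.d: any loopback config gains a "name" field and has its cniVersion pinned to 1.0.0, and every bridge/podman config is renamed to *.mk_disabled so only the kindnet CNI installed later is active. The same edits written out plainly (file names taken from the log line above):
	    # 1) pin the loopback config to CNI spec 1.0.0
	    sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' /etc/cni/net.d/*loopback.conf*
	    # 2) park the competing bridge configs
	    sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
	    sudo mv /etc/cni/net.d/100-crio-bridge.conf /etc/cni/net.d/100-crio-bridge.conf.mk_disabled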
	I0916 11:17:05.793801  332275 start.go:495] detecting cgroup driver to use...
	I0916 11:17:05.793830  332275 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:17:05.793869  332275 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:17:05.805691  332275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:17:05.816402  332275 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:17:05.816461  332275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:17:05.830411  332275 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:17:05.844447  332275 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:17:05.926926  332275 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:17:06.013121  332275 docker.go:233] disabling docker service ...
	I0916 11:17:06.013202  332275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:17:06.032305  332275 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:17:06.044539  332275 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:17:06.134398  332275 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:17:06.210691  332275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:17:06.221110  332275 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:17:06.237466  332275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:17:06.246658  332275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:17:06.255860  332275 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:17:06.255926  332275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:17:06.265669  332275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:17:06.275398  332275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:17:06.284716  332275 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:17:06.293932  332275 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:17:06.302426  332275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:17:06.311871  332275 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:17:06.321625  332275 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 11:17:06.331111  332275 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:17:06.339025  332275 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:17:06.347076  332275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:17:06.423981  332275 ssh_runner.go:195] Run: sudo systemctl restart containerd
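	The sed chain above boils down to one crictl endpoint file and a handful of /etc/containerd/config.toml values (only the values the seds set are listed; their placement in the TOML tree is elided here):
	    runtime-endpoint: unix:///run/containerd/containerd.sock   (/etc/crictl.yaml)
	    sandbox_image = "registry.k8s.io/pause:3.10"
	    restrict_oom_score_adj = false
	    SystemdCgroup = false          (cgroupfs, matching the driver detected on the host)
	    conf_dir = "/etc/cni/net.d"
	    enable_unprivileged_ports = true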
	I0916 11:17:06.540701  332275 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:17:06.540769  332275 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:17:06.544472  332275 start.go:563] Will wait 60s for crictl version
	I0916 11:17:06.544528  332275 ssh_runner.go:195] Run: which crictl
	I0916 11:17:06.547599  332275 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:17:06.581515  332275 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:17:06.581583  332275 ssh_runner.go:195] Run: containerd --version
	I0916 11:17:06.603953  332275 ssh_runner.go:195] Run: containerd --version
	I0916 11:17:06.630079  332275 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:17:06.631488  332275 cli_runner.go:164] Run: docker network inspect kindnet-771611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:17:06.648474  332275 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0916 11:17:06.652283  332275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:17:06.663052  332275 kubeadm.go:883] updating cluster {Name:kindnet-771611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:17:06.663174  332275 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:17:06.663221  332275 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:17:06.695207  332275 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:17:06.695229  332275 containerd.go:534] Images already preloaded, skipping extraction
	I0916 11:17:06.695277  332275 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:17:06.728153  332275 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:17:06.728175  332275 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:17:06.728181  332275 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I0916 11:17:06.728263  332275 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-771611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:kindnet-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
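	The generated unit drop-in lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 318-byte scp a few lines below). Once the node is up it can be read back; a sketch:
	    # Show the kubelet unit plus the minikube drop-in inside the node.
	    minikube ssh -p kindnet-771611 "sudo systemctl cat kubelet"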
	I0916 11:17:06.728317  332275 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:17:06.762472  332275 cni.go:84] Creating CNI manager for "kindnet"
	I0916 11:17:06.762502  332275 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:17:06.762522  332275 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-771611 NodeName:kindnet-771611 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:17:06.762632  332275 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "kindnet-771611"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:17:06.762686  332275 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:17:06.771506  332275 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:17:06.771567  332275 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:17:06.780132  332275 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0916 11:17:06.797062  332275 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:17:06.814087  332275 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2168 bytes)
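	The YAML dumped above is what just landed at /var/tmp/minikube/kubeadm.yaml.new (2168 bytes). When debugging a bad run, the same file can be checked against kubeadm's schema; a sketch assuming a matching kubeadm v1.31.x binary on PATH:
	    # Validate the stacked kubeadm and component config documents.
	    kubeadm config validate --config kubeadm.yaml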
	I0916 11:17:06.831210  332275 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:17:06.834600  332275 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:17:06.845103  332275 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:17:06.922842  332275 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:17:06.937484  332275 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611 for IP: 192.168.85.2
	I0916 11:17:06.937510  332275 certs.go:194] generating shared ca certs ...
	I0916 11:17:06.937530  332275 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:17:06.937711  332275 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:17:06.937777  332275 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:17:06.937792  332275 certs.go:256] generating profile certs ...
	I0916 11:17:06.937866  332275 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.key
	I0916 11:17:06.937891  332275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt with IP's: []
	I0916 11:17:07.250407  332275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt ...
	I0916 11:17:07.250436  332275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: {Name:mk0ec132d938d12d558bacf6577bd29e07b34e08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:17:07.250610  332275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.key ...
	I0916 11:17:07.250628  332275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.key: {Name:mk7ffc2585bf51df5098477619cc5b73477f96a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:17:07.250706  332275 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/apiserver.key.ebe41fda
	I0916 11:17:07.250722  332275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/apiserver.crt.ebe41fda with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0916 11:17:07.387864  332275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/apiserver.crt.ebe41fda ...
	I0916 11:17:07.387895  332275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/apiserver.crt.ebe41fda: {Name:mk6bc269870534fc78db398202cbeb10afda0484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:17:07.388070  332275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/apiserver.key.ebe41fda ...
	I0916 11:17:07.388081  332275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/apiserver.key.ebe41fda: {Name:mk13a6ef01d5b02700a11caa7a7132b638f5a515 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:17:07.388151  332275 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/apiserver.crt.ebe41fda -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/apiserver.crt
	I0916 11:17:07.388221  332275 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/apiserver.key.ebe41fda -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/apiserver.key
	I0916 11:17:07.388273  332275 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/proxy-client.key
	I0916 11:17:07.388287  332275 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/proxy-client.crt with IP's: []
	I0916 11:17:07.556558  332275 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/proxy-client.crt ...
	I0916 11:17:07.556586  332275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/proxy-client.crt: {Name:mk4c9f8dceb274b90d4a837bc419612a723dec10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:17:07.556761  332275 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/proxy-client.key ...
	I0916 11:17:07.556771  332275 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/proxy-client.key: {Name:mk28a9b6670d05e68a2b3e2aef8b8808203b836e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
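	Three profile certs are minted here: the minikube-user client cert, the apiserver serving cert with SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], and the aggregator proxy-client cert. The SAN list is the usual thing to confirm when an apiserver TLS handshake fails; a sketch with openssl:
	    # Print the SANs baked into the apiserver cert generated above.
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/apiserver.crt | grep -A1 'Subject Alternative Name'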
	I0916 11:17:07.556934  332275 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:17:07.556970  332275 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:17:07.556979  332275 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:17:07.557010  332275 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:17:07.557033  332275 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:17:07.557056  332275 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:17:07.557094  332275 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:17:07.557710  332275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:17:07.581423  332275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:17:07.604126  332275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:17:07.627967  332275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:17:07.651172  332275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 11:17:07.674394  332275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0916 11:17:07.697631  332275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:17:07.720528  332275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0916 11:17:07.743601  332275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:17:07.766415  332275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:17:07.791267  332275 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:17:07.816279  332275 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:17:07.833641  332275 ssh_runner.go:195] Run: openssl version
	I0916 11:17:07.838947  332275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:17:07.848324  332275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:17:07.852059  332275 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:17:07.852131  332275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:17:07.858682  332275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:17:07.868152  332275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:17:07.880418  332275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:17:07.884016  332275 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:17:07.884072  332275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:17:07.890770  332275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
	I0916 11:17:07.899942  332275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:17:07.909046  332275 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:17:07.912234  332275 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:17:07.912293  332275 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:17:07.918599  332275 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
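
	The three "ln -fs" steps above follow OpenSSL's subject-hash convention: a CA certificate in /etc/ssl/certs is looked up by the name <subject-hash>.0, which is why the links are b5213941.0, 51391683.0 and 3ec20f2e.0. A minimal sketch of the same operation for one certificate, reusing the paths from this log:

	    # compute the subject-name hash OpenSSL uses when looking up CA certs
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    # expose the cert under /etc/ssl/certs/<hash>.0 so TLS verification can find it
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

	For minikubeCA.pem the hash command prints b5213941, matching the link created at 11:17:07.858682.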
	I0916 11:17:07.928174  332275 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:17:07.931438  332275 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:17:07.931501  332275 kubeadm.go:392] StartCluster: {Name:kindnet-771611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:17:07.931600  332275 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:17:07.931654  332275 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:17:07.966610  332275 cri.go:89] found id: ""
	I0916 11:17:07.966700  332275 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:17:07.975614  332275 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:17:07.984663  332275 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:17:07.984711  332275 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:17:07.993838  332275 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:17:07.993859  332275 kubeadm.go:157] found existing configuration files:
	
	I0916 11:17:07.993900  332275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:17:08.002537  332275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:17:08.002612  332275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:17:08.010988  332275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:17:08.019545  332275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:17:08.019599  332275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:17:08.027789  332275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:17:08.036161  332275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:17:08.036220  332275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:17:08.044432  332275 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:17:08.054314  332275 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:17:08.054378  332275 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
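
	The four grep/rm pairs above implement a single pattern: before running kubeadm init, any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted (here all four files are simply missing, so every grep exits non-zero and every rm is a no-op). A condensed sketch of that check, not minikube's actual code, which issues each command separately over SSH:

	    endpoint="https://control-plane.minikube.internal:8443"
	    for f in admin kubelet controller-manager scheduler; do
	      # keep the file only if it already points at the expected endpoint;
	      # grep also fails when the file is absent, making the rm harmless
	      sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done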
	I0916 11:17:08.063228  332275 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:17:08.101394  332275 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:17:08.101492  332275 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:17:08.118158  332275 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:17:08.118250  332275 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:17:08.118326  332275 kubeadm.go:310] OS: Linux
	I0916 11:17:08.118407  332275 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:17:08.118464  332275 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:17:08.118555  332275 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:17:08.118638  332275 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:17:08.118708  332275 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:17:08.118784  332275 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:17:08.118857  332275 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:17:08.118925  332275 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:17:08.119003  332275 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:17:08.177228  332275 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:17:08.177377  332275 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:17:08.177530  332275 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:17:08.182783  332275 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:17:06.129813  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:08.629598  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:08.185559  332275 out.go:235]   - Generating certificates and keys ...
	I0916 11:17:08.185670  332275 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:17:08.185788  332275 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:17:08.266427  332275 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:17:08.412235  332275 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:17:08.655547  332275 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:17:08.791206  332275 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:17:08.932431  332275 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:17:08.932580  332275 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kindnet-771611 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0916 11:17:09.029650  332275 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:17:09.029802  332275 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kindnet-771611 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0916 11:17:09.478097  332275 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:17:09.641027  332275 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:17:09.728325  332275 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:17:09.728510  332275 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:17:09.849638  332275 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:17:09.989957  332275 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:17:10.268702  332275 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:17:10.589487  332275 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:17:10.831816  332275 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:17:10.832377  332275 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:17:10.835955  332275 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:17:10.839184  332275 out.go:235]   - Booting up control plane ...
	I0916 11:17:10.839327  332275 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:17:10.839440  332275 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:17:10.839544  332275 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:17:10.848006  332275 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:17:10.853376  332275 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:17:10.853469  332275 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:17:10.942152  332275 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:17:10.942313  332275 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:17:11.443550  332275 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.502345ms
	I0916 11:17:11.443656  332275 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:17:11.129370  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:13.130053  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	069b3bead1bc3       523cad1a4df73       21 seconds ago      Exited              dashboard-metrics-scraper   6                   7270699f18202       dashboard-metrics-scraper-8d5bb5db8-7m6td
	4b7e57072db41       6e38f40d628db       5 minutes ago       Running             storage-provisioner         1                   93b8c8cd7b4b6       storage-provisioner
	b8894a1f49c45       07655ddf2eebe       5 minutes ago       Running             kubernetes-dashboard        0                   bd5ff588c3d01       kubernetes-dashboard-cd95d586-9sr9v
	e812c7a897638       12968670680f4       6 minutes ago       Running             kindnet-cni                 0                   f0a3b63ad532f       kindnet-txszz
	7e01e437eafa1       bfe3a36ebd252       6 minutes ago       Running             coredns                     0                   88f52d4838bfd       coredns-74ff55c5b-78djj
	b6e65b347883f       6e38f40d628db       6 minutes ago       Exited              storage-provisioner         0                   93b8c8cd7b4b6       storage-provisioner
	1145c23e87dee       10cc881966cfd       6 minutes ago       Running             kube-proxy                  0                   d0c6dbe1595c1       kube-proxy-w2kp4
	8dedfda17aef6       b9fa1895dcaa6       6 minutes ago       Running             kube-controller-manager     0                   b225af49e9834       kube-controller-manager-old-k8s-version-371039
	d4e91e6acd99e       3138b6e3d4712       6 minutes ago       Running             kube-scheduler              0                   24563282539af       kube-scheduler-old-k8s-version-371039
	b8b7a29520083       ca9843d3b5454       6 minutes ago       Running             kube-apiserver              0                   5e272dc6257d1       kube-apiserver-old-k8s-version-371039
	6f10dd2ab1448       0369cf4303ffd       6 minutes ago       Running             etcd                        0                   830e4d974c2e1       etcd-old-k8s-version-371039
	
	
	==> containerd <==
	Sep 16 11:14:07 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:07.370708306Z" level=info msg="CreateContainer within sandbox \"7270699f18202a68b3cbfaeec615880970ca87013f8548b8f1c66ace9d6d464d\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571\""
	Sep 16 11:14:07 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:07.371345476Z" level=info msg="StartContainer for \"35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571\""
	Sep 16 11:14:07 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:07.435930874Z" level=info msg="StartContainer for \"35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571\" returns successfully"
	Sep 16 11:14:07 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:07.470709851Z" level=info msg="shim disconnected" id=35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571 namespace=k8s.io
	Sep 16 11:14:07 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:07.470789162Z" level=warning msg="cleaning up after shim disconnected" id=35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571 namespace=k8s.io
	Sep 16 11:14:07 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:07.470805125Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 11:14:08 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:08.005210445Z" level=info msg="RemoveContainer for \"501fec326410d790b90fb4a561dbbab9cb5e9dcc6ee52f2f601a8885550b154f\""
	Sep 16 11:14:08 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:08.010435003Z" level=info msg="RemoveContainer for \"501fec326410d790b90fb4a561dbbab9cb5e9dcc6ee52f2f601a8885550b154f\" returns successfully"
	Sep 16 11:14:14 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:14.355942813Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:14:14 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:14.386362237Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" host=fake.domain
	Sep 16 11:14:14 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:14.387842439Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 16 11:14:14 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:14:14.387881509Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 16 11:16:54 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:54.357509929Z" level=info msg="CreateContainer within sandbox \"7270699f18202a68b3cbfaeec615880970ca87013f8548b8f1c66ace9d6d464d\" for container name:\"dashboard-metrics-scraper\" attempt:6"
	Sep 16 11:16:54 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:54.368387979Z" level=info msg="CreateContainer within sandbox \"7270699f18202a68b3cbfaeec615880970ca87013f8548b8f1c66ace9d6d464d\" for name:\"dashboard-metrics-scraper\" attempt:6 returns container id \"069b3bead1bc30b9d5eb89b2d9db4d250c2257e24b86e1b134c8dfb0382854f9\""
	Sep 16 11:16:54 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:54.368930848Z" level=info msg="StartContainer for \"069b3bead1bc30b9d5eb89b2d9db4d250c2257e24b86e1b134c8dfb0382854f9\""
	Sep 16 11:16:54 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:54.432707800Z" level=info msg="StartContainer for \"069b3bead1bc30b9d5eb89b2d9db4d250c2257e24b86e1b134c8dfb0382854f9\" returns successfully"
	Sep 16 11:16:54 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:54.468458251Z" level=info msg="shim disconnected" id=069b3bead1bc30b9d5eb89b2d9db4d250c2257e24b86e1b134c8dfb0382854f9 namespace=k8s.io
	Sep 16 11:16:54 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:54.468521639Z" level=warning msg="cleaning up after shim disconnected" id=069b3bead1bc30b9d5eb89b2d9db4d250c2257e24b86e1b134c8dfb0382854f9 namespace=k8s.io
	Sep 16 11:16:54 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:54.468534100Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 11:16:55 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:55.338587831Z" level=info msg="RemoveContainer for \"35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571\""
	Sep 16 11:16:55 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:16:55.342908393Z" level=info msg="RemoveContainer for \"35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571\" returns successfully"
	Sep 16 11:17:06 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:17:06.356105103Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:17:06 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:17:06.377961024Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host" host=fake.domain
	Sep 16 11:17:06 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:17:06.379632897Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 16 11:17:06 old-k8s-version-371039 containerd[690]: time="2024-09-16T11:17:06.379717781Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
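
	These PullImage failures are the expected behavior in this suite: the metrics-server pod's image is pinned to fake.domain, a host that never resolves, so the pod is meant to sit in ImagePullBackOff. A hypothetical manual replay of the failing step from inside the node:

	    # the registry host never resolves against the node's DNS (192.168.103.1 above)
	    nslookup fake.domain 192.168.103.1
	    # pulling through the CRI fails with the same "no such host" error
	    sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4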
	
	
	==> coredns [7e01e437eafa1ace1f5ee4f4bfd2759ea78df9316412a17e9a0e8ae750c20523] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:49206 - 19492 "HINFO IN 2568215532487827892.8058846988098566839. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014231723s
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 77006bb880cc2529c537c33b8dc6eabc
	CoreDNS-1.7.0
	linux/amd64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:35595 - 5399 "HINFO IN 2418305322430051184.4287842096554552965. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01004514s
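
	The HINFO queries for long random numeric names are CoreDNS's loop plugin probing itself: it resolves a random label through the configured upstream, and an NXDOMAIN answer, as seen twice here, means no forwarding loop was detected. A hypothetical replay, assuming the default cluster DNS service IP of 10.96.0.10:

	    # any random label works; only the shape of the answer matters
	    dig @10.96.0.10 2568215532487827892.8058846988098566839. HINFO +short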
	
	
	==> describe nodes <==
	Name:               old-k8s-version-371039
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-371039
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=old-k8s-version-371039
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_08_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:08:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-371039
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:17:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:16:39 +0000   Mon, 16 Sep 2024 11:08:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:16:39 +0000   Mon, 16 Sep 2024 11:08:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:16:39 +0000   Mon, 16 Sep 2024 11:08:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:16:39 +0000   Mon, 16 Sep 2024 11:09:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-371039
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 1ae9c883339d4b4e909ea43ab97b9195
	  System UUID:                5a808ec9-2d43-4212-9e81-7580afba2fbc
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-74ff55c5b-78djj                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m4s
	  kube-system                 etcd-old-k8s-version-371039                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m14s
	  kube-system                 kindnet-txszz                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m4s
	  kube-system                 kube-apiserver-old-k8s-version-371039             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-controller-manager-old-k8s-version-371039    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-proxy-w2kp4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	  kube-system                 kube-scheduler-old-k8s-version-371039             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 metrics-server-9975d5f86-4f2jl                    100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         6m37s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-7m6td         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-9sr9v               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 8m30s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m30s (x5 over 8m30s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m30s (x4 over 8m30s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m30s (x3 over 8m30s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m30s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 8m15s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m15s                  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m15s                  kubelet     Node old-k8s-version-371039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m15s                  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m14s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m5s                   kubelet     Node old-k8s-version-371039 status is now: NodeReady
	  Normal  Starting                 8m3s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m13s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m13s (x8 over 6m13s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m13s (x8 over 6m13s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m13s (x7 over 6m13s)  kubelet     Node old-k8s-version-371039 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m13s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 6m6s                   kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +1.024015] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000007] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000005] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000001] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +2.015813] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000006] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +4.063624] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000004] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +8.191266] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000006] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
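
	These repeated "martian source" entries are the kernel flagging packets that arrive on the docker bridge with a source address (10.96.0.1, the first address of the service CIDR) it does not expect on that interface. Whether such packets are logged at all is governed by a sysctl; a hypothetical check on the host:

	    # when set to 1, the kernel logs martian packets like the entries above
	    sysctl net.ipv4.conf.all.log_martians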
	
	
	==> etcd [6f10dd2ab1448676f85f2e14b088b767bcca2ef3807aaa62f0370cbf81e594dc] <==
	2024-09-16 11:13:12.151941 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:13:22.152319 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:13:32.151944 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:13:42.151914 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:13:52.151984 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:14:02.151938 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:14:12.151969 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:14:22.151968 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:14:32.151948 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:14:42.152139 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:14:52.151958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:15:02.152175 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:15:12.152124 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:15:22.152313 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:15:32.152521 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:15:42.152307 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:15:52.152157 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:16:02.151916 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:16:12.151936 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:16:22.151998 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:16:32.151834 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:16:42.152195 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:16:52.152029 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:17:02.151947 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-16 11:17:12.152333 I | etcdserver/api/etcdhttp: /health OK (status code 200)
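
	The /health lines show etcd's HTTP health endpoint being polled every ten seconds, consistent with a liveness probe. A sketch of the same probe by hand, assuming the client listener on 2379 and minikube's certificateDir layout (/var/lib/minikube/certs) seen earlier in this log:

	    # etcd serves client traffic over mutual TLS, so present the healthcheck client cert
	    sudo curl -s --cacert /var/lib/minikube/certs/etcd/ca.crt \
	      --cert /var/lib/minikube/certs/etcd/healthcheck-client.crt \
	      --key /var/lib/minikube/certs/etcd/healthcheck-client.key \
	      https://127.0.0.1:2379/health
	    # a healthy member answers {"health":"true"}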
	
	
	==> kernel <==
	 11:17:16 up 59 min,  0 users,  load average: 1.51, 2.35, 2.16
	Linux old-k8s-version-371039 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [e812c7a8976381cd9cfc69cfc3ec18d19286650ecfda38b6e08f3545c8c27c65] <==
	I0916 11:15:13.841123       1 main.go:299] handling current node
	I0916 11:15:23.840912       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:15:23.840971       1 main.go:299] handling current node
	I0916 11:15:33.847873       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:15:33.847922       1 main.go:299] handling current node
	I0916 11:15:43.848168       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:15:43.848200       1 main.go:299] handling current node
	I0916 11:15:53.849561       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:15:53.849604       1 main.go:299] handling current node
	I0916 11:16:03.849195       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:16:03.849234       1 main.go:299] handling current node
	I0916 11:16:13.841357       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:16:13.841391       1 main.go:299] handling current node
	I0916 11:16:23.847851       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:16:23.847885       1 main.go:299] handling current node
	I0916 11:16:33.841045       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:16:33.841083       1 main.go:299] handling current node
	I0916 11:16:43.840919       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:16:43.840966       1 main.go:299] handling current node
	I0916 11:16:53.841545       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:16:53.841604       1 main.go:299] handling current node
	I0916 11:17:03.843816       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:17:03.843881       1 main.go:299] handling current node
	I0916 11:17:13.840781       1 main.go:295] Handling node with IPs: map[192.168.103.2:{}]
	I0916 11:17:13.840821       1 main.go:299] handling current node
	
	
	==> kube-apiserver [b8b7a2952008364403b8b3a7f0ec5cca63e9cd244427d95edd2b992a7dff815d] <==
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0916 11:14:11.149855       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:14:25.124806       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:14:25.124851       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:14:25.124871       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:15:09.927590       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:15:09.927653       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:15:09.927664       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:15:41.597147       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:15:41.597188       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:15:41.597195       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0916 11:16:09.524614       1 handler_proxy.go:102] no RequestInfo found in the context
	E0916 11:16:09.524684       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0916 11:16:09.524692       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:16:26.215211       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:16:26.215257       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:16:26.215266       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0916 11:17:09.419923       1 client.go:360] parsed scheme: "passthrough"
	I0916 11:17:09.419966       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0916 11:17:09.419974       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0916 11:17:09.524899       1 handler_proxy.go:102] no RequestInfo found in the context
	E0916 11:17:09.524959       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0916 11:17:09.524968       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [8dedfda17aef64d94e616f000f9e226b29f8f0ee8b642400d83681bf9b9f8330] <==
	E0916 11:12:57.526990       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:13:03.801581       1 request.go:655] Throttling request took 1.048545814s, request: GET:https://192.168.103.2:8443/apis/policy/v1beta1?timeout=32s
	W0916 11:13:04.652762       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:13:28.028551       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:13:36.303211       1 request.go:655] Throttling request took 1.048431296s, request: GET:https://192.168.103.2:8443/apis/node.k8s.io/v1?timeout=32s
	W0916 11:13:37.154412       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:13:58.530051       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:14:08.804527       1 request.go:655] Throttling request took 1.048482129s, request: GET:https://192.168.103.2:8443/apis/certificates.k8s.io/v1?timeout=32s
	W0916 11:14:09.655557       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:14:29.032133       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:14:41.305844       1 request.go:655] Throttling request took 1.048724819s, request: GET:https://192.168.103.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W0916 11:14:42.157112       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:14:59.534104       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:15:13.807004       1 request.go:655] Throttling request took 1.048695794s, request: GET:https://192.168.103.2:8443/apis/apiregistration.k8s.io/v1?timeout=32s
	W0916 11:15:14.658651       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:15:30.036352       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:15:46.308676       1 request.go:655] Throttling request took 1.048425709s, request: GET:https://192.168.103.2:8443/apis/apps/v1?timeout=32s
	W0916 11:15:47.159901       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:16:00.538130       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:16:18.810155       1 request.go:655] Throttling request took 1.048608181s, request: GET:https://192.168.103.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0916 11:16:19.661202       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:16:31.039982       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0916 11:16:51.311292       1 request.go:655] Throttling request took 1.048464492s, request: GET:https://192.168.103.2:8443/apis/events.k8s.io/v1beta1?timeout=32s
	W0916 11:16:52.162356       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0916 11:17:01.541679       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [1145c23e87dee884c7c908196bd5c7de96e94947ae439d9c145f0c4dba1630fc] <==
	I0916 11:09:13.322536       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0916 11:09:13.322732       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0916 11:09:13.345840       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 11:09:13.345951       1 server_others.go:185] Using iptables Proxier.
	I0916 11:09:13.346284       1 server.go:650] Version: v1.20.0
	I0916 11:09:13.347687       1 config.go:315] Starting service config controller
	I0916 11:09:13.349932       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 11:09:13.347841       1 config.go:224] Starting endpoint slice config controller
	I0916 11:09:13.420415       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 11:09:13.420676       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0916 11:09:13.450370       1 shared_informer.go:247] Caches are synced for service config 
	I0916 11:11:10.475965       1 node.go:172] Successfully retrieved node IP: 192.168.103.2
	I0916 11:11:10.476023       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.103.2), assume IPv4 operation
	W0916 11:11:10.493087       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0916 11:11:10.493201       1 server_others.go:185] Using iptables Proxier.
	I0916 11:11:10.493552       1 server.go:650] Version: v1.20.0
	I0916 11:11:10.494094       1 config.go:224] Starting endpoint slice config controller
	I0916 11:11:10.494117       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0916 11:11:10.494542       1 config.go:315] Starting service config controller
	I0916 11:11:10.494837       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0916 11:11:10.594963       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0916 11:11:10.595513       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [d4e91e6acd99eff1a1f03c740be536efde73dc3441d0ce21da107f92e07c7aa8] <==
	E0916 11:08:53.447999       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:08:53.448116       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:08:53.448269       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:08:53.448478       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:08:53.448860       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0916 11:08:53.448864       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:08:53.449019       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0916 11:08:53.449164       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:08:53.450105       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:08:53.450247       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:08:54.410511       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0916 11:08:54.433702       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0916 11:08:54.472291       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:08:54.592362       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0916 11:08:56.246004       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	I0916 11:11:05.236372       1 serving.go:331] Generated self-signed cert in-memory
	W0916 11:11:08.422847       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:11:08.422945       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:11:08.423021       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:11:08.423069       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:11:08.539159       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:11:08.539206       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:11:08.539648       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0916 11:11:08.539706       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0916 11:11:08.639920       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 16 11:15:59 old-k8s-version-371039 kubelet[1070]: E0916 11:15:59.355864    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:16:03 old-k8s-version-371039 kubelet[1070]: I0916 11:16:03.355196    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571
	Sep 16 11:16:03 old-k8s-version-371039 kubelet[1070]: E0916 11:16:03.355623    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	Sep 16 11:16:12 old-k8s-version-371039 kubelet[1070]: E0916 11:16:12.355692    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:16:14 old-k8s-version-371039 kubelet[1070]: I0916 11:16:14.354920    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571
	Sep 16 11:16:14 old-k8s-version-371039 kubelet[1070]: E0916 11:16:14.355205    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	Sep 16 11:16:26 old-k8s-version-371039 kubelet[1070]: E0916 11:16:26.355705    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:16:29 old-k8s-version-371039 kubelet[1070]: I0916 11:16:29.355001    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571
	Sep 16 11:16:29 old-k8s-version-371039 kubelet[1070]: E0916 11:16:29.355289    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	Sep 16 11:16:39 old-k8s-version-371039 kubelet[1070]: E0916 11:16:39.355981    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:16:40 old-k8s-version-371039 kubelet[1070]: I0916 11:16:40.354912    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571
	Sep 16 11:16:40 old-k8s-version-371039 kubelet[1070]: E0916 11:16:40.355189    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	Sep 16 11:16:53 old-k8s-version-371039 kubelet[1070]: E0916 11:16:53.355923    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:16:54 old-k8s-version-371039 kubelet[1070]: I0916 11:16:54.354979    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571
	Sep 16 11:16:55 old-k8s-version-371039 kubelet[1070]: I0916 11:16:55.337389    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35d9b8aed85a04d0180e3da8bde05bf29e0abe35658db319fca445550605e571
	Sep 16 11:16:55 old-k8s-version-371039 kubelet[1070]: I0916 11:16:55.337750    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 069b3bead1bc30b9d5eb89b2d9db4d250c2257e24b86e1b134c8dfb0382854f9
	Sep 16 11:16:55 old-k8s-version-371039 kubelet[1070]: E0916 11:16:55.338092    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	Sep 16 11:16:59 old-k8s-version-371039 kubelet[1070]: I0916 11:16:59.245269    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 069b3bead1bc30b9d5eb89b2d9db4d250c2257e24b86e1b134c8dfb0382854f9
	Sep 16 11:16:59 old-k8s-version-371039 kubelet[1070]: E0916 11:16:59.245633    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	Sep 16 11:17:06 old-k8s-version-371039 kubelet[1070]: E0916 11:17:06.380026    1070 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host
	Sep 16 11:17:06 old-k8s-version-371039 kubelet[1070]: E0916 11:17:06.380131    1070 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host
	Sep 16 11:17:06 old-k8s-version-371039 kubelet[1070]: E0916 11:17:06.380360    1070 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-cs7nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host
	Sep 16 11:17:06 old-k8s-version-371039 kubelet[1070]: E0916 11:17:06.380413    1070 pod_workers.go:191] Error syncing pod 480e2907-201f-461f-aa3d-d24598e679d1 ("metrics-server-9975d5f86-4f2jl_kube-system(480e2907-201f-461f-aa3d-d24598e679d1)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 16 11:17:11 old-k8s-version-371039 kubelet[1070]: I0916 11:17:11.355063    1070 scope.go:95] [topologymanager] RemoveContainer - Container ID: 069b3bead1bc30b9d5eb89b2d9db4d250c2257e24b86e1b134c8dfb0382854f9
	Sep 16 11:17:11 old-k8s-version-371039 kubelet[1070]: E0916 11:17:11.355602    1070 pod_workers.go:191] Error syncing pod f18ecf6d-08f7-42b4-adea-6cb1b71a82ee ("dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 5m0s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-7m6td_kubernetes-dashboard(f18ecf6d-08f7-42b4-adea-6cb1b71a82ee)"
	
	
	==> kubernetes-dashboard [b8894a1f49c45a030260a48d3dd12ddc81b2ba27c652688d77576697c81124e8] <==
	2024/09/16 11:11:33 Starting overwatch
	2024/09/16 11:11:33 Using namespace: kubernetes-dashboard
	2024/09/16 11:11:33 Using in-cluster config to connect to apiserver
	2024/09/16 11:11:33 Using secret token for csrf signing
	2024/09/16 11:11:33 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 11:11:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 11:11:33 Successful initial request to the apiserver, version: v1.20.0
	2024/09/16 11:11:33 Generating JWE encryption key
	2024/09/16 11:11:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 11:11:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 11:11:33 Initializing JWE encryption key from synchronized object
	2024/09/16 11:11:33 Creating in-cluster Sidecar client
	2024/09/16 11:11:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:11:33 Serving insecurely on HTTP port: 9090
	2024/09/16 11:12:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:12:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:13:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:13:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:14:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:14:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:15:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:15:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:16:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:16:33 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:17:03 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [4b7e57072db41f8a563f7872a8c96af29447848dc1694db9b8392a1b688b1de4] <==
	I0916 11:11:40.848372       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:11:40.891731       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:11:40.894031       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:11:58.320853       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:11:58.321666       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-371039_cd48ed3a-7bb7-4816-ad2b-a773b03c9c79!
	I0916 11:11:58.321659       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3df43ad2-abd4-4d32-b26b-91fa0eea8673", APIVersion:"v1", ResourceVersion:"800", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-371039_cd48ed3a-7bb7-4816-ad2b-a773b03c9c79 became leader
	I0916 11:11:58.423431       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-371039_cd48ed3a-7bb7-4816-ad2b-a773b03c9c79!
	
	
	==> storage-provisioner [b6e65b347883fafb9798059fa58b334346a8b29c61fe40422f2cf04c355fcd71] <==
	I0916 11:09:13.972762       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:09:13.980679       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:09:13.980724       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:09:13.987659       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:09:13.987719       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3df43ad2-abd4-4d32-b26b-91fa0eea8673", APIVersion:"v1", ResourceVersion:"473", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-371039_ba54aacd-67bb-46f0-b4ba-d01754da79ef became leader
	I0916 11:09:13.987846       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-371039_ba54aacd-67bb-46f0-b4ba-d01754da79ef!
	I0916 11:09:14.088020       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-371039_ba54aacd-67bb-46f0-b4ba-d01754da79ef!
	I0916 11:11:10.425227       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 11:11:40.437499       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-371039 -n old-k8s-version-371039
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-371039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context old-k8s-version-371039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (547.481µs)
helpers_test.go:263: kubectl --context old-k8s-version-371039 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (7.03s)
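Every kubectl invocation in this test fails with "fork/exec /usr/local/bin/kubectl: exec format error", meaning the kernel refused to execute the binary at all: the kubectl installed on the agent is built for a different CPU architecture than the host (the agent itself is linux/amd64 per the hostinfo and build lines in the logs), so the failure is environmental rather than a product regression. A minimal diagnostic sketch, assuming shell access to the Jenkins agent; the binary path comes from the logs above, "file" and "uname" are standard tools, and the download URL is the stock Kubernetes release pattern shown only as an illustration:

	# Compare the binary's target architecture with the host's.
	file /usr/local/bin/kubectl    # reports the ELF target, e.g. "ARM aarch64" vs the expected "x86-64"
	uname -m                       # this agent should report x86_64 (amd64)
	# If they disagree, replace the binary with the matching build, e.g.:
	curl -LO https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl
	sudo install -m 0755 kubectl /usr/local/bin/kubectl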

TestNetworkPlugins/group/kindnet/NetCatPod (1800.3s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-771611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Non-zero exit: kubectl --context kindnet-771611 replace --force -f testdata/netcat-deployment.yaml: fork/exec /usr/local/bin/kubectl: exec format error (553.242µs)
net_test.go:151: failed to apply netcat manifest: fork/exec /usr/local/bin/kubectl: exec format error
net_test.go:160: failed waiting for netcat deployment to stabilize: timed out waiting for the condition
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0916 11:32:52.829455   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestNetworkPlugins/group/kindnet/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: client rate limiter Wait returned an error: context deadline exceeded
net_test.go:163: ***** TestNetworkPlugins/group/kindnet/NetCatPod: pod "app=netcat" failed to start within 15m0s: context deadline exceeded ****
net_test.go:163: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kindnet-771611 -n kindnet-771611
net_test.go:163: TestNetworkPlugins/group/kindnet/NetCatPod: showing logs for failed pods as of 2024-09-16 11:47:44.759284961 +0000 UTC m=+5131.741542023
net_test.go:164: failed waiting for netcat pod: app=netcat within 15m0s: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/kindnet/NetCatPod (1800.30s)
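The 15m wait here could never succeed: the replace at net_test.go:149 failed to apply the netcat manifest (the same wrong-architecture kubectl as above), so no "app=netcat" pod ever existed to become ready. A sketch of the equivalent manual steps once a working kubectl is in place; the context name and manifest path are taken from the test output, the rest is standard kubectl usage:

	# Apply the manifest the test would have applied, then wait for the pod by label.
	kubectl --context kindnet-771611 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context kindnet-771611 wait --for=condition=Ready pod -l app=netcat --timeout=15m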

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (7.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hrmv2" [4ae00ae7-ba15-40b6-9f23-61722bbfb09a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004320322s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-006978 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-006978 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: fork/exec /usr/local/bin/kubectl: exec format error (658.137µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-006978 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": fork/exec /usr/local/bin/kubectl: exec format error
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
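The assertion itself checks that the dashboard-metrics-scraper deployment carries the override image registry.k8s.io/echoserver:1.4 (set when the dashboard addon is enabled with --images=MetricsScraper=registry.k8s.io/echoserver:1.4, as seen for sibling profiles in the Audit table below); the "Addon deployment info" is empty only because the describe call crashed on the broken kubectl. With a working kubectl the same check is roughly the following, using the context and resource names from the output above:

	# Confirm the scraper deployment carries the overridden image.
	kubectl --context default-k8s-diff-port-006978 -n kubernetes-dashboard \
	  describe deploy/dashboard-metrics-scraper | grep 'Image:'   # expect registry.k8s.io/echoserver:1.4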
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-006978
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-006978:

-- stdout --
	[
	    {
	        "Id": "92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751",
	        "Created": "2024-09-16T11:12:40.853683512Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309948,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-16T11:13:30.152634053Z",
	            "FinishedAt": "2024-09-16T11:13:29.287625112Z"
	        },
	        "Image": "sha256:20d492278eed428d119466f58713403332b5d2ac1db7c6863f797e2406f2b671",
	        "ResolvConfPath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/hostname",
	        "HostsPath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/hosts",
	        "LogPath": "/var/lib/docker/containers/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751/92220cda3aab6b195a38e8c57831506f5fef8d5456bc7737cfbe9edbdceff751-json.log",
	        "Name": "/default-k8s-diff-port-006978",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-006978:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-006978",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c-init/diff:/var/lib/docker/overlay2/e391a31d00d345931d18da68f8a7d7338830f6d9b98fe203461225d03a65d594/diff",
	                "MergedDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/016c3d4fc319478973bdc14ac47307bb58de6ea7b8c61d9d42356a23a7da5f9c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-006978",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-006978/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-006978",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-006978",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-006978",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c8819a3f95e8f2ed1d179e7725bfb19c008b553ee30ff5f3980f9eac7ac888bd",
	            "SandboxKey": "/var/run/docker/netns/c8819a3f95e8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-006978": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "77357235afcef96415382e78c67fcc53123318fac9325f81acae0f265d8eb86e",
	                    "EndpointID": "729abc4fa69d81686c4170de95f16975cdaa777b0a41b6aa132b9abfbae8ea10",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-006978",
	                        "92220cda3aab"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
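The inspect output shows the node container running with the API port 8444/tcp published on 127.0.0.1:33096, so the cluster itself was reachable and the failure is confined to the host's kubectl. For reference, the port map alone can be extracted with a standard Go template (plain docker CLI, not part of the test harness):

	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-006978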
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-006978 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-006978 logs -n 25: (1.632758218s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -p newest-cni-802652 --memory=2200 --alsologtostderr   | newest-cni-802652      | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                        |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                        |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                        |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                        |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-802652             | newest-cni-802652      | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p newest-cni-802652                                   | newest-cni-802652      | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-802652                  | newest-cni-802652      | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p newest-cni-802652 --memory=2200 --alsologtostderr   | newest-cni-802652      | jenkins | v1.34.0 | 16 Sep 24 11:14 UTC | 16 Sep 24 11:15 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                        |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                        |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                        |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                        |         |         |                     |                     |
	|         | --driver=docker  --container-runtime=containerd        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                        |         |         |                     |                     |
	| image   | newest-cni-802652 image list                           | newest-cni-802652      | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	|         | --format=json                                          |                        |         |         |                     |                     |
	| pause   | -p newest-cni-802652                                   | newest-cni-802652      | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p newest-cni-802652                                   | newest-cni-802652      | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p newest-cni-802652                                   | newest-cni-802652      | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	| delete  | -p newest-cni-802652                                   | newest-cni-802652      | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:15 UTC |
	| start   | -p auto-771611 --memory=3072                           | auto-771611            | jenkins | v1.34.0 | 16 Sep 24 11:15 UTC | 16 Sep 24 11:16 UTC |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	| ssh     | -p auto-771611 pgrep -a                                | auto-771611            | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	|         | kubelet                                                |                        |         |         |                     |                     |
	| image   | embed-certs-679624 image list                          | embed-certs-679624     | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	|         | --format=json                                          |                        |         |         |                     |                     |
	| pause   | -p embed-certs-679624                                  | embed-certs-679624     | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p embed-certs-679624                                  | embed-certs-679624     | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p embed-certs-679624                                  | embed-certs-679624     | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	| delete  | -p embed-certs-679624                                  | embed-certs-679624     | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:16 UTC |
	| start   | -p kindnet-771611                                      | kindnet-771611         | jenkins | v1.34.0 | 16 Sep 24 11:16 UTC | 16 Sep 24 11:17 UTC |
	|         | --memory=3072                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                        |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker                          |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	| image   | old-k8s-version-371039 image                           | old-k8s-version-371039 | jenkins | v1.34.0 | 16 Sep 24 11:17 UTC | 16 Sep 24 11:17 UTC |
	|         | list --format=json                                     |                        |         |         |                     |                     |
	| pause   | -p old-k8s-version-371039                              | old-k8s-version-371039 | jenkins | v1.34.0 | 16 Sep 24 11:17 UTC | 16 Sep 24 11:17 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p old-k8s-version-371039                              | old-k8s-version-371039 | jenkins | v1.34.0 | 16 Sep 24 11:17 UTC | 16 Sep 24 11:17 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p old-k8s-version-371039                              | old-k8s-version-371039 | jenkins | v1.34.0 | 16 Sep 24 11:17 UTC | 16 Sep 24 11:17 UTC |
	| delete  | -p old-k8s-version-371039                              | old-k8s-version-371039 | jenkins | v1.34.0 | 16 Sep 24 11:17 UTC | 16 Sep 24 11:17 UTC |
	| start   | -p calico-771611 --memory=3072                         | calico-771611          | jenkins | v1.34.0 | 16 Sep 24 11:17 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                        |         |         |                     |                     |
	|         | --cni=calico --driver=docker                           |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	| ssh     | -p kindnet-771611 pgrep -a                             | kindnet-771611         | jenkins | v1.34.0 | 16 Sep 24 11:17 UTC | 16 Sep 24 11:17 UTC |
	|         | kubelet                                                |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 11:17:22
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 11:17:22.787458  337729 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:17:22.787634  337729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:17:22.787663  337729 out.go:358] Setting ErrFile to fd 2...
	I0916 11:17:22.787678  337729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:17:22.787958  337729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:17:22.788617  337729 out.go:352] Setting JSON to false
	I0916 11:17:22.790069  337729 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3587,"bootTime":1726481856,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:17:22.790140  337729 start.go:139] virtualization: kvm guest
	I0916 11:17:22.792755  337729 out.go:177] * [calico-771611] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:17:22.794039  337729 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:17:22.794093  337729 notify.go:220] Checking for updates...
	I0916 11:17:22.796116  337729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:17:22.797169  337729 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:17:22.798446  337729 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:17:22.799771  337729 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:17:22.801147  337729 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:17:22.802886  337729 config.go:182] Loaded profile config "auto-771611": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:17:22.803049  337729 config.go:182] Loaded profile config "default-k8s-diff-port-006978": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:17:22.803176  337729 config.go:182] Loaded profile config "kindnet-771611": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:17:22.803339  337729 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:17:22.833852  337729 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:17:22.833959  337729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:17:22.900743  337729 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-16 11:17:22.891292195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:17:22.900865  337729 docker.go:318] overlay module found
	I0916 11:17:22.903189  337729 out.go:177] * Using the docker driver based on user configuration
	I0916 11:17:22.904585  337729 start.go:297] selected driver: docker
	I0916 11:17:22.904600  337729 start.go:901] validating driver "docker" against <nil>
	I0916 11:17:22.904612  337729 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:17:22.905454  337729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:17:22.982189  337729 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:53 OomKillDisable:true NGoroutines:76 SystemTime:2024-09-16 11:17:22.973021575 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:17:22.982399  337729 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 11:17:22.982700  337729 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:17:22.984797  337729 out.go:177] * Using Docker driver with root privileges
	I0916 11:17:22.986225  337729 cni.go:84] Creating CNI manager for "calico"
	I0916 11:17:22.986243  337729 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0916 11:17:22.986311  337729 start.go:340] cluster config:
	{Name:calico-771611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:17:22.987859  337729 out.go:177] * Starting "calico-771611" primary control-plane node in "calico-771611" cluster
	I0916 11:17:22.989145  337729 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 11:17:22.990376  337729 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0916 11:17:22.991465  337729 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:17:22.991489  337729 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 11:17:22.991502  337729 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 11:17:22.991509  337729 cache.go:56] Caching tarball of preloaded images
	I0916 11:17:22.991589  337729 preload.go:172] Found /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0916 11:17:22.991600  337729 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0916 11:17:22.991692  337729 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/config.json ...
	I0916 11:17:22.991709  337729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/config.json: {Name:mk8bdff28105bbfb914f331ce7a0baa5a416e206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0916 11:17:23.012123  337729 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 is of wrong architecture
	I0916 11:17:23.012142  337729 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 11:17:23.012203  337729 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 11:17:23.012218  337729 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 11:17:23.012222  337729 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 11:17:23.012228  337729 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 11:17:23.012236  337729 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0916 11:17:23.072450  337729 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0916 11:17:23.072496  337729 cache.go:194] Successfully downloaded all kic artifacts
	I0916 11:17:23.072543  337729 start.go:360] acquireMachinesLock for calico-771611: {Name:mk8470cb9e95b53275a0fe23c9a8597ce0b9e382 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0916 11:17:23.072689  337729 start.go:364] duration metric: took 111.542µs to acquireMachinesLock for "calico-771611"
	I0916 11:17:23.072723  337729 start.go:93] Provisioning new machine with config: &{Name:calico-771611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:17:23.072866  337729 start.go:125] createHost starting for "" (driver="docker")
	I0916 11:17:21.966795  332275 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:17:21.966828  332275 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:17:21.966894  332275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-771611
	I0916 11:17:21.987901  332275 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:17:21.987933  332275 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:17:21.987994  332275 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-771611
	I0916 11:17:21.997730  332275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/kindnet-771611/id_rsa Username:docker}
	I0916 11:17:22.013809  332275 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/kindnet-771611/id_rsa Username:docker}
	I0916 11:17:22.059809  332275 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:17:22.130625  332275 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:17:22.227127  332275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:17:22.322942  332275 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:17:22.724847  332275 start.go:971] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
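The ssh_runner command a few lines up shows how minikube makes the host reachable from pods: it rewrites the Corefile held in the coredns ConfigMap with sed, inserting a hosts block that maps host.minikube.internal to the bridge gateway (192.168.85.1 here) before the forward plugin, then replaces the ConfigMap with kubectl. A minimal Go sketch of that Corefile edit; the function name and the sample Corefile are illustrative, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a CoreDNS "hosts" block immediately before the
    // "forward" plugin line, mirroring the sed pipeline in the log above.
    func injectHostRecord(corefile, ip, host string) string {
    	block := fmt.Sprintf("    hosts {\n       %s %s\n       fallthrough\n    }\n", ip, host)
    	var b strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.Contains(line, "forward . /etc/resolv.conf") {
    			b.WriteString(block) // hosts must come before forward to win the lookup
    		}
    		b.WriteString(line)
    	}
    	return b.String()
    }

    func main() {
    	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.85.1", "host.minikube.internal"))
    }

The fallthrough directive keeps queries for any other name flowing on to the forward plugin, so only the injected record is answered locally.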
	I0916 11:17:22.726255  332275 node_ready.go:35] waiting up to 15m0s for node "kindnet-771611" to be "Ready" ...
	I0916 11:17:22.735578  332275 node_ready.go:49] node "kindnet-771611" has status "Ready":"True"
	I0916 11:17:22.735602  332275 node_ready.go:38] duration metric: took 9.325411ms for node "kindnet-771611" to be "Ready" ...
	I0916 11:17:22.735612  332275 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:17:22.751924  332275 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-5tgxr" in "kube-system" namespace to be "Ready" ...
	I0916 11:17:23.230160  332275 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-771611" context rescaled to 1 replicas
	I0916 11:17:23.268693  332275 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0916 11:17:22.130379  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:24.132458  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:23.270160  332275 addons.go:510] duration metric: took 1.342572105s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:17:24.758456  332275 pod_ready.go:103] pod "coredns-7c65d6cfc9-5tgxr" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:26.758499  332275 pod_ready.go:103] pod "coredns-7c65d6cfc9-5tgxr" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:23.074993  337729 out.go:235] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0916 11:17:23.075313  337729 start.go:159] libmachine.API.Create for "calico-771611" (driver="docker")
	I0916 11:17:23.075351  337729 client.go:168] LocalClient.Create starting
	I0916 11:17:23.075433  337729 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem
	I0916 11:17:23.075473  337729 main.go:141] libmachine: Decoding PEM data...
	I0916 11:17:23.075496  337729 main.go:141] libmachine: Parsing certificate...
	I0916 11:17:23.075563  337729 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem
	I0916 11:17:23.075586  337729 main.go:141] libmachine: Decoding PEM data...
	I0916 11:17:23.075600  337729 main.go:141] libmachine: Parsing certificate...
	I0916 11:17:23.076072  337729 cli_runner.go:164] Run: docker network inspect calico-771611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0916 11:17:23.093928  337729 cli_runner.go:211] docker network inspect calico-771611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0916 11:17:23.094003  337729 network_create.go:284] running [docker network inspect calico-771611] to gather additional debugging logs...
	I0916 11:17:23.094022  337729 cli_runner.go:164] Run: docker network inspect calico-771611
	W0916 11:17:23.111436  337729 cli_runner.go:211] docker network inspect calico-771611 returned with exit code 1
	I0916 11:17:23.111480  337729 network_create.go:287] error running [docker network inspect calico-771611]: docker network inspect calico-771611: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-771611 not found
	I0916 11:17:23.111492  337729 network_create.go:289] output of [docker network inspect calico-771611]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-771611 not found
	
	** /stderr **
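The --format argument in the inspect calls above is a Go text/template, which is why it is full of {{.Name}} actions and index expressions: docker renders the template against the network's JSON document, and the exit code 1 here only means the network does not exist yet. A toy sketch of the same templating mechanism, with Net as a stand-in for Docker's real API type:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Net is a toy stand-in for the object docker exposes to --format templates.
    type Net struct {
    	Name   string
    	Driver string
    }

    func main() {
    	// Same templating engine docker uses to render --format strings.
    	tmpl := template.Must(template.New("net").Parse(
    		`{"Name": "{{.Name}}", "Driver": "{{.Driver}}"}` + "\n"))
    	_ = tmpl.Execute(os.Stdout, Net{Name: "calico-771611", Driver: "bridge"})
    }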
	I0916 11:17:23.111572  337729 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:17:23.134623  337729 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c95c64bb41bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:bd:76:ab:c4} reservation:<nil>}
	I0916 11:17:23.135981  337729 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-fad43aa9929b IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:94:3f:61:fd} reservation:<nil>}
	I0916 11:17:23.137463  337729 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-49585fce923a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ad:45:94:54} reservation:<nil>}
	I0916 11:17:23.138469  337729 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-77357235afce IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:c7:d5:e1:f1} reservation:<nil>}
	I0916 11:17:23.140068  337729 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-4737861a1701 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:02:42:8d:ca:39:4e} reservation:<nil>}
	I0916 11:17:23.141227  337729 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-37308c957196 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:42:6f:22:a7:01} reservation:<nil>}
	I0916 11:17:23.142667  337729 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d23210}
	I0916 11:17:23.142700  337729 network_create.go:124] attempt to create docker network calico-771611 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0916 11:17:23.142755  337729 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-771611 calico-771611
	I0916 11:17:23.216504  337729 network_create.go:108] docker network calico-771611 192.168.103.0/24 created
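The network.go lines above scan candidate 192.168.x.0/24 subnets in steps of 9 (49, 58, 67, ... in this log), skipping each one whose bridge interface already exists, then create the docker network on the first free subnet. A simplified sketch of that scan, assuming the set of taken subnets is already known (minikube derives it from the host's interfaces):

    package main

    import "fmt"

    // firstFreeSubnet walks the 192.168.x.0/24 candidates in the same
    // increments of 9 seen in the log and returns the first unused one.
    func firstFreeSubnet(taken map[string]bool) string {
    	for third := 49; third <= 255; third += 9 {
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if !taken[cidr] {
    			return cidr
    		}
    	}
    	return "" // nothing free in the scanned range
    }

    func main() {
    	taken := map[string]bool{
    		"192.168.49.0/24": true, "192.168.58.0/24": true,
    		"192.168.67.0/24": true, "192.168.76.0/24": true,
    		"192.168.85.0/24": true, "192.168.94.0/24": true,
    	}
    	fmt.Println(firstFreeSubnet(taken))
    }

With the six subnets above taken, the scan lands on 192.168.103.0/24, matching the subnet the log then hands to docker network create.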
	I0916 11:17:23.216534  337729 kic.go:121] calculated static IP "192.168.103.2" for the "calico-771611" container
	I0916 11:17:23.216633  337729 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0916 11:17:23.238311  337729 cli_runner.go:164] Run: docker volume create calico-771611 --label name.minikube.sigs.k8s.io=calico-771611 --label created_by.minikube.sigs.k8s.io=true
	I0916 11:17:23.261752  337729 oci.go:103] Successfully created a docker volume calico-771611
	I0916 11:17:23.261845  337729 cli_runner.go:164] Run: docker run --rm --name calico-771611-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-771611 --entrypoint /usr/bin/test -v calico-771611:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0916 11:17:23.829963  337729 oci.go:107] Successfully prepared a docker volume calico-771611
	I0916 11:17:23.830028  337729 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:17:23.830070  337729 kic.go:194] Starting extracting preloaded images to volume ...
	I0916 11:17:23.830140  337729 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-771611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0916 11:17:26.629137  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:29.132290  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:28.345449  337729 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-771611:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.515251126s)
	I0916 11:17:28.345480  337729 kic.go:203] duration metric: took 4.515407102s to extract preloaded images to volume ...
	W0916 11:17:28.345633  337729 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0916 11:17:28.345749  337729 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0916 11:17:28.396476  337729 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-771611 --name calico-771611 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-771611 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-771611 --network calico-771611 --ip 192.168.103.2 --volume calico-771611:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0916 11:17:28.701573  337729 cli_runner.go:164] Run: docker container inspect calico-771611 --format={{.State.Running}}
	I0916 11:17:28.720925  337729 cli_runner.go:164] Run: docker container inspect calico-771611 --format={{.State.Status}}
	I0916 11:17:28.740396  337729 cli_runner.go:164] Run: docker exec calico-771611 stat /var/lib/dpkg/alternatives/iptables
	I0916 11:17:28.785805  337729 oci.go:144] the created container "calico-771611" has a running status.
	I0916 11:17:28.785840  337729 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/calico-771611/id_rsa...
	I0916 11:17:28.998776  337729 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19651-3687/.minikube/machines/calico-771611/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0916 11:17:29.026764  337729 cli_runner.go:164] Run: docker container inspect calico-771611 --format={{.State.Status}}
	I0916 11:17:29.048125  337729 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0916 11:17:29.048145  337729 kic_runner.go:114] Args: [docker exec --privileged calico-771611 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0916 11:17:29.134067  337729 cli_runner.go:164] Run: docker container inspect calico-771611 --format={{.State.Status}}
	I0916 11:17:29.158280  337729 machine.go:93] provisionDockerMachine start ...
	I0916 11:17:29.158387  337729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-771611
	I0916 11:17:29.177446  337729 main.go:141] libmachine: Using SSH client type: native
	I0916 11:17:29.177735  337729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0916 11:17:29.177750  337729 main.go:141] libmachine: About to run SSH command:
	hostname
	I0916 11:17:29.375202  337729 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-771611
	
	I0916 11:17:29.375235  337729 ubuntu.go:169] provisioning hostname "calico-771611"
	I0916 11:17:29.375286  337729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-771611
	I0916 11:17:29.394804  337729 main.go:141] libmachine: Using SSH client type: native
	I0916 11:17:29.395053  337729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0916 11:17:29.395078  337729 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-771611 && echo "calico-771611" | sudo tee /etc/hostname
	I0916 11:17:29.544327  337729 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-771611
	
	I0916 11:17:29.544400  337729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-771611
	I0916 11:17:29.563562  337729 main.go:141] libmachine: Using SSH client type: native
	I0916 11:17:29.563816  337729 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x86c560] 0x86f240 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I0916 11:17:29.563852  337729 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-771611' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-771611/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-771611' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0916 11:17:29.696009  337729 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0916 11:17:29.696049  337729 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19651-3687/.minikube CaCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19651-3687/.minikube}
	I0916 11:17:29.696102  337729 ubuntu.go:177] setting up certificates
	I0916 11:17:29.696121  337729 provision.go:84] configureAuth start
	I0916 11:17:29.696192  337729 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-771611
	I0916 11:17:29.713706  337729 provision.go:143] copyHostCerts
	I0916 11:17:29.713790  337729 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem, removing ...
	I0916 11:17:29.713802  337729 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem
	I0916 11:17:29.713875  337729 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/key.pem (1675 bytes)
	I0916 11:17:29.714035  337729 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem, removing ...
	I0916 11:17:29.714050  337729 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem
	I0916 11:17:29.714094  337729 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/ca.pem (1078 bytes)
	I0916 11:17:29.714175  337729 exec_runner.go:144] found /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem, removing ...
	I0916 11:17:29.714185  337729 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem
	I0916 11:17:29.714219  337729 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19651-3687/.minikube/cert.pem (1123 bytes)
	I0916 11:17:29.714286  337729 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem org=jenkins.calico-771611 san=[127.0.0.1 192.168.103.2 calico-771611 localhost minikube]
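The provision.go:117 line above mints a server certificate for the machine, signed by the minikube CA, whose SAN list carries every IP and hostname the daemon must answer as (127.0.0.1, 192.168.103.2, calico-771611, localhost, minikube). A self-contained crypto/x509 sketch of that SAN handling; it self-signs for brevity, whereas minikube signs with ca.pem and ca-key.pem as the parent:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.calico-771611"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the log: IPs and names the server must be valid for.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
    		DNSNames:    []string{"calico-771611", "localhost", "minikube"},
    	}
    	// Self-signed here; minikube passes the CA cert/key as the parent instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
    }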
	I0916 11:17:29.770881  337729 provision.go:177] copyRemoteCerts
	I0916 11:17:29.770945  337729 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0916 11:17:29.770996  337729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-771611
	I0916 11:17:29.789693  337729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/calico-771611/id_rsa Username:docker}
	I0916 11:17:29.885112  337729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0916 11:17:29.909069  337729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0916 11:17:29.932114  337729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0916 11:17:29.956379  337729 provision.go:87] duration metric: took 260.238929ms to configureAuth
	I0916 11:17:29.956415  337729 ubuntu.go:193] setting minikube options for container-runtime
	I0916 11:17:29.956612  337729 config.go:182] Loaded profile config "calico-771611": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:17:29.956628  337729 machine.go:96] duration metric: took 798.318684ms to provisionDockerMachine
	I0916 11:17:29.956634  337729 client.go:171] duration metric: took 6.881275251s to LocalClient.Create
	I0916 11:17:29.956652  337729 start.go:167] duration metric: took 6.881343736s to libmachine.API.Create "calico-771611"
	I0916 11:17:29.956662  337729 start.go:293] postStartSetup for "calico-771611" (driver="docker")
	I0916 11:17:29.956670  337729 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0916 11:17:29.956711  337729 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0916 11:17:29.956757  337729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-771611
	I0916 11:17:29.975272  337729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/calico-771611/id_rsa Username:docker}
	I0916 11:17:30.077006  337729 ssh_runner.go:195] Run: cat /etc/os-release
	I0916 11:17:30.080209  337729 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0916 11:17:30.080250  337729 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0916 11:17:30.080261  337729 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0916 11:17:30.080269  337729 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0916 11:17:30.080283  337729 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/addons for local assets ...
	I0916 11:17:30.080351  337729 filesync.go:126] Scanning /home/jenkins/minikube-integration/19651-3687/.minikube/files for local assets ...
	I0916 11:17:30.080461  337729 filesync.go:149] local asset: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem -> 111892.pem in /etc/ssl/certs
	I0916 11:17:30.080591  337729 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0916 11:17:30.089028  337729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:17:30.113191  337729 start.go:296] duration metric: took 156.516238ms for postStartSetup
	I0916 11:17:30.113560  337729 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-771611
	I0916 11:17:30.132717  337729 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/config.json ...
	I0916 11:17:30.133023  337729 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 11:17:30.133076  337729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-771611
	I0916 11:17:30.150896  337729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/calico-771611/id_rsa Username:docker}
	I0916 11:17:30.244760  337729 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0916 11:17:30.248943  337729 start.go:128] duration metric: took 7.176060433s to createHost
	I0916 11:17:30.248968  337729 start.go:83] releasing machines lock for "calico-771611", held for 7.176262632s
	I0916 11:17:30.249036  337729 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-771611
	I0916 11:17:30.267365  337729 ssh_runner.go:195] Run: cat /version.json
	I0916 11:17:30.267398  337729 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0916 11:17:30.267411  337729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-771611
	I0916 11:17:30.267470  337729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-771611
	I0916 11:17:30.286844  337729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/calico-771611/id_rsa Username:docker}
	I0916 11:17:30.287417  337729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/calico-771611/id_rsa Username:docker}
	I0916 11:17:30.383378  337729 ssh_runner.go:195] Run: systemctl --version
	I0916 11:17:30.459411  337729 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0916 11:17:30.464034  337729 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0916 11:17:30.488776  337729 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0916 11:17:30.488850  337729 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0916 11:17:30.517236  337729 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0916 11:17:30.517257  337729 start.go:495] detecting cgroup driver to use...
	I0916 11:17:30.517288  337729 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0916 11:17:30.517341  337729 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0916 11:17:30.528911  337729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0916 11:17:30.539912  337729 docker.go:217] disabling cri-docker service (if available) ...
	I0916 11:17:30.539972  337729 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0916 11:17:30.552649  337729 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0916 11:17:30.566258  337729 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0916 11:17:30.646919  337729 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0916 11:17:30.727875  337729 docker.go:233] disabling docker service ...
	I0916 11:17:30.727943  337729 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0916 11:17:30.748920  337729 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0916 11:17:30.761279  337729 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0916 11:17:30.839484  337729 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0916 11:17:30.920583  337729 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0916 11:17:30.932055  337729 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0916 11:17:30.948494  337729 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0916 11:17:30.958582  337729 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0916 11:17:30.968663  337729 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0916 11:17:30.968729  337729 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0916 11:17:30.978525  337729 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:17:30.988058  337729 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0916 11:17:30.997501  337729 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0916 11:17:31.006907  337729 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0916 11:17:31.016232  337729 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0916 11:17:31.026455  337729 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0916 11:17:31.036333  337729 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0916 11:17:31.046166  337729 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0916 11:17:31.054711  337729 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0916 11:17:31.063190  337729 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:17:31.146225  337729 ssh_runner.go:195] Run: sudo systemctl restart containerd
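The burst of sed commands above rewrites /etc/containerd/config.toml in place: it pins the sandbox (pause) image, forces SystemdCgroup = false to match the cgroupfs driver detected on the host, normalizes the runtime name to io.containerd.runc.v2, then daemon-reloads and restarts containerd. The two central substitutions, expressed with Go's regexp package over a trimmed sample config (the sample is illustrative, not the full file):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	config := `[plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
    	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	config = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
    		ReplaceAllString(config, "${1}SystemdCgroup = false")
    	// Equivalent of: sed -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|'
    	config = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
    		ReplaceAllString(config, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`)
    	fmt.Print(config)
    }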
	I0916 11:17:31.252019  337729 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0916 11:17:31.252081  337729 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0916 11:17:31.256469  337729 start.go:563] Will wait 60s for crictl version
	I0916 11:17:31.256532  337729 ssh_runner.go:195] Run: which crictl
	I0916 11:17:31.260187  337729 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0916 11:17:31.293623  337729 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0916 11:17:31.293684  337729 ssh_runner.go:195] Run: containerd --version
	I0916 11:17:31.315429  337729 ssh_runner.go:195] Run: containerd --version
	I0916 11:17:31.340146  337729 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0916 11:17:28.758922  332275 pod_ready.go:103] pod "coredns-7c65d6cfc9-5tgxr" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:31.258543  332275 pod_ready.go:103] pod "coredns-7c65d6cfc9-5tgxr" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:31.341453  337729 cli_runner.go:164] Run: docker network inspect calico-771611 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0916 11:17:31.358834  337729 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0916 11:17:31.362538  337729 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
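The grep/echo pipeline above (and the identical one for control-plane.minikube.internal further down) is an idempotent hosts-file upsert: drop any existing line for the name, append the fresh record, and copy the result over /etc/hosts. A Go sketch of the same rewrite, operating on a string rather than the file; the function name is made up for illustration:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHostRecord drops any line ending in "\t<host>" and appends
    // "ip\thost", mirroring the grep -v / echo pipeline in the log.
    func upsertHostRecord(hosts, ip, host string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	hosts := "127.0.0.1\tlocalhost\n192.168.103.1\thost.minikube.internal\n"
    	// Running the upsert again leaves exactly one record for the name.
    	fmt.Print(upsertHostRecord(hosts, "192.168.103.1", "host.minikube.internal"))
    }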
	I0916 11:17:31.373984  337729 kubeadm.go:883] updating cluster {Name:calico-771611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0916 11:17:31.374115  337729 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 11:17:31.374179  337729 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:17:31.406002  337729 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:17:31.406022  337729 containerd.go:534] Images already preloaded, skipping extraction
	I0916 11:17:31.406066  337729 ssh_runner.go:195] Run: sudo crictl images --output json
	I0916 11:17:31.439164  337729 containerd.go:627] all images are preloaded for containerd runtime.
	I0916 11:17:31.439184  337729 cache_images.go:84] Images are preloaded, skipping loading
	I0916 11:17:31.439191  337729 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.31.1 containerd true true} ...
	I0916 11:17:31.439273  337729 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-771611 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:calico-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0916 11:17:31.439325  337729 ssh_runner.go:195] Run: sudo crictl info
	I0916 11:17:31.473515  337729 cni.go:84] Creating CNI manager for "calico"
	I0916 11:17:31.473543  337729 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0916 11:17:31.473564  337729 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-771611 NodeName:calico-771611 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0916 11:17:31.473701  337729 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "calico-771611"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0916 11:17:31.473759  337729 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0916 11:17:31.482458  337729 binaries.go:44] Found k8s binaries, skipping transfer
	I0916 11:17:31.482573  337729 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0916 11:17:31.490901  337729 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0916 11:17:31.507938  337729 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0916 11:17:31.526153  337729 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2170 bytes)
	I0916 11:17:31.544580  337729 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0916 11:17:31.547874  337729 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0916 11:17:31.558417  337729 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:17:31.638270  337729 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:17:31.651445  337729 certs.go:68] Setting up /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611 for IP: 192.168.103.2
	I0916 11:17:31.651470  337729 certs.go:194] generating shared ca certs ...
	I0916 11:17:31.651491  337729 certs.go:226] acquiring lock for ca certs: {Name:mk4b9a3a00ee9ff5cc0ce1b6e575845534270c29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:17:31.651673  337729 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key
	I0916 11:17:31.651750  337729 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key
	I0916 11:17:31.651763  337729 certs.go:256] generating profile certs ...
	I0916 11:17:31.651831  337729 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.key
	I0916 11:17:31.651848  337729 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt with IP's: []
	I0916 11:17:31.732064  337729 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt ...
	I0916 11:17:31.732103  337729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: {Name:mk25bbe09aae32dd2328a388867888ea38b71593 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:17:31.732297  337729 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.key ...
	I0916 11:17:31.732313  337729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.key: {Name:mk4cfebafea379e355c6d9f2d99d6fe5f218e001 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:17:31.732396  337729 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/apiserver.key.758e01c9
	I0916 11:17:31.732416  337729 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/apiserver.crt.758e01c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I0916 11:17:32.099539  337729 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/apiserver.crt.758e01c9 ...
	I0916 11:17:32.099581  337729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/apiserver.crt.758e01c9: {Name:mkd01cfc3c7eb23363ba4505d92073969ad54a38 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:17:32.099778  337729 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/apiserver.key.758e01c9 ...
	I0916 11:17:32.099797  337729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/apiserver.key.758e01c9: {Name:mk217070e798e15c12d50c52e18ef0d31cc53e73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:17:32.099889  337729 certs.go:381] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/apiserver.crt.758e01c9 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/apiserver.crt
	I0916 11:17:32.099962  337729 certs.go:385] copying /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/apiserver.key.758e01c9 -> /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/apiserver.key
	I0916 11:17:32.100013  337729 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/proxy-client.key
	I0916 11:17:32.100027  337729 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/proxy-client.crt with IP's: []
	I0916 11:17:32.555161  337729 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/proxy-client.crt ...
	I0916 11:17:32.555210  337729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/proxy-client.crt: {Name:mk7e0d7903febba8aad0ce2c993738f41eb0d413 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:17:32.555430  337729 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/proxy-client.key ...
	I0916 11:17:32.555448  337729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/proxy-client.key: {Name:mk315111d49656502422afb38b5b0b965a6e290a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:17:32.555687  337729 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem (1338 bytes)
	W0916 11:17:32.555747  337729 certs.go:480] ignoring /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189_empty.pem, impossibly tiny 0 bytes
	I0916 11:17:32.555763  337729 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca-key.pem (1679 bytes)
	I0916 11:17:32.555798  337729 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/ca.pem (1078 bytes)
	I0916 11:17:32.555829  337729 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/cert.pem (1123 bytes)
	I0916 11:17:32.555860  337729 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/certs/key.pem (1675 bytes)
	I0916 11:17:32.555919  337729 certs.go:484] found cert: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem (1708 bytes)
	I0916 11:17:32.556783  337729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0916 11:17:32.581415  337729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0916 11:17:32.604799  337729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0916 11:17:32.630428  337729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0916 11:17:32.653890  337729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0916 11:17:32.676883  337729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0916 11:17:32.700341  337729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0916 11:17:32.722956  337729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0916 11:17:32.745412  337729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/ssl/certs/111892.pem --> /usr/share/ca-certificates/111892.pem (1708 bytes)
	I0916 11:17:32.769797  337729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0916 11:17:32.792626  337729 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19651-3687/.minikube/certs/11189.pem --> /usr/share/ca-certificates/11189.pem (1338 bytes)
	I0916 11:17:32.815615  337729 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0916 11:17:32.833021  337729 ssh_runner.go:195] Run: openssl version
	I0916 11:17:32.838324  337729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/111892.pem && ln -fs /usr/share/ca-certificates/111892.pem /etc/ssl/certs/111892.pem"
	I0916 11:17:32.847412  337729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/111892.pem
	I0916 11:17:32.850777  337729 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 16 10:40 /usr/share/ca-certificates/111892.pem
	I0916 11:17:32.850833  337729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/111892.pem
	I0916 11:17:32.857280  337729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/111892.pem /etc/ssl/certs/3ec20f2e.0"
	I0916 11:17:32.866172  337729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0916 11:17:32.875131  337729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:17:32.878722  337729 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 16 10:23 /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:17:32.878769  337729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0916 11:17:32.885335  337729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0916 11:17:32.894881  337729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11189.pem && ln -fs /usr/share/ca-certificates/11189.pem /etc/ssl/certs/11189.pem"
	I0916 11:17:32.904112  337729 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11189.pem
	I0916 11:17:32.907601  337729 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 16 10:40 /usr/share/ca-certificates/11189.pem
	I0916 11:17:32.907662  337729 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11189.pem
	I0916 11:17:32.914151  337729 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/11189.pem /etc/ssl/certs/51391683.0"
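The certificate steps above follow OpenSSL's hashed-directory convention: each CA file under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its subject hash (the output of `openssl x509 -hash -noout`) plus a `.0` suffix, which is the name OpenSSL-based clients use to look certificates up. A minimal Go sketch of that step (illustrative only, not minikube's actual code; `installCACert` is a made-up helper):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert symlinks a CA certificate into /etc/ssl/certs under
    // its OpenSSL subject hash, mirroring the "ln -fs" commands above.
    func installCACert(certPath string) error {
    	// "openssl x509 -hash -noout" prints the subject-name hash that
    	// OpenSSL uses as the lookup key, e.g. "b5213941".
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // "ln -fs" semantics: replace any stale link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }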
	I0916 11:17:32.923087  337729 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0916 11:17:32.926373  337729 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0916 11:17:32.926429  337729 kubeadm.go:392] StartCluster: {Name:calico-771611 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-771611 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 11:17:32.926502  337729 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0916 11:17:32.926544  337729 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0916 11:17:32.961870  337729 cri.go:89] found id: ""
	I0916 11:17:32.961949  337729 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0916 11:17:32.970807  337729 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0916 11:17:32.979524  337729 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0916 11:17:32.979585  337729 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0916 11:17:32.988312  337729 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0916 11:17:32.988334  337729 kubeadm.go:157] found existing configuration files:
	
	I0916 11:17:32.988376  337729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0916 11:17:32.997050  337729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0916 11:17:32.997111  337729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0916 11:17:33.005357  337729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0916 11:17:33.014485  337729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0916 11:17:33.014555  337729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0916 11:17:33.022508  337729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0916 11:17:33.030768  337729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0916 11:17:33.030818  337729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0916 11:17:33.038917  337729 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0916 11:17:33.047719  337729 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0916 11:17:33.047810  337729 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0916 11:17:33.055720  337729 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0916 11:17:33.094157  337729 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0916 11:17:33.094254  337729 kubeadm.go:310] [preflight] Running pre-flight checks
	I0916 11:17:33.110703  337729 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0916 11:17:33.110813  337729 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-gcp
	I0916 11:17:33.110887  337729 kubeadm.go:310] OS: Linux
	I0916 11:17:33.110970  337729 kubeadm.go:310] CGROUPS_CPU: enabled
	I0916 11:17:33.111044  337729 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0916 11:17:33.111089  337729 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0916 11:17:33.111154  337729 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0916 11:17:33.111225  337729 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0916 11:17:33.111310  337729 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0916 11:17:33.111370  337729 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0916 11:17:33.111415  337729 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0916 11:17:33.111459  337729 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0916 11:17:33.166893  337729 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0916 11:17:33.167031  337729 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0916 11:17:33.167137  337729 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0916 11:17:33.172112  337729 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0916 11:17:31.629065  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:34.130379  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:33.174304  337729 out.go:235]   - Generating certificates and keys ...
	I0916 11:17:33.174419  337729 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0916 11:17:33.174474  337729 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0916 11:17:33.362056  337729 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0916 11:17:33.454495  337729 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0916 11:17:33.777594  337729 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0916 11:17:34.002243  337729 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0916 11:17:34.122203  337729 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0916 11:17:34.122404  337729 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-771611 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:17:34.257797  337729 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0916 11:17:34.258036  337729 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-771611 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0916 11:17:34.519999  337729 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0916 11:17:34.731812  337729 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0916 11:17:34.811165  337729 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0916 11:17:34.811250  337729 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0916 11:17:35.138550  337729 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0916 11:17:35.280180  337729 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0916 11:17:35.475933  337729 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0916 11:17:35.588222  337729 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0916 11:17:35.642798  337729 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0916 11:17:35.643354  337729 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0916 11:17:35.645931  337729 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0916 11:17:33.757827  332275 pod_ready.go:103] pod "coredns-7c65d6cfc9-5tgxr" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:35.758494  332275 pod_ready.go:103] pod "coredns-7c65d6cfc9-5tgxr" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:36.759056  332275 pod_ready.go:93] pod "coredns-7c65d6cfc9-5tgxr" in "kube-system" namespace has status "Ready":"True"
	I0916 11:17:36.759094  332275 pod_ready.go:82] duration metric: took 14.00713657s for pod "coredns-7c65d6cfc9-5tgxr" in "kube-system" namespace to be "Ready" ...
	I0916 11:17:36.759110  332275 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-8872c" in "kube-system" namespace to be "Ready" ...
	I0916 11:17:36.761623  332275 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-8872c" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-8872c" not found
	I0916 11:17:36.761651  332275 pod_ready.go:82] duration metric: took 2.532151ms for pod "coredns-7c65d6cfc9-8872c" in "kube-system" namespace to be "Ready" ...
	E0916 11:17:36.761682  332275 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-8872c" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-8872c" not found
	I0916 11:17:36.761692  332275 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:17:36.766857  332275 pod_ready.go:93] pod "etcd-kindnet-771611" in "kube-system" namespace has status "Ready":"True"
	I0916 11:17:36.766879  332275 pod_ready.go:82] duration metric: took 5.179812ms for pod "etcd-kindnet-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:17:36.766895  332275 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:17:36.772175  332275 pod_ready.go:93] pod "kube-apiserver-kindnet-771611" in "kube-system" namespace has status "Ready":"True"
	I0916 11:17:36.772208  332275 pod_ready.go:82] duration metric: took 5.302683ms for pod "kube-apiserver-kindnet-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:17:36.772222  332275 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:17:36.777602  332275 pod_ready.go:93] pod "kube-controller-manager-kindnet-771611" in "kube-system" namespace has status "Ready":"True"
	I0916 11:17:36.777635  332275 pod_ready.go:82] duration metric: took 5.404388ms for pod "kube-controller-manager-kindnet-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:17:36.777650  332275 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-czkdd" in "kube-system" namespace to be "Ready" ...
	I0916 11:17:35.648282  337729 out.go:235]   - Booting up control plane ...
	I0916 11:17:35.648384  337729 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0916 11:17:35.648454  337729 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0916 11:17:35.648539  337729 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0916 11:17:35.657808  337729 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0916 11:17:35.663445  337729 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0916 11:17:35.663534  337729 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0916 11:17:35.753409  337729 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0916 11:17:35.753560  337729 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0916 11:17:36.254873  337729 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.308664ms
	I0916 11:17:36.255014  337729 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0916 11:17:36.955847  332275 pod_ready.go:93] pod "kube-proxy-czkdd" in "kube-system" namespace has status "Ready":"True"
	I0916 11:17:36.955895  332275 pod_ready.go:82] duration metric: took 178.235543ms for pod "kube-proxy-czkdd" in "kube-system" namespace to be "Ready" ...
	I0916 11:17:36.955910  332275 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:17:37.356358  332275 pod_ready.go:93] pod "kube-scheduler-kindnet-771611" in "kube-system" namespace has status "Ready":"True"
	I0916 11:17:37.356451  332275 pod_ready.go:82] duration metric: took 400.531781ms for pod "kube-scheduler-kindnet-771611" in "kube-system" namespace to be "Ready" ...
	I0916 11:17:37.356466  332275 pod_ready.go:39] duration metric: took 14.62084171s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
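The pod_ready waits above poll each system-critical pod until its Ready condition reports True. A minimal client-go sketch of that loop (an illustration under assumptions, not minikube's pod_ready.go; `waitPodReady` is a made-up helper):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the pod's Ready condition is True or the
    // timeout elapses, mirroring the "waiting up to 15m0s" lines above.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat transient errors as "not ready yet"
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	fmt.Println(waitPodReady(cs, "kube-system", "etcd-kindnet-771611", 15*time.Minute))
    }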
	I0916 11:17:37.356487  332275 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:17:37.356566  332275 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:17:37.372301  332275 api_server.go:72] duration metric: took 15.44475647s to wait for apiserver process to appear ...
	I0916 11:17:37.372332  332275 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:17:37.372358  332275 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0916 11:17:37.377125  332275 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0916 11:17:37.378259  332275 api_server.go:141] control plane version: v1.31.1
	I0916 11:17:37.378282  332275 api_server.go:131] duration metric: took 5.944079ms to wait for apiserver health ...
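The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, treating a 200 response with body "ok" as healthy. A minimal sketch of such a probe (illustrative; this version skips certificate verification, a choice the real check need not make):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // apiserverHealthy reports whether /healthz answered 200 "ok".
    func apiserverHealthy(url string) (bool, error) {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver presents a cert signed by the cluster CA; a
    		// bare reachability probe can skip verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return false, err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
    	ok, err := apiserverHealthy("https://192.168.85.2:8443/healthz")
    	fmt.Println(ok, err)
    }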
	I0916 11:17:37.378290  332275 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:17:37.558570  332275 system_pods.go:59] 8 kube-system pods found
	I0916 11:17:37.558613  332275 system_pods.go:61] "coredns-7c65d6cfc9-5tgxr" [8948ba20-aad4-4697-937b-6f43f3408bf8] Running
	I0916 11:17:37.558620  332275 system_pods.go:61] "etcd-kindnet-771611" [ccf9e03f-0262-48f3-afd8-272c0adb8f1a] Running
	I0916 11:17:37.558625  332275 system_pods.go:61] "kindnet-gn59w" [286df0a6-9ecb-4f78-bcac-8b4ce2c556e5] Running
	I0916 11:17:37.558630  332275 system_pods.go:61] "kube-apiserver-kindnet-771611" [3f5c9ae1-8ff5-46da-90a0-962f6129e702] Running
	I0916 11:17:37.558636  332275 system_pods.go:61] "kube-controller-manager-kindnet-771611" [652c4d39-6d6b-4553-b94b-2ec2ebdb2b96] Running
	I0916 11:17:37.558643  332275 system_pods.go:61] "kube-proxy-czkdd" [01aa938f-f94d-4282-8ef7-d4160f29d8db] Running
	I0916 11:17:37.558648  332275 system_pods.go:61] "kube-scheduler-kindnet-771611" [ae0d2855-3990-413e-868a-c9d27d8c829f] Running
	I0916 11:17:37.558655  332275 system_pods.go:61] "storage-provisioner" [32db824b-8f98-4549-ba1f-674c1a01518b] Running
	I0916 11:17:37.558667  332275 system_pods.go:74] duration metric: took 180.369636ms to wait for pod list to return data ...
	I0916 11:17:37.558688  332275 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:17:37.756368  332275 default_sa.go:45] found service account: "default"
	I0916 11:17:37.756394  332275 default_sa.go:55] duration metric: took 197.697089ms for default service account to be created ...
	I0916 11:17:37.756404  332275 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:17:37.958882  332275 system_pods.go:86] 8 kube-system pods found
	I0916 11:17:37.958913  332275 system_pods.go:89] "coredns-7c65d6cfc9-5tgxr" [8948ba20-aad4-4697-937b-6f43f3408bf8] Running
	I0916 11:17:37.958922  332275 system_pods.go:89] "etcd-kindnet-771611" [ccf9e03f-0262-48f3-afd8-272c0adb8f1a] Running
	I0916 11:17:37.958927  332275 system_pods.go:89] "kindnet-gn59w" [286df0a6-9ecb-4f78-bcac-8b4ce2c556e5] Running
	I0916 11:17:37.958932  332275 system_pods.go:89] "kube-apiserver-kindnet-771611" [3f5c9ae1-8ff5-46da-90a0-962f6129e702] Running
	I0916 11:17:37.958939  332275 system_pods.go:89] "kube-controller-manager-kindnet-771611" [652c4d39-6d6b-4553-b94b-2ec2ebdb2b96] Running
	I0916 11:17:37.958944  332275 system_pods.go:89] "kube-proxy-czkdd" [01aa938f-f94d-4282-8ef7-d4160f29d8db] Running
	I0916 11:17:37.958950  332275 system_pods.go:89] "kube-scheduler-kindnet-771611" [ae0d2855-3990-413e-868a-c9d27d8c829f] Running
	I0916 11:17:37.958954  332275 system_pods.go:89] "storage-provisioner" [32db824b-8f98-4549-ba1f-674c1a01518b] Running
	I0916 11:17:37.958963  332275 system_pods.go:126] duration metric: took 202.552635ms to wait for k8s-apps to be running ...
	I0916 11:17:37.958978  332275 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:17:37.959028  332275 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:17:37.973003  332275 system_svc.go:56] duration metric: took 14.00673ms WaitForService to wait for kubelet
	I0916 11:17:37.973044  332275 kubeadm.go:582] duration metric: took 16.045512069s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:17:37.973070  332275 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:17:38.156988  332275 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:17:38.157017  332275 node_conditions.go:123] node cpu capacity is 8
	I0916 11:17:38.157028  332275 node_conditions.go:105] duration metric: took 183.9535ms to run NodePressure ...
	I0916 11:17:38.157039  332275 start.go:241] waiting for startup goroutines ...
	I0916 11:17:38.157045  332275 start.go:246] waiting for cluster config update ...
	I0916 11:17:38.157055  332275 start.go:255] writing updated cluster config ...
	I0916 11:17:38.157312  332275 ssh_runner.go:195] Run: rm -f paused
	I0916 11:17:38.163998  332275 out.go:177] * Done! kubectl is now configured to use "kindnet-771611" cluster and "default" namespace by default
	E0916 11:17:38.165336  332275 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
	I0916 11:17:36.629939  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:39.130209  309669 pod_ready.go:103] pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:40.756103  337729 kubeadm.go:310] [api-check] The API server is healthy after 4.501535277s
	I0916 11:17:40.770203  337729 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0916 11:17:40.786161  337729 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0916 11:17:40.810724  337729 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0916 11:17:40.810947  337729 kubeadm.go:310] [mark-control-plane] Marking the node calico-771611 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0916 11:17:40.821012  337729 kubeadm.go:310] [bootstrap-token] Using token: p23tmq.c83gxxx3emg1xwr2
	I0916 11:17:40.822644  337729 out.go:235]   - Configuring RBAC rules ...
	I0916 11:17:40.822816  337729 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0916 11:17:40.827482  337729 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0916 11:17:40.834833  337729 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0916 11:17:40.838270  337729 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0916 11:17:40.841616  337729 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0916 11:17:40.845753  337729 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0916 11:17:41.162426  337729 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0916 11:17:41.586747  337729 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0916 11:17:42.162808  337729 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0916 11:17:42.163728  337729 kubeadm.go:310] 
	I0916 11:17:42.163865  337729 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0916 11:17:42.163875  337729 kubeadm.go:310] 
	I0916 11:17:42.163975  337729 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0916 11:17:42.164002  337729 kubeadm.go:310] 
	I0916 11:17:42.164048  337729 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0916 11:17:42.164139  337729 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0916 11:17:42.164213  337729 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0916 11:17:42.164227  337729 kubeadm.go:310] 
	I0916 11:17:42.164305  337729 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0916 11:17:42.164315  337729 kubeadm.go:310] 
	I0916 11:17:42.164391  337729 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0916 11:17:42.164407  337729 kubeadm.go:310] 
	I0916 11:17:42.164479  337729 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0916 11:17:42.164590  337729 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0916 11:17:42.164686  337729 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0916 11:17:42.164696  337729 kubeadm.go:310] 
	I0916 11:17:42.164832  337729 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0916 11:17:42.164943  337729 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0916 11:17:42.164954  337729 kubeadm.go:310] 
	I0916 11:17:42.165069  337729 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token p23tmq.c83gxxx3emg1xwr2 \
	I0916 11:17:42.165184  337729 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 \
	I0916 11:17:42.165222  337729 kubeadm.go:310] 	--control-plane 
	I0916 11:17:42.165237  337729 kubeadm.go:310] 
	I0916 11:17:42.165363  337729 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0916 11:17:42.165373  337729 kubeadm.go:310] 
	I0916 11:17:42.165514  337729 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token p23tmq.c83gxxx3emg1xwr2 \
	I0916 11:17:42.165619  337729 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:98a702be5b5d3b3b3becc38b5841e80991e597e246b60161686d9df7f6d6b018 
	I0916 11:17:42.168533  337729 kubeadm.go:310] W0916 11:17:33.091572    1136 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:17:42.168815  337729 kubeadm.go:310] W0916 11:17:33.092219    1136 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0916 11:17:42.169076  337729 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-gcp\n", err: exit status 1
	I0916 11:17:42.169236  337729 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0916 11:17:42.169271  337729 cni.go:84] Creating CNI manager for "calico"
	I0916 11:17:42.171879  337729 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I0916 11:17:42.173124  337729 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0916 11:17:42.173143  337729 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (253923 bytes)
	I0916 11:17:42.191559  337729 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0916 11:17:40.629493  309669 pod_ready.go:82] duration metric: took 4m0.0065481s for pod "metrics-server-6867b74b74-shznv" in "kube-system" namespace to be "Ready" ...
	E0916 11:17:40.629522  309669 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0916 11:17:40.629533  309669 pod_ready.go:39] duration metric: took 4m0.675314662s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:17:40.629551  309669 api_server.go:52] waiting for apiserver process to appear ...
	I0916 11:17:40.629597  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:17:40.629663  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:17:40.667837  309669 cri.go:89] found id: "1f3abcf5f7f43e973f2dbe8fa5f49c4b9538073b43bb9a85f4267e53a3a36e2d"
	I0916 11:17:40.667858  309669 cri.go:89] found id: "bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862"
	I0916 11:17:40.667862  309669 cri.go:89] found id: ""
	I0916 11:17:40.667868  309669 logs.go:276] 2 containers: [1f3abcf5f7f43e973f2dbe8fa5f49c4b9538073b43bb9a85f4267e53a3a36e2d bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862]
	I0916 11:17:40.667924  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.671286  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.674674  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:17:40.674730  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:17:40.707815  309669 cri.go:89] found id: "174cbcad5dfdeefd18c67a333f77412eac57b42959fe1689429240db27f5f548"
	I0916 11:17:40.707837  309669 cri.go:89] found id: "06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09"
	I0916 11:17:40.707841  309669 cri.go:89] found id: ""
	I0916 11:17:40.707847  309669 logs.go:276] 2 containers: [174cbcad5dfdeefd18c67a333f77412eac57b42959fe1689429240db27f5f548 06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09]
	I0916 11:17:40.707897  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.711435  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.714603  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:17:40.714657  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:17:40.752598  309669 cri.go:89] found id: "50e36801f5598ea07d10424053f545dc5dd90ec5f26cbc8e642ce5903df03ab0"
	I0916 11:17:40.752623  309669 cri.go:89] found id: "308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da"
	I0916 11:17:40.752627  309669 cri.go:89] found id: ""
	I0916 11:17:40.752634  309669 logs.go:276] 2 containers: [50e36801f5598ea07d10424053f545dc5dd90ec5f26cbc8e642ce5903df03ab0 308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da]
	I0916 11:17:40.752683  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.756498  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.760251  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:17:40.760315  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:17:40.800451  309669 cri.go:89] found id: "8aae7d457c9d91907e960cc200783c7b987fc3faa60e764bf6e297a4062abfc3"
	I0916 11:17:40.800478  309669 cri.go:89] found id: "3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5"
	I0916 11:17:40.800484  309669 cri.go:89] found id: ""
	I0916 11:17:40.800493  309669 logs.go:276] 2 containers: [8aae7d457c9d91907e960cc200783c7b987fc3faa60e764bf6e297a4062abfc3 3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5]
	I0916 11:17:40.800564  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.804215  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.807570  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:17:40.807636  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:17:40.847072  309669 cri.go:89] found id: "724c353ef95d6aa2f7994445fdbec2c622dfe3fa4249a6ee081f6287cd363ade"
	I0916 11:17:40.847096  309669 cri.go:89] found id: "947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce"
	I0916 11:17:40.847102  309669 cri.go:89] found id: ""
	I0916 11:17:40.847110  309669 logs.go:276] 2 containers: [724c353ef95d6aa2f7994445fdbec2c622dfe3fa4249a6ee081f6287cd363ade 947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce]
	I0916 11:17:40.847167  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.851023  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.854481  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:17:40.854550  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:17:40.887373  309669 cri.go:89] found id: "652dda2ab8f82129994a3a233d5a39997232b7edbd25a283090f11902f278931"
	I0916 11:17:40.887400  309669 cri.go:89] found id: "a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e"
	I0916 11:17:40.887405  309669 cri.go:89] found id: ""
	I0916 11:17:40.887412  309669 logs.go:276] 2 containers: [652dda2ab8f82129994a3a233d5a39997232b7edbd25a283090f11902f278931 a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e]
	I0916 11:17:40.887478  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.890814  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.893977  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:17:40.894040  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:17:40.927026  309669 cri.go:89] found id: "36216a560c65c1a4b8db30aadf1b0944b90f3fafc31450a99a43f9f0fc189397"
	I0916 11:17:40.927050  309669 cri.go:89] found id: "3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3"
	I0916 11:17:40.927054  309669 cri.go:89] found id: ""
	I0916 11:17:40.927061  309669 logs.go:276] 2 containers: [36216a560c65c1a4b8db30aadf1b0944b90f3fafc31450a99a43f9f0fc189397 3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3]
	I0916 11:17:40.927111  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.930807  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.934369  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:17:40.934445  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:17:40.968874  309669 cri.go:89] found id: "a8afbcb8b3f5e9d33bb313f7bd0bf38677e0c1e5428abe31d56f8f6c5b153276"
	I0916 11:17:40.968902  309669 cri.go:89] found id: "60884be84b9066615bf9741565f86d3efb578c13196835196f3e9476d4c6a2a2"
	I0916 11:17:40.968908  309669 cri.go:89] found id: ""
	I0916 11:17:40.968917  309669 logs.go:276] 2 containers: [a8afbcb8b3f5e9d33bb313f7bd0bf38677e0c1e5428abe31d56f8f6c5b153276 60884be84b9066615bf9741565f86d3efb578c13196835196f3e9476d4c6a2a2]
	I0916 11:17:40.968965  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.972596  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:40.976078  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:17:40.976155  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:17:41.010269  309669 cri.go:89] found id: "e5d19169d238b57c06dbca10d4ee56c90fe06036b8057ecf0a14e458bc719cc4"
	I0916 11:17:41.010290  309669 cri.go:89] found id: ""
	I0916 11:17:41.010297  309669 logs.go:276] 1 containers: [e5d19169d238b57c06dbca10d4ee56c90fe06036b8057ecf0a14e458bc719cc4]
	I0916 11:17:41.010345  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:41.013976  309669 logs.go:123] Gathering logs for storage-provisioner [a8afbcb8b3f5e9d33bb313f7bd0bf38677e0c1e5428abe31d56f8f6c5b153276] ...
	I0916 11:17:41.014004  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8afbcb8b3f5e9d33bb313f7bd0bf38677e0c1e5428abe31d56f8f6c5b153276"
	I0916 11:17:41.047952  309669 logs.go:123] Gathering logs for kubernetes-dashboard [e5d19169d238b57c06dbca10d4ee56c90fe06036b8057ecf0a14e458bc719cc4] ...
	I0916 11:17:41.047981  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5d19169d238b57c06dbca10d4ee56c90fe06036b8057ecf0a14e458bc719cc4"
	I0916 11:17:41.083597  309669 logs.go:123] Gathering logs for container status ...
	I0916 11:17:41.083628  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:17:41.122556  309669 logs.go:123] Gathering logs for kube-scheduler [3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5] ...
	I0916 11:17:41.122591  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5"
	I0916 11:17:41.169112  309669 logs.go:123] Gathering logs for kube-proxy [947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce] ...
	I0916 11:17:41.169157  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce"
	I0916 11:17:41.218304  309669 logs.go:123] Gathering logs for kindnet [36216a560c65c1a4b8db30aadf1b0944b90f3fafc31450a99a43f9f0fc189397] ...
	I0916 11:17:41.218333  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36216a560c65c1a4b8db30aadf1b0944b90f3fafc31450a99a43f9f0fc189397"
	I0916 11:17:41.263298  309669 logs.go:123] Gathering logs for kube-apiserver [1f3abcf5f7f43e973f2dbe8fa5f49c4b9538073b43bb9a85f4267e53a3a36e2d] ...
	I0916 11:17:41.263341  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f3abcf5f7f43e973f2dbe8fa5f49c4b9538073b43bb9a85f4267e53a3a36e2d"
	I0916 11:17:41.308112  309669 logs.go:123] Gathering logs for etcd [06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09] ...
	I0916 11:17:41.308149  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09"
	I0916 11:17:41.346394  309669 logs.go:123] Gathering logs for coredns [308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da] ...
	I0916 11:17:41.346428  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da"
	I0916 11:17:41.387347  309669 logs.go:123] Gathering logs for kindnet [3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3] ...
	I0916 11:17:41.387375  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3"
	I0916 11:17:41.425911  309669 logs.go:123] Gathering logs for kubelet ...
	I0916 11:17:41.425952  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:17:41.498460  309669 logs.go:123] Gathering logs for dmesg ...
	I0916 11:17:41.498511  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:17:41.525015  309669 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:17:41.525059  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:17:41.645471  309669 logs.go:123] Gathering logs for kube-controller-manager [a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e] ...
	I0916 11:17:41.645509  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e"
	I0916 11:17:41.694561  309669 logs.go:123] Gathering logs for storage-provisioner [60884be84b9066615bf9741565f86d3efb578c13196835196f3e9476d4c6a2a2] ...
	I0916 11:17:41.694597  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60884be84b9066615bf9741565f86d3efb578c13196835196f3e9476d4c6a2a2"
	I0916 11:17:41.733081  309669 logs.go:123] Gathering logs for containerd ...
	I0916 11:17:41.733111  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:17:41.791996  309669 logs.go:123] Gathering logs for etcd [174cbcad5dfdeefd18c67a333f77412eac57b42959fe1689429240db27f5f548] ...
	I0916 11:17:41.792031  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 174cbcad5dfdeefd18c67a333f77412eac57b42959fe1689429240db27f5f548"
	I0916 11:17:41.833489  309669 logs.go:123] Gathering logs for kube-scheduler [8aae7d457c9d91907e960cc200783c7b987fc3faa60e764bf6e297a4062abfc3] ...
	I0916 11:17:41.833522  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8aae7d457c9d91907e960cc200783c7b987fc3faa60e764bf6e297a4062abfc3"
	I0916 11:17:41.869897  309669 logs.go:123] Gathering logs for kube-controller-manager [652dda2ab8f82129994a3a233d5a39997232b7edbd25a283090f11902f278931] ...
	I0916 11:17:41.869929  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652dda2ab8f82129994a3a233d5a39997232b7edbd25a283090f11902f278931"
	I0916 11:17:41.926149  309669 logs.go:123] Gathering logs for kube-apiserver [bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862] ...
	I0916 11:17:41.926194  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862"
	I0916 11:17:41.971368  309669 logs.go:123] Gathering logs for coredns [50e36801f5598ea07d10424053f545dc5dd90ec5f26cbc8e642ce5903df03ab0] ...
	I0916 11:17:41.971401  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50e36801f5598ea07d10424053f545dc5dd90ec5f26cbc8e642ce5903df03ab0"
	I0916 11:17:42.007465  309669 logs.go:123] Gathering logs for kube-proxy [724c353ef95d6aa2f7994445fdbec2c622dfe3fa4249a6ee081f6287cd363ade] ...
	I0916 11:17:42.007495  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724c353ef95d6aa2f7994445fdbec2c622dfe3fa4249a6ee081f6287cd363ade"
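Each "Gathering logs for ..." step above shells out to crictl with a 400-line tail per container. A throwaway Go equivalent of one such call (illustrative only; `containerLogs` is a made-up helper, and crictl accepts container-ID prefixes):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // containerLogs fetches the last `tail` lines of a container's logs,
    // capturing both the stdout and stderr streams crictl relays.
    func containerLogs(id string, tail int) (string, error) {
    	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
    	return string(out), err
    }

    func main() {
    	logs, err := containerLogs("1f3abcf5f7f4", 400)
    	if err != nil {
    		fmt.Println("error:", err)
    	}
    	fmt.Println(logs)
    }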
	I0916 11:17:44.544803  309669 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 11:17:44.557199  309669 api_server.go:72] duration metric: took 4m8.171143619s to wait for apiserver process to appear ...
	I0916 11:17:44.557232  309669 api_server.go:88] waiting for apiserver healthz status ...
	I0916 11:17:44.557270  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:17:44.557343  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:17:44.590815  309669 cri.go:89] found id: "1f3abcf5f7f43e973f2dbe8fa5f49c4b9538073b43bb9a85f4267e53a3a36e2d"
	I0916 11:17:44.590838  309669 cri.go:89] found id: "bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862"
	I0916 11:17:44.590843  309669 cri.go:89] found id: ""
	I0916 11:17:44.590859  309669 logs.go:276] 2 containers: [1f3abcf5f7f43e973f2dbe8fa5f49c4b9538073b43bb9a85f4267e53a3a36e2d bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862]
	I0916 11:17:44.590918  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.594523  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.597999  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:17:44.598050  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:17:44.632960  309669 cri.go:89] found id: "174cbcad5dfdeefd18c67a333f77412eac57b42959fe1689429240db27f5f548"
	I0916 11:17:44.632985  309669 cri.go:89] found id: "06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09"
	I0916 11:17:44.632989  309669 cri.go:89] found id: ""
	I0916 11:17:44.632996  309669 logs.go:276] 2 containers: [174cbcad5dfdeefd18c67a333f77412eac57b42959fe1689429240db27f5f548 06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09]
	I0916 11:17:44.633049  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.636581  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.640024  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:17:44.640084  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:17:44.676863  309669 cri.go:89] found id: "50e36801f5598ea07d10424053f545dc5dd90ec5f26cbc8e642ce5903df03ab0"
	I0916 11:17:44.676883  309669 cri.go:89] found id: "308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da"
	I0916 11:17:44.676887  309669 cri.go:89] found id: ""
	I0916 11:17:44.676894  309669 logs.go:276] 2 containers: [50e36801f5598ea07d10424053f545dc5dd90ec5f26cbc8e642ce5903df03ab0 308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da]
	I0916 11:17:44.676945  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.680596  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.684177  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:17:44.684254  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:17:44.717389  309669 cri.go:89] found id: "8aae7d457c9d91907e960cc200783c7b987fc3faa60e764bf6e297a4062abfc3"
	I0916 11:17:44.717430  309669 cri.go:89] found id: "3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5"
	I0916 11:17:44.717437  309669 cri.go:89] found id: ""
	I0916 11:17:44.717447  309669 logs.go:276] 2 containers: [8aae7d457c9d91907e960cc200783c7b987fc3faa60e764bf6e297a4062abfc3 3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5]
	I0916 11:17:44.717503  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.721180  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.724513  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:17:44.724590  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:17:44.759287  309669 cri.go:89] found id: "724c353ef95d6aa2f7994445fdbec2c622dfe3fa4249a6ee081f6287cd363ade"
	I0916 11:17:44.759311  309669 cri.go:89] found id: "947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce"
	I0916 11:17:44.759317  309669 cri.go:89] found id: ""
	I0916 11:17:44.759326  309669 logs.go:276] 2 containers: [724c353ef95d6aa2f7994445fdbec2c622dfe3fa4249a6ee081f6287cd363ade 947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce]
	I0916 11:17:44.759385  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.763063  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.766762  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:17:44.766823  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:17:43.353163  337729 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.161560303s)
	I0916 11:17:43.353222  337729 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0916 11:17:43.353321  337729 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:17:43.353361  337729 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-771611 minikube.k8s.io/updated_at=2024_09_16T11_17_43_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed minikube.k8s.io/name=calico-771611 minikube.k8s.io/primary=true
	I0916 11:17:43.360567  337729 ops.go:34] apiserver oom_adj: -16
	I0916 11:17:43.451623  337729 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:17:43.951729  337729 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:17:44.452597  337729 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:17:44.951774  337729 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:17:45.451898  337729 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:17:45.952638  337729 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:17:46.451895  337729 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:17:46.952700  337729 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0916 11:17:47.061704  337729 kubeadm.go:1113] duration metric: took 3.70843483s to wait for elevateKubeSystemPrivileges
	I0916 11:17:47.061747  337729 kubeadm.go:394] duration metric: took 14.135321525s to StartCluster
	I0916 11:17:47.061769  337729 settings.go:142] acquiring lock: {Name:mk5f140da961bb985c566f732d611f3e238f1f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:17:47.061852  337729 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:17:47.063968  337729 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/kubeconfig: {Name:mk14e53f493eb1002278196bae5598421d6de4bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 11:17:47.064304  337729 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0916 11:17:47.064304  337729 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0916 11:17:47.064379  337729 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0916 11:17:47.064474  337729 addons.go:69] Setting storage-provisioner=true in profile "calico-771611"
	I0916 11:17:47.064493  337729 addons.go:234] Setting addon storage-provisioner=true in "calico-771611"
	I0916 11:17:47.064534  337729 host.go:66] Checking if "calico-771611" exists ...
	I0916 11:17:47.064561  337729 config.go:182] Loaded profile config "calico-771611": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:17:47.064550  337729 addons.go:69] Setting default-storageclass=true in profile "calico-771611"
	I0916 11:17:47.064586  337729 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-771611"
	I0916 11:17:47.064920  337729 cli_runner.go:164] Run: docker container inspect calico-771611 --format={{.State.Status}}
	I0916 11:17:47.065128  337729 cli_runner.go:164] Run: docker container inspect calico-771611 --format={{.State.Status}}
	I0916 11:17:47.067231  337729 out.go:177] * Verifying Kubernetes components...
	I0916 11:17:47.068800  337729 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0916 11:17:47.086436  337729 addons.go:234] Setting addon default-storageclass=true in "calico-771611"
	I0916 11:17:47.086480  337729 host.go:66] Checking if "calico-771611" exists ...
	I0916 11:17:47.086863  337729 cli_runner.go:164] Run: docker container inspect calico-771611 --format={{.State.Status}}
	I0916 11:17:47.088470  337729 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0916 11:17:47.089943  337729 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:17:47.089962  337729 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0916 11:17:47.090007  337729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-771611
	I0916 11:17:47.109704  337729 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0916 11:17:47.109728  337729 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0916 11:17:47.109776  337729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-771611
	I0916 11:17:47.109871  337729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/calico-771611/id_rsa Username:docker}
	I0916 11:17:47.135726  337729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/calico-771611/id_rsa Username:docker}
	I0916 11:17:47.420788  337729 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0916 11:17:47.425461  337729 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0916 11:17:47.439393  337729 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0916 11:17:47.439621  337729 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0916 11:17:48.252535  337729 start.go:971] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0916 11:17:48.254015  337729 node_ready.go:35] waiting up to 15m0s for node "calico-771611" to be "Ready" ...
	I0916 11:17:48.255910  337729 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
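
For readability, the CoreDNS rewrite logged at 11:17:47.439621 above expands to the following. This is the same command reformatted, nothing added; the binary path, kubeconfig path, and the 192.168.103.1 gateway address are all verbatim from the log:

    # Read the CoreDNS Corefile, splice in a hosts{} block that resolves
    # host.minikube.internal to the host gateway, turn on query logging,
    # then replace the ConfigMap with the edited copy.
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
            replace -f -

The "host record injected into CoreDNS's ConfigMap" line at 11:17:48.252535 confirms the replace succeeded.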
	I0916 11:17:44.801904  309669 cri.go:89] found id: "652dda2ab8f82129994a3a233d5a39997232b7edbd25a283090f11902f278931"
	I0916 11:17:44.801928  309669 cri.go:89] found id: "a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e"
	I0916 11:17:44.801934  309669 cri.go:89] found id: ""
	I0916 11:17:44.801942  309669 logs.go:276] 2 containers: [652dda2ab8f82129994a3a233d5a39997232b7edbd25a283090f11902f278931 a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e]
	I0916 11:17:44.801996  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.805462  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.808849  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:17:44.808907  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:17:44.841901  309669 cri.go:89] found id: "36216a560c65c1a4b8db30aadf1b0944b90f3fafc31450a99a43f9f0fc189397"
	I0916 11:17:44.841926  309669 cri.go:89] found id: "3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3"
	I0916 11:17:44.841932  309669 cri.go:89] found id: ""
	I0916 11:17:44.841939  309669 logs.go:276] 2 containers: [36216a560c65c1a4b8db30aadf1b0944b90f3fafc31450a99a43f9f0fc189397 3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3]
	I0916 11:17:44.841991  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.845917  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.849248  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:17:44.849333  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:17:44.884555  309669 cri.go:89] found id: "e5d19169d238b57c06dbca10d4ee56c90fe06036b8057ecf0a14e458bc719cc4"
	I0916 11:17:44.884583  309669 cri.go:89] found id: ""
	I0916 11:17:44.884595  309669 logs.go:276] 1 containers: [e5d19169d238b57c06dbca10d4ee56c90fe06036b8057ecf0a14e458bc719cc4]
	I0916 11:17:44.884649  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.888165  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:17:44.888237  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:17:44.926020  309669 cri.go:89] found id: "a8afbcb8b3f5e9d33bb313f7bd0bf38677e0c1e5428abe31d56f8f6c5b153276"
	I0916 11:17:44.926046  309669 cri.go:89] found id: "60884be84b9066615bf9741565f86d3efb578c13196835196f3e9476d4c6a2a2"
	I0916 11:17:44.926050  309669 cri.go:89] found id: ""
	I0916 11:17:44.926057  309669 logs.go:276] 2 containers: [a8afbcb8b3f5e9d33bb313f7bd0bf38677e0c1e5428abe31d56f8f6c5b153276 60884be84b9066615bf9741565f86d3efb578c13196835196f3e9476d4c6a2a2]
	I0916 11:17:44.926114  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.929976  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:44.933592  309669 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:17:44.933618  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:17:45.042950  309669 logs.go:123] Gathering logs for kubernetes-dashboard [e5d19169d238b57c06dbca10d4ee56c90fe06036b8057ecf0a14e458bc719cc4] ...
	I0916 11:17:45.042988  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5d19169d238b57c06dbca10d4ee56c90fe06036b8057ecf0a14e458bc719cc4"
	I0916 11:17:45.079283  309669 logs.go:123] Gathering logs for storage-provisioner [a8afbcb8b3f5e9d33bb313f7bd0bf38677e0c1e5428abe31d56f8f6c5b153276] ...
	I0916 11:17:45.079311  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8afbcb8b3f5e9d33bb313f7bd0bf38677e0c1e5428abe31d56f8f6c5b153276"
	I0916 11:17:45.115945  309669 logs.go:123] Gathering logs for storage-provisioner [60884be84b9066615bf9741565f86d3efb578c13196835196f3e9476d4c6a2a2] ...
	I0916 11:17:45.115977  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60884be84b9066615bf9741565f86d3efb578c13196835196f3e9476d4c6a2a2"
	I0916 11:17:45.153236  309669 logs.go:123] Gathering logs for container status ...
	I0916 11:17:45.153264  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:17:45.192847  309669 logs.go:123] Gathering logs for kubelet ...
	I0916 11:17:45.192884  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:17:45.254572  309669 logs.go:123] Gathering logs for etcd [174cbcad5dfdeefd18c67a333f77412eac57b42959fe1689429240db27f5f548] ...
	I0916 11:17:45.254608  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 174cbcad5dfdeefd18c67a333f77412eac57b42959fe1689429240db27f5f548"
	I0916 11:17:45.308634  309669 logs.go:123] Gathering logs for etcd [06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09] ...
	I0916 11:17:45.308685  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09"
	I0916 11:17:45.354793  309669 logs.go:123] Gathering logs for kube-scheduler [3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5] ...
	I0916 11:17:45.354834  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5"
	I0916 11:17:45.402413  309669 logs.go:123] Gathering logs for kube-proxy [947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce] ...
	I0916 11:17:45.402452  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce"
	I0916 11:17:45.451209  309669 logs.go:123] Gathering logs for kube-controller-manager [652dda2ab8f82129994a3a233d5a39997232b7edbd25a283090f11902f278931] ...
	I0916 11:17:45.451260  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652dda2ab8f82129994a3a233d5a39997232b7edbd25a283090f11902f278931"
	I0916 11:17:45.526413  309669 logs.go:123] Gathering logs for kube-controller-manager [a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e] ...
	I0916 11:17:45.526471  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e"
	I0916 11:17:45.585700  309669 logs.go:123] Gathering logs for kindnet [36216a560c65c1a4b8db30aadf1b0944b90f3fafc31450a99a43f9f0fc189397] ...
	I0916 11:17:45.585740  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36216a560c65c1a4b8db30aadf1b0944b90f3fafc31450a99a43f9f0fc189397"
	I0916 11:17:45.628718  309669 logs.go:123] Gathering logs for kube-apiserver [1f3abcf5f7f43e973f2dbe8fa5f49c4b9538073b43bb9a85f4267e53a3a36e2d] ...
	I0916 11:17:45.628772  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f3abcf5f7f43e973f2dbe8fa5f49c4b9538073b43bb9a85f4267e53a3a36e2d"
	I0916 11:17:45.676609  309669 logs.go:123] Gathering logs for containerd ...
	I0916 11:17:45.676659  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:17:45.731105  309669 logs.go:123] Gathering logs for kube-apiserver [bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862] ...
	I0916 11:17:45.731139  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862"
	I0916 11:17:45.778422  309669 logs.go:123] Gathering logs for coredns [308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da] ...
	I0916 11:17:45.778455  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da"
	I0916 11:17:45.823624  309669 logs.go:123] Gathering logs for kube-scheduler [8aae7d457c9d91907e960cc200783c7b987fc3faa60e764bf6e297a4062abfc3] ...
	I0916 11:17:45.823666  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8aae7d457c9d91907e960cc200783c7b987fc3faa60e764bf6e297a4062abfc3"
	I0916 11:17:45.873036  309669 logs.go:123] Gathering logs for kindnet [3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3] ...
	I0916 11:17:45.873068  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3"
	I0916 11:17:45.912497  309669 logs.go:123] Gathering logs for dmesg ...
	I0916 11:17:45.912527  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:17:45.950172  309669 logs.go:123] Gathering logs for kube-proxy [724c353ef95d6aa2f7994445fdbec2c622dfe3fa4249a6ee081f6287cd363ade] ...
	I0916 11:17:45.950215  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724c353ef95d6aa2f7994445fdbec2c622dfe3fa4249a6ee081f6287cd363ade"
	I0916 11:17:45.997022  309669 logs.go:123] Gathering logs for coredns [50e36801f5598ea07d10424053f545dc5dd90ec5f26cbc8e642ce5903df03ab0] ...
	I0916 11:17:45.997051  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50e36801f5598ea07d10424053f545dc5dd90ec5f26cbc8e642ce5903df03ab0"
	I0916 11:17:48.541061  309669 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0916 11:17:48.545402  309669 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0916 11:17:48.546479  309669 api_server.go:141] control plane version: v1.31.1
	I0916 11:17:48.546510  309669 api_server.go:131] duration metric: took 3.989270147s to wait for apiserver health ...
	I0916 11:17:48.546521  309669 system_pods.go:43] waiting for kube-system pods to appear ...
	I0916 11:17:48.546548  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0916 11:17:48.546599  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0916 11:17:48.593096  309669 cri.go:89] found id: "1f3abcf5f7f43e973f2dbe8fa5f49c4b9538073b43bb9a85f4267e53a3a36e2d"
	I0916 11:17:48.593122  309669 cri.go:89] found id: "bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862"
	I0916 11:17:48.593128  309669 cri.go:89] found id: ""
	I0916 11:17:48.593137  309669 logs.go:276] 2 containers: [1f3abcf5f7f43e973f2dbe8fa5f49c4b9538073b43bb9a85f4267e53a3a36e2d bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862]
	I0916 11:17:48.593197  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.597698  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.601817  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0916 11:17:48.601871  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0916 11:17:48.640531  309669 cri.go:89] found id: "174cbcad5dfdeefd18c67a333f77412eac57b42959fe1689429240db27f5f548"
	I0916 11:17:48.640565  309669 cri.go:89] found id: "06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09"
	I0916 11:17:48.640571  309669 cri.go:89] found id: ""
	I0916 11:17:48.640580  309669 logs.go:276] 2 containers: [174cbcad5dfdeefd18c67a333f77412eac57b42959fe1689429240db27f5f548 06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09]
	I0916 11:17:48.640632  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.645575  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.649646  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0916 11:17:48.649715  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0916 11:17:48.693376  309669 cri.go:89] found id: "50e36801f5598ea07d10424053f545dc5dd90ec5f26cbc8e642ce5903df03ab0"
	I0916 11:17:48.693409  309669 cri.go:89] found id: "308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da"
	I0916 11:17:48.693416  309669 cri.go:89] found id: ""
	I0916 11:17:48.693434  309669 logs.go:276] 2 containers: [50e36801f5598ea07d10424053f545dc5dd90ec5f26cbc8e642ce5903df03ab0 308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da]
	I0916 11:17:48.693503  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.697277  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.701006  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0916 11:17:48.701077  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0916 11:17:48.743446  309669 cri.go:89] found id: "8aae7d457c9d91907e960cc200783c7b987fc3faa60e764bf6e297a4062abfc3"
	I0916 11:17:48.743469  309669 cri.go:89] found id: "3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5"
	I0916 11:17:48.743473  309669 cri.go:89] found id: ""
	I0916 11:17:48.743480  309669 logs.go:276] 2 containers: [8aae7d457c9d91907e960cc200783c7b987fc3faa60e764bf6e297a4062abfc3 3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5]
	I0916 11:17:48.743528  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.747554  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.750767  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0916 11:17:48.750825  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0916 11:17:48.788074  309669 cri.go:89] found id: "724c353ef95d6aa2f7994445fdbec2c622dfe3fa4249a6ee081f6287cd363ade"
	I0916 11:17:48.788099  309669 cri.go:89] found id: "947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce"
	I0916 11:17:48.788104  309669 cri.go:89] found id: ""
	I0916 11:17:48.788112  309669 logs.go:276] 2 containers: [724c353ef95d6aa2f7994445fdbec2c622dfe3fa4249a6ee081f6287cd363ade 947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce]
	I0916 11:17:48.788174  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.791976  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.795240  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0916 11:17:48.795310  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0916 11:17:48.828960  309669 cri.go:89] found id: "652dda2ab8f82129994a3a233d5a39997232b7edbd25a283090f11902f278931"
	I0916 11:17:48.828982  309669 cri.go:89] found id: "a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e"
	I0916 11:17:48.828988  309669 cri.go:89] found id: ""
	I0916 11:17:48.828996  309669 logs.go:276] 2 containers: [652dda2ab8f82129994a3a233d5a39997232b7edbd25a283090f11902f278931 a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e]
	I0916 11:17:48.829053  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.832556  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.836312  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0916 11:17:48.836376  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0916 11:17:48.870289  309669 cri.go:89] found id: "36216a560c65c1a4b8db30aadf1b0944b90f3fafc31450a99a43f9f0fc189397"
	I0916 11:17:48.870310  309669 cri.go:89] found id: "3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3"
	I0916 11:17:48.870314  309669 cri.go:89] found id: ""
	I0916 11:17:48.870320  309669 logs.go:276] 2 containers: [36216a560c65c1a4b8db30aadf1b0944b90f3fafc31450a99a43f9f0fc189397 3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3]
	I0916 11:17:48.870375  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.873943  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.877408  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0916 11:17:48.877489  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0916 11:17:48.910998  309669 cri.go:89] found id: "a8afbcb8b3f5e9d33bb313f7bd0bf38677e0c1e5428abe31d56f8f6c5b153276"
	I0916 11:17:48.911026  309669 cri.go:89] found id: "60884be84b9066615bf9741565f86d3efb578c13196835196f3e9476d4c6a2a2"
	I0916 11:17:48.911033  309669 cri.go:89] found id: ""
	I0916 11:17:48.911043  309669 logs.go:276] 2 containers: [a8afbcb8b3f5e9d33bb313f7bd0bf38677e0c1e5428abe31d56f8f6c5b153276 60884be84b9066615bf9741565f86d3efb578c13196835196f3e9476d4c6a2a2]
	I0916 11:17:48.911102  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.914463  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.917494  309669 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0916 11:17:48.917560  309669 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0916 11:17:48.952390  309669 cri.go:89] found id: "e5d19169d238b57c06dbca10d4ee56c90fe06036b8057ecf0a14e458bc719cc4"
	I0916 11:17:48.952414  309669 cri.go:89] found id: ""
	I0916 11:17:48.952424  309669 logs.go:276] 1 containers: [e5d19169d238b57c06dbca10d4ee56c90fe06036b8057ecf0a14e458bc719cc4]
	I0916 11:17:48.952491  309669 ssh_runner.go:195] Run: which crictl
	I0916 11:17:48.956081  309669 logs.go:123] Gathering logs for kube-controller-manager [a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e] ...
	I0916 11:17:48.956117  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e"
	I0916 11:17:49.002328  309669 logs.go:123] Gathering logs for kindnet [3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3] ...
	I0916 11:17:49.002361  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3"
	I0916 11:17:49.036493  309669 logs.go:123] Gathering logs for storage-provisioner [a8afbcb8b3f5e9d33bb313f7bd0bf38677e0c1e5428abe31d56f8f6c5b153276] ...
	I0916 11:17:49.036523  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8afbcb8b3f5e9d33bb313f7bd0bf38677e0c1e5428abe31d56f8f6c5b153276"
	I0916 11:17:49.072358  309669 logs.go:123] Gathering logs for kubernetes-dashboard [e5d19169d238b57c06dbca10d4ee56c90fe06036b8057ecf0a14e458bc719cc4] ...
	I0916 11:17:49.072393  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e5d19169d238b57c06dbca10d4ee56c90fe06036b8057ecf0a14e458bc719cc4"
	I0916 11:17:49.107039  309669 logs.go:123] Gathering logs for dmesg ...
	I0916 11:17:49.107067  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0916 11:17:49.132197  309669 logs.go:123] Gathering logs for coredns [50e36801f5598ea07d10424053f545dc5dd90ec5f26cbc8e642ce5903df03ab0] ...
	I0916 11:17:49.132234  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 50e36801f5598ea07d10424053f545dc5dd90ec5f26cbc8e642ce5903df03ab0"
	I0916 11:17:49.171447  309669 logs.go:123] Gathering logs for kube-proxy [724c353ef95d6aa2f7994445fdbec2c622dfe3fa4249a6ee081f6287cd363ade] ...
	I0916 11:17:49.171482  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 724c353ef95d6aa2f7994445fdbec2c622dfe3fa4249a6ee081f6287cd363ade"
	I0916 11:17:49.206808  309669 logs.go:123] Gathering logs for kube-controller-manager [652dda2ab8f82129994a3a233d5a39997232b7edbd25a283090f11902f278931] ...
	I0916 11:17:49.206834  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 652dda2ab8f82129994a3a233d5a39997232b7edbd25a283090f11902f278931"
	I0916 11:17:49.259045  309669 logs.go:123] Gathering logs for container status ...
	I0916 11:17:49.259078  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0916 11:17:49.298364  309669 logs.go:123] Gathering logs for describe nodes ...
	I0916 11:17:49.298397  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0916 11:17:49.391126  309669 logs.go:123] Gathering logs for kube-apiserver [1f3abcf5f7f43e973f2dbe8fa5f49c4b9538073b43bb9a85f4267e53a3a36e2d] ...
	I0916 11:17:49.391156  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1f3abcf5f7f43e973f2dbe8fa5f49c4b9538073b43bb9a85f4267e53a3a36e2d"
	I0916 11:17:49.434010  309669 logs.go:123] Gathering logs for kube-scheduler [3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5] ...
	I0916 11:17:49.434041  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5"
	I0916 11:17:49.475836  309669 logs.go:123] Gathering logs for etcd [174cbcad5dfdeefd18c67a333f77412eac57b42959fe1689429240db27f5f548] ...
	I0916 11:17:49.475880  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 174cbcad5dfdeefd18c67a333f77412eac57b42959fe1689429240db27f5f548"
	I0916 11:17:49.516885  309669 logs.go:123] Gathering logs for kube-scheduler [8aae7d457c9d91907e960cc200783c7b987fc3faa60e764bf6e297a4062abfc3] ...
	I0916 11:17:49.516922  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8aae7d457c9d91907e960cc200783c7b987fc3faa60e764bf6e297a4062abfc3"
	I0916 11:17:49.560965  309669 logs.go:123] Gathering logs for kube-proxy [947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce] ...
	I0916 11:17:49.560997  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce"
	I0916 11:17:49.597485  309669 logs.go:123] Gathering logs for kindnet [36216a560c65c1a4b8db30aadf1b0944b90f3fafc31450a99a43f9f0fc189397] ...
	I0916 11:17:49.597511  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 36216a560c65c1a4b8db30aadf1b0944b90f3fafc31450a99a43f9f0fc189397"
	I0916 11:17:49.633403  309669 logs.go:123] Gathering logs for storage-provisioner [60884be84b9066615bf9741565f86d3efb578c13196835196f3e9476d4c6a2a2] ...
	I0916 11:17:49.633431  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60884be84b9066615bf9741565f86d3efb578c13196835196f3e9476d4c6a2a2"
	I0916 11:17:49.667338  309669 logs.go:123] Gathering logs for containerd ...
	I0916 11:17:49.667364  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0916 11:17:49.718092  309669 logs.go:123] Gathering logs for kubelet ...
	I0916 11:17:49.718130  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0916 11:17:49.784459  309669 logs.go:123] Gathering logs for kube-apiserver [bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862] ...
	I0916 11:17:49.784499  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862"
	I0916 11:17:49.835209  309669 logs.go:123] Gathering logs for etcd [06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09] ...
	I0916 11:17:49.835249  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09"
	I0916 11:17:49.879213  309669 logs.go:123] Gathering logs for coredns [308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da] ...
	I0916 11:17:49.879244  309669 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da"
	I0916 11:17:52.427845  309669 system_pods.go:59] 9 kube-system pods found
	I0916 11:17:52.427892  309669 system_pods.go:61] "coredns-7c65d6cfc9-sc74v" [5655635d-c5e6-4043-b178-77f3df972e86] Running
	I0916 11:17:52.427906  309669 system_pods.go:61] "etcd-default-k8s-diff-port-006978" [d2e16749-ded9-4a12-9454-f66910bb9e5e] Running
	I0916 11:17:52.427913  309669 system_pods.go:61] "kindnet-njckk" [666b88e3-d80f-4bd7-b0a9-2cec72a365f0] Running
	I0916 11:17:52.427923  309669 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-006978" [81a8cd00-67af-4a7b-adcb-26da8bd1403a] Running
	I0916 11:17:52.427930  309669 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-006978" [76606226-2ee9-4661-a691-29edce3f8d0e] Running
	I0916 11:17:52.427939  309669 system_pods.go:61] "kube-proxy-2mcbv" [8fae6563-2965-49e7-96b5-b9813dc369e1] Running
	I0916 11:17:52.427944  309669 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-006978" [a69636f9-c121-4c9e-856e-600dc2fea787] Running
	I0916 11:17:52.427955  309669 system_pods.go:61] "metrics-server-6867b74b74-shznv" [a7a51241-b731-46a8-abc5-cdbd6bf2d41e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 11:17:52.427964  309669 system_pods.go:61] "storage-provisioner" [08708819-cf0d-4505-a1f0-5563be02bd8c] Running
	I0916 11:17:52.427974  309669 system_pods.go:74] duration metric: took 3.881445308s to wait for pod list to return data ...
	I0916 11:17:52.427986  309669 default_sa.go:34] waiting for default service account to be created ...
	I0916 11:17:52.431067  309669 default_sa.go:45] found service account: "default"
	I0916 11:17:52.431097  309669 default_sa.go:55] duration metric: took 3.102216ms for default service account to be created ...
	I0916 11:17:52.431109  309669 system_pods.go:116] waiting for k8s-apps to be running ...
	I0916 11:17:52.437580  309669 system_pods.go:86] 9 kube-system pods found
	I0916 11:17:52.437618  309669 system_pods.go:89] "coredns-7c65d6cfc9-sc74v" [5655635d-c5e6-4043-b178-77f3df972e86] Running
	I0916 11:17:52.437629  309669 system_pods.go:89] "etcd-default-k8s-diff-port-006978" [d2e16749-ded9-4a12-9454-f66910bb9e5e] Running
	I0916 11:17:52.437636  309669 system_pods.go:89] "kindnet-njckk" [666b88e3-d80f-4bd7-b0a9-2cec72a365f0] Running
	I0916 11:17:52.437642  309669 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-006978" [81a8cd00-67af-4a7b-adcb-26da8bd1403a] Running
	I0916 11:17:52.437648  309669 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-006978" [76606226-2ee9-4661-a691-29edce3f8d0e] Running
	I0916 11:17:52.437653  309669 system_pods.go:89] "kube-proxy-2mcbv" [8fae6563-2965-49e7-96b5-b9813dc369e1] Running
	I0916 11:17:52.437659  309669 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-006978" [a69636f9-c121-4c9e-856e-600dc2fea787] Running
	I0916 11:17:52.437671  309669 system_pods.go:89] "metrics-server-6867b74b74-shznv" [a7a51241-b731-46a8-abc5-cdbd6bf2d41e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0916 11:17:52.437684  309669 system_pods.go:89] "storage-provisioner" [08708819-cf0d-4505-a1f0-5563be02bd8c] Running
	I0916 11:17:52.437694  309669 system_pods.go:126] duration metric: took 6.57852ms to wait for k8s-apps to be running ...
	I0916 11:17:52.437705  309669 system_svc.go:44] waiting for kubelet service to be running ....
	I0916 11:17:52.437762  309669 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 11:17:52.452321  309669 system_svc.go:56] duration metric: took 14.603361ms WaitForService to wait for kubelet
	I0916 11:17:52.452361  309669 kubeadm.go:582] duration metric: took 4m16.066310233s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0916 11:17:52.452388  309669 node_conditions.go:102] verifying NodePressure condition ...
	I0916 11:17:52.456114  309669 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0916 11:17:52.456150  309669 node_conditions.go:123] node cpu capacity is 8
	I0916 11:17:52.456163  309669 node_conditions.go:105] duration metric: took 3.768424ms to run NodePressure ...
	I0916 11:17:52.456177  309669 start.go:241] waiting for startup goroutines ...
	I0916 11:17:52.456186  309669 start.go:246] waiting for cluster config update ...
	I0916 11:17:52.456200  309669 start.go:255] writing updated cluster config ...
	I0916 11:17:52.456532  309669 ssh_runner.go:195] Run: rm -f paused
	I0916 11:17:52.464373  309669 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-006978" cluster and "default" namespace by default
	E0916 11:17:52.465699  309669 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error
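
The E-line above is the one hard failure at the end of this run: the test host's own kubectl could not be executed at all. "exec format error" from exec() almost always means the binary was built for a different architecture than the machine it runs on (or is truncated/corrupt). A check along these lines would confirm which (a sketch; /usr/local/bin/kubectl is the path from the log):

    # Compare what the binary was built for against what the machine is.
    file /usr/local/bin/kubectl   # e.g. "ELF 64-bit LSB executable, x86-64 ..." vs "ARM aarch64"
    uname -m                      # machine architecture, e.g. "x86_64"

The start itself still completed ("Done!" above); the error only affects the informational kubectl version check.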
	I0916 11:17:48.257431  337729 node_ready.go:49] node "calico-771611" has status "Ready":"True"
	I0916 11:17:48.257452  337729 node_ready.go:38] duration metric: took 3.409072ms for node "calico-771611" to be "Ready" ...
	I0916 11:17:48.257463  337729 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0916 11:17:48.257734  337729 addons.go:510] duration metric: took 1.193358584s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0916 11:17:48.268785  337729 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-7fbd86d5c5-lpz94" in "kube-system" namespace to be "Ready" ...
	I0916 11:17:48.757125  337729 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-771611" context rescaled to 1 replicas
	I0916 11:17:50.274876  337729 pod_ready.go:103] pod "calico-kube-controllers-7fbd86d5c5-lpz94" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:52.275382  337729 pod_ready.go:103] pod "calico-kube-controllers-7fbd86d5c5-lpz94" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:54.814992  337729 pod_ready.go:103] pod "calico-kube-controllers-7fbd86d5c5-lpz94" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:57.274624  337729 pod_ready.go:103] pod "calico-kube-controllers-7fbd86d5c5-lpz94" in "kube-system" namespace has status "Ready":"False"
	I0916 11:17:59.276126  337729 pod_ready.go:103] pod "calico-kube-controllers-7fbd86d5c5-lpz94" in "kube-system" namespace has status "Ready":"False"
	I0916 11:18:01.776192  337729 pod_ready.go:103] pod "calico-kube-controllers-7fbd86d5c5-lpz94" in "kube-system" namespace has status "Ready":"False"
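
The pod_ready:103 lines show calico-kube-controllers-7fbd86d5c5-lpz94 being re-polled every ~2s and still reporting Ready=False 14s into the 15m budget. When a controller pod sticks like this, the usual next step is to pull its events and logs, e.g. (a sketch, assuming the upstream Calico label k8s-app=calico-kube-controllers):

    kubectl -n kube-system describe pod calico-kube-controllers-7fbd86d5c5-lpz94
    kubectl -n kube-system logs -l k8s-app=calico-kube-controllers --tail=50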
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	621c203504277       523cad1a4df73       About a minute ago   Exited              dashboard-metrics-scraper   5                   3aa575bf2d59f       dashboard-metrics-scraper-7c96f5b85b-9ck82
	a8afbcb8b3f5e       6e38f40d628db       3 minutes ago        Running             storage-provisioner         2                   1154373e1d836       storage-provisioner
	e5d19169d238b       07655ddf2eebe       4 minutes ago        Running             kubernetes-dashboard        0                   d7a0eb82c13ee       kubernetes-dashboard-695b96c756-hrmv2
	50e36801f5598       c69fa2e9cbf5f       4 minutes ago        Running             coredns                     1                   e61bf900e6853       coredns-7c65d6cfc9-sc74v
	36216a560c65c       12968670680f4       4 minutes ago        Running             kindnet-cni                 1                   062764c63635a       kindnet-njckk
	60884be84b906       6e38f40d628db       4 minutes ago        Exited              storage-provisioner         1                   1154373e1d836       storage-provisioner
	724c353ef95d6       60c005f310ff3       4 minutes ago        Running             kube-proxy                  1                   6486584442d73       kube-proxy-2mcbv
	174cbcad5dfde       2e96e5913fc06       4 minutes ago        Running             etcd                        1                   41197329b8726       etcd-default-k8s-diff-port-006978
	8aae7d457c9d9       9aa1fad941575       4 minutes ago        Running             kube-scheduler              1                   067af9bdbaac2       kube-scheduler-default-k8s-diff-port-006978
	652dda2ab8f82       175ffd71cce3d       4 minutes ago        Running             kube-controller-manager     1                   570c7fa4676db       kube-controller-manager-default-k8s-diff-port-006978
	1f3abcf5f7f43       6bab7719df100       4 minutes ago        Running             kube-apiserver              1                   7fbbe962a8004       kube-apiserver-default-k8s-diff-port-006978
	308f1d6d730a2       c69fa2e9cbf5f       4 minutes ago        Exited              coredns                     0                   a427aaf4dc7bf       coredns-7c65d6cfc9-sc74v
	3d2d679d3f920       12968670680f4       5 minutes ago        Exited              kindnet-cni                 0                   c9b1db2846501       kindnet-njckk
	947c3b3b00e44       60c005f310ff3       5 minutes ago        Exited              kube-proxy                  0                   d0095dc7cbd78       kube-proxy-2mcbv
	06406ac4e01c0       2e96e5913fc06       5 minutes ago        Exited              etcd                        0                   6908ea2d82b0c       etcd-default-k8s-diff-port-006978
	3b1640b111894       9aa1fad941575       5 minutes ago        Exited              kube-scheduler              0                   75eb18111b77e       kube-scheduler-default-k8s-diff-port-006978
	a085c20f4e6d1       175ffd71cce3d       5 minutes ago        Exited              kube-controller-manager     0                   4e59876f0bb83       kube-controller-manager-default-k8s-diff-port-006978
	bdf3aa888730f       6bab7719df100       5 minutes ago        Exited              kube-apiserver              0                   8f6d53f6f0c9d       kube-apiserver-default-k8s-diff-port-006978
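
The table above captures a full control-plane restart: each core component has a Running container at ATTEMPT 1 next to its Exited ATTEMPT-0 predecessor, and dashboard-metrics-scraper is crash-looping (Exited at attempt 5 within the last minute). Its last output can be read directly on the node; crictl accepts the truncated IDs printed above (a sketch):

    # Logs and exit status of the most recent scraper attempt.
    sudo crictl logs 621c203504277
    sudo crictl inspect 621c203504277 | grep -i exitcode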
	
	
	==> containerd <==
	Sep 16 11:15:16 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:15:16.954501537Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 16 11:15:16 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:15:16.956016293Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 16 11:15:16 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:15:16.956084565Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 16 11:15:29 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:15:29.933737692Z" level=info msg="CreateContainer within sandbox \"3aa575bf2d59f141b548cffe60a23cdb6708a9e91d903aec89776ad3a6374d89\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,}"
	Sep 16 11:15:29 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:15:29.946856388Z" level=info msg="CreateContainer within sandbox \"3aa575bf2d59f141b548cffe60a23cdb6708a9e91d903aec89776ad3a6374d89\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:4,} returns container id \"f022c2d041f48c327f1b4afac56b9fd3d527e84f5a394f2045c105051dd69d24\""
	Sep 16 11:15:29 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:15:29.947489688Z" level=info msg="StartContainer for \"f022c2d041f48c327f1b4afac56b9fd3d527e84f5a394f2045c105051dd69d24\""
	Sep 16 11:15:29 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:15:29.992851971Z" level=info msg="StartContainer for \"f022c2d041f48c327f1b4afac56b9fd3d527e84f5a394f2045c105051dd69d24\" returns successfully"
	Sep 16 11:15:30 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:15:30.024476097Z" level=info msg="shim disconnected" id=f022c2d041f48c327f1b4afac56b9fd3d527e84f5a394f2045c105051dd69d24 namespace=k8s.io
	Sep 16 11:15:30 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:15:30.024542822Z" level=warning msg="cleaning up after shim disconnected" id=f022c2d041f48c327f1b4afac56b9fd3d527e84f5a394f2045c105051dd69d24 namespace=k8s.io
	Sep 16 11:15:30 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:15:30.024554223Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 11:15:30 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:15:30.611858192Z" level=info msg="RemoveContainer for \"f5bb97c80f0d5f2be04c146b0a599379c7fa65d5276e05a62ba7653bf280e8e8\""
	Sep 16 11:15:30 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:15:30.617114501Z" level=info msg="RemoveContainer for \"f5bb97c80f0d5f2be04c146b0a599379c7fa65d5276e05a62ba7653bf280e8e8\" returns successfully"
	Sep 16 11:16:38 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:16:38.932664699Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 16 11:16:38 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:16:38.963506994Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 16 11:16:38 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:16:38.964711000Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 16 11:16:38 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:16:38.964794477Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 16 11:17:03 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:17:03.941118074Z" level=info msg="CreateContainer within sandbox \"3aa575bf2d59f141b548cffe60a23cdb6708a9e91d903aec89776ad3a6374d89\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,}"
	Sep 16 11:17:03 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:17:03.956847158Z" level=info msg="CreateContainer within sandbox \"3aa575bf2d59f141b548cffe60a23cdb6708a9e91d903aec89776ad3a6374d89\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:5,} returns container id \"621c2035042771cecc1f28b081df131e0827c9a6451de57767c6299f7b3ea789\""
	Sep 16 11:17:03 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:17:03.957492381Z" level=info msg="StartContainer for \"621c2035042771cecc1f28b081df131e0827c9a6451de57767c6299f7b3ea789\""
	Sep 16 11:17:04 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:17:04.015434899Z" level=info msg="StartContainer for \"621c2035042771cecc1f28b081df131e0827c9a6451de57767c6299f7b3ea789\" returns successfully"
	Sep 16 11:17:04 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:17:04.061724091Z" level=info msg="shim disconnected" id=621c2035042771cecc1f28b081df131e0827c9a6451de57767c6299f7b3ea789 namespace=k8s.io
	Sep 16 11:17:04 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:17:04.061803035Z" level=warning msg="cleaning up after shim disconnected" id=621c2035042771cecc1f28b081df131e0827c9a6451de57767c6299f7b3ea789 namespace=k8s.io
	Sep 16 11:17:04 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:17:04.061815674Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 16 11:17:04 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:17:04.824223636Z" level=info msg="RemoveContainer for \"f022c2d041f48c327f1b4afac56b9fd3d527e84f5a394f2045c105051dd69d24\""
	Sep 16 11:17:04 default-k8s-diff-port-006978 containerd[595]: time="2024-09-16T11:17:04.829865845Z" level=info msg="RemoveContainer for \"f022c2d041f48c327f1b4afac56b9fd3d527e84f5a394f2045c105051dd69d24\" returns successfully"
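
The only errors containerd logs here are pulls of fake.domain/registry.k8s.io/echoserver:1.4. fake.domain is, by all appearances, an intentionally unresolvable registry substituted by the test for the metrics-server image, which is why metrics-server-6867b74b74-shznv sits Pending in the pod listings above. The failure is reproducible from the node (a sketch):

    # DNS for fake.domain has no answer, so the pull can never start.
    getent hosts fake.domain || echo "no such host"
    sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4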
	
	
	==> coredns [308f1d6d730a2535f1e6a084892c5da3b5b775594ef073672e029c09537b57da] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41169 - 50662 "HINFO IN 4844345484503832019.4449023886173755708. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011300932s
	
	
	==> coredns [50e36801f5598ea07d10424053f545dc5dd90ec5f26cbc8e642ce5903df03ab0] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:43233 - 45615 "HINFO IN 3544582404646040376.8592317231223959640. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.007219885s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1657641337]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:13:42.731) (total time: 30001ms):
	Trace[1657641337]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (11:14:12.732)
	Trace[1657641337]: [30.001915357s] [30.001915357s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1402107692]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:13:42.731) (total time: 30001ms):
	Trace[1402107692]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (11:14:12.732)
	Trace[1402107692]: [30.001977144s] [30.001977144s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[888767036]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (16-Sep-2024 11:13:42.731) (total time: 30001ms):
	Trace[888767036]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (11:14:12.732)
	Trace[888767036]: [30.001832115s] [30.001832115s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
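
The restarted CoreDNS instance spends its first ~30s unable to reach the API server through the service VIP (the 10.96.0.1:443 i/o timeouts traced above) and then recovers, which is consistent with kube-proxy not yet having reprogrammed the node's service rules after the restart. VIP reachability can be spot-checked like this (a sketch; /livez is readable without credentials under the default system:public-info-viewer RBAC):

    # From the node: the VIP should answer once kube-proxy rules exist.
    curl -sk https://10.96.0.1:443/livez
    # The Endpoints object behind the VIP should list the apiserver address.
    kubectl get endpoints kubernetes -n default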
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-006978
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-006978
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=90d544f06ea0f69499271b003be64a9a224d57ed
	                    minikube.k8s.io/name=default-k8s-diff-port-006978
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_16T11_12_54_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Sep 2024 11:12:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-006978
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Sep 2024 11:17:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Sep 2024 11:14:10 +0000   Mon, 16 Sep 2024 11:12:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Sep 2024 11:14:10 +0000   Mon, 16 Sep 2024 11:12:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Sep 2024 11:14:10 +0000   Mon, 16 Sep 2024 11:12:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Sep 2024 11:14:10 +0000   Mon, 16 Sep 2024 11:12:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-006978
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 b66584e923004035a25bea3a665dcc11
	  System UUID:                15408216-8343-44b6-bf08-785f58970e8a
	  Boot ID:                    271cf859-f98b-4998-a24c-7b137822f999
	  Kernel Version:             5.15.0-1068-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-sc74v                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     5m6s
	  kube-system                 etcd-default-k8s-diff-port-006978                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         5m11s
	  kube-system                 kindnet-njckk                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      5m6s
	  kube-system                 kube-apiserver-default-k8s-diff-port-006978             250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-006978    200m (2%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-proxy-2mcbv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m6s
	  kube-system                 kube-scheduler-default-k8s-diff-port-006978             100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 metrics-server-6867b74b74-shznv                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m44s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m5s
	  kubernetes-dashboard        dashboard-metrics-scraper-7c96f5b85b-9ck82              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-hrmv2                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m4s                   kube-proxy       
	  Normal   Starting                 4m22s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  5m17s (x8 over 5m17s)  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m17s (x7 over 5m17s)  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m17s (x7 over 5m17s)  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 5m12s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m12s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasNoDiskPressure    5m11s                  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  5m11s                  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     5m11s                  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           5m7s                   node-controller  Node default-k8s-diff-port-006978 event: Registered Node default-k8s-diff-port-006978 in Controller
	  Normal   Starting                 4m30s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m30s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  4m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m29s (x8 over 4m30s)  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m29s (x7 over 4m30s)  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m29s (x7 over 4m30s)  kubelet          Node default-k8s-diff-port-006978 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m22s                  node-controller  Node default-k8s-diff-port-006978 event: Registered Node default-k8s-diff-port-006978 in Controller
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +1.024015] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000007] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000005] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000001] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000001] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +2.015813] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000006] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +4.063624] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000004] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +8.191266] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000006] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	[  +0.000006] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-77357235afce
	[  +0.000002] ll header: 00000000: 02 42 c7 d5 e1 f1 02 42 c0 a8 4c 02 08 00
	
	
	==> etcd [06406ac4e01c0da912225abf9c21537955221a3a9f8dfb94deb8f00faf06ca09] <==
	{"level":"info","ts":"2024-09-16T11:12:49.134376Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-16T11:12:49.134546Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-09-16T11:12:49.134585Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-09-16T11:12:49.135386Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-16T11:12:49.135426Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-16T11:12:49.463407Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-16T11:12:49.463466Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-16T11:12:49.463496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2024-09-16T11:12:49.463511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.463520Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.463536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.463550Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-09-16T11:12:49.464413Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:49.465019Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-006978 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:12:49.465072Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:49.465051Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:12:49.465386Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:49.465431Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:12:49.466272Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:49.467109Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-09-16T11:12:49.467934Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:12:49.468738Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:12:49.470945Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:49.471063Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:12:49.471097Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> etcd [174cbcad5dfdeefd18c67a333f77412eac57b42959fe1689429240db27f5f548] <==
	{"level":"info","ts":"2024-09-16T11:13:37.622882Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T11:13:37.622896Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-09-16T11:13:37.623014Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-09-16T11:13:37.623035Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2024-09-16T11:13:37.623234Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2024-09-16T11:13:37.623407Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2024-09-16T11:13:37.623635Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:13:37.623774Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-16T11:13:38.734245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2024-09-16T11:13:38.734299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-09-16T11:13:38.734351Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2024-09-16T11:13:38.734370Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2024-09-16T11:13:38.734382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-09-16T11:13:38.734394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2024-09-16T11:13:38.734406Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2024-09-16T11:13:38.736156Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:default-k8s-diff-port-006978 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-16T11:13:38.736156Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:13:38.736194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-16T11:13:38.736569Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-16T11:13:38.736648Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-16T11:13:38.738949Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:13:38.738949Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-16T11:13:38.739871Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-16T11:13:38.740081Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2024-09-16T11:18:02.173760Z","caller":"traceutil/trace.go:171","msg":"trace[1096158814] transaction","detail":"{read_only:false; response_revision:916; number_of_response:1; }","duration":"102.577024ms","start":"2024-09-16T11:18:02.071158Z","end":"2024-09-16T11:18:02.173735Z","steps":["trace[1096158814] 'process raft request'  (duration: 49.369215ms)","trace[1096158814] 'compare'  (duration: 53.029675ms)"],"step_count":2}
	
	
	==> kernel <==
	 11:18:05 up  1:00,  0 users,  load average: 1.64, 2.24, 2.13
	Linux default-k8s-diff-port-006978 5.15.0-1068-gcp #76~20.04.1-Ubuntu SMP Tue Aug 20 15:52:45 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [36216a560c65c1a4b8db30aadf1b0944b90f3fafc31450a99a43f9f0fc189397] <==
	I0916 11:16:03.244866       1 main.go:299] handling current node
	I0916 11:16:13.244827       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:16:13.244859       1 main.go:299] handling current node
	I0916 11:16:23.243838       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:16:23.243873       1 main.go:299] handling current node
	I0916 11:16:33.243889       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:16:33.243925       1 main.go:299] handling current node
	I0916 11:16:43.236646       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:16:43.236693       1 main.go:299] handling current node
	I0916 11:16:53.238224       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:16:53.238278       1 main.go:299] handling current node
	I0916 11:17:03.245396       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:17:03.245429       1 main.go:299] handling current node
	I0916 11:17:13.237337       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:17:13.237378       1 main.go:299] handling current node
	I0916 11:17:23.243901       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:17:23.243948       1 main.go:299] handling current node
	I0916 11:17:33.245597       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:17:33.245635       1 main.go:299] handling current node
	I0916 11:17:43.237338       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:17:43.237384       1 main.go:299] handling current node
	I0916 11:17:53.245045       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:17:53.245086       1 main.go:299] handling current node
	I0916 11:18:03.245836       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:18:03.245868       1 main.go:299] handling current node
	
	
	==> kindnet [3d2d679d3f9204404ae6eba2a4077abf21a6fcc5943b04d5a2e69b78836e3cd3] <==
	I0916 11:13:00.621671       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0916 11:13:00.621894       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0916 11:13:00.622047       1 main.go:148] setting mtu 1500 for CNI 
	I0916 11:13:00.622069       1 main.go:178] kindnetd IP family: "ipv4"
	I0916 11:13:00.622081       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0916 11:13:00.948822       1 controller.go:334] Starting controller kube-network-policies
	I0916 11:13:00.948853       1 controller.go:338] Waiting for informer caches to sync
	I0916 11:13:00.948861       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0916 11:13:01.249787       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0916 11:13:01.249852       1 metrics.go:61] Registering metrics
	I0916 11:13:01.249923       1 controller.go:374] Syncing nftables rules
	I0916 11:13:10.948665       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:13:10.948741       1 main.go:299] handling current node
	I0916 11:13:20.951953       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0916 11:13:20.952035       1 main.go:299] handling current node
	
	
	==> kube-apiserver [1f3abcf5f7f43e973f2dbe8fa5f49c4b9538073b43bb9a85f4267e53a3a36e2d] <==
	I0916 11:13:42.435416       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0916 11:13:42.449610       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0916 11:13:42.655005       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.96.215"}
	I0916 11:13:42.670402       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.136.248"}
	I0916 11:13:43.580419       1 controller.go:615] quota admission added evaluator for: endpoints
	I0916 11:13:43.679862       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0916 11:13:44.079930       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	W0916 11:14:41.028607       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 11:14:41.028606       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:14:41.028695       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 11:14:41.028728       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 11:14:41.029834       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:14:41.029859       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0916 11:16:41.030372       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 11:16:41.030377       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:16:41.030466       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 11:16:41.030488       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 11:16:41.031595       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:16:41.031629       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [bdf3aa888730f420448b90d2c08e29c4518e341208ebdb9b901916839a3b5862] <==
	E0916 11:13:21.935573       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0916 11:13:21.936996       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I0916 11:13:22.042892       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.110.61.57"}
	W0916 11:13:22.049002       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:13:22.049067       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:13:22.053674       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:13:22.053736       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W0916 11:13:22.930436       1 handler_proxy.go:99] no RequestInfo found in the context
	W0916 11:13:22.930471       1 handler_proxy.go:99] no RequestInfo found in the context
	E0916 11:13:22.930477       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E0916 11:13:22.930574       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0916 11:13:22.931584       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0916 11:13:22.931628       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [652dda2ab8f82129994a3a233d5a39997232b7edbd25a283090f11902f278931] <==
	E0916 11:14:43.691834       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:14:44.141686       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:14:45.515900       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="166.75µs"
	I0916 11:14:52.138869       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="63.191µs"
	I0916 11:14:52.941983       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="58.398µs"
	E0916 11:15:13.697274       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:15:14.156195       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:15:29.943107       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="80.94µs"
	I0916 11:15:30.622167       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="70.154µs"
	I0916 11:15:32.140238       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="68.166µs"
	E0916 11:15:43.703857       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:15:44.164553       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:15:44.943427       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="81.884µs"
	E0916 11:16:13.709836       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:16:14.172201       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0916 11:16:43.715565       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:16:44.179605       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0916 11:16:50.942685       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="98.065µs"
	I0916 11:17:03.949502       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="51.72µs"
	I0916 11:17:04.834627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="59.816µs"
	I0916 11:17:12.142783       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b" duration="60.519µs"
	E0916 11:17:13.721789       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:17:14.188703       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0916 11:17:43.727314       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0916 11:17:44.197614       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [a085c20f4e6d11ad60d4935c3c6e2b3ee2a685ac38a3ca0cae9820d4bab0905e] <==
	I0916 11:12:58.409038       1 shared_informer.go:320] Caches are synced for resource quota
	I0916 11:12:58.817951       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:12:58.898115       1 shared_informer.go:320] Caches are synced for garbage collector
	I0916 11:12:58.898137       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0916 11:12:59.210806       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-006978"
	I0916 11:12:59.426362       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="1.017098138s"
	I0916 11:12:59.433522       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.101603ms"
	I0916 11:12:59.433635       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="61.021µs"
	I0916 11:12:59.520137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="123.944µs"
	I0916 11:12:59.539861       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="150.746µs"
	I0916 11:13:00.058053       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="15.011457ms"
	I0916 11:13:00.126185       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="68.061499ms"
	I0916 11:13:00.126320       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="81.97µs"
	I0916 11:13:01.081758       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="88.254µs"
	I0916 11:13:01.086415       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="103.992µs"
	I0916 11:13:01.089510       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="69.774µs"
	I0916 11:13:04.291467       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-006978"
	I0916 11:13:16.102024       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="80.781µs"
	I0916 11:13:16.119318       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="6.696792ms"
	I0916 11:13:16.119456       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="86.29µs"
	I0916 11:13:21.966599       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="18.537342ms"
	I0916 11:13:21.975332       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="8.670034ms"
	I0916 11:13:21.975437       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="61.655µs"
	I0916 11:13:21.979253       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="71.081µs"
	I0916 11:13:23.115282       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="69.174µs"
	
	
	==> kube-proxy [724c353ef95d6aa2f7994445fdbec2c622dfe3fa4249a6ee081f6287cd363ade] <==
	I0916 11:13:42.564235       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:13:42.775949       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0916 11:13:42.776038       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:13:42.795053       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:13:42.795123       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:13:42.797091       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:13:42.797528       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:13:42.797555       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:13:42.798706       1 config.go:199] "Starting service config controller"
	I0916 11:13:42.798765       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:13:42.798811       1 config.go:328] "Starting node config controller"
	I0916 11:13:42.798826       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:13:42.798805       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:13:42.798920       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:13:42.899808       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0916 11:13:42.899855       1 shared_informer.go:320] Caches are synced for node config
	I0916 11:13:42.899874       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [947c3b3b00e44516aae5e8db652e25532afcfccf08cf372511b7656ed49e1dce] <==
	I0916 11:13:00.253825       1 server_linux.go:66] "Using iptables proxy"
	I0916 11:13:00.407401       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.76.2"]
	E0916 11:13:00.407487       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0916 11:13:00.429078       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0916 11:13:00.429182       1 server_linux.go:169] "Using iptables Proxier"
	I0916 11:13:00.432606       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0916 11:13:00.434317       1 server.go:483] "Version info" version="v1.31.1"
	I0916 11:13:00.434355       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:13:00.436922       1 config.go:199] "Starting service config controller"
	I0916 11:13:00.436961       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0916 11:13:00.436992       1 config.go:328] "Starting node config controller"
	I0916 11:13:00.436998       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0916 11:13:00.437231       1 config.go:105] "Starting endpoint slice config controller"
	I0916 11:13:00.437259       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0916 11:13:00.537105       1 shared_informer.go:320] Caches are synced for node config
	I0916 11:13:00.537113       1 shared_informer.go:320] Caches are synced for service config
	I0916 11:13:00.538249       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3b1640b111894c26e26aef3c208049c9bac3c487c7b837d73cc15609a25be8a5] <==
	W0916 11:12:51.533950       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0916 11:12:51.533967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534038       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:51.534052       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534137       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:51.534151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:51.534236       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0916 11:12:51.534355       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534367       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0916 11:12:51.534374       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:51.534388       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0916 11:12:51.534396       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.339002       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0916 11:12:52.339046       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.406598       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:52.406652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.413957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0916 11:12:52.413997       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.416027       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0916 11:12:52.416071       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0916 11:12:52.594671       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0916 11:12:52.594714       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0916 11:12:54.631845       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8aae7d457c9d91907e960cc200783c7b987fc3faa60e764bf6e297a4062abfc3] <==
	I0916 11:13:38.641090       1 serving.go:386] Generated self-signed cert in-memory
	W0916 11:13:39.956033       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0916 11:13:39.956065       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0916 11:13:39.956073       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0916 11:13:39.956079       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0916 11:13:40.039097       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0916 11:13:40.039324       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0916 11:13:40.121543       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0916 11:13:40.121612       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0916 11:13:40.121831       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0916 11:13:40.121855       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0916 11:13:40.228248       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 16 11:16:38 default-k8s-diff-port-006978 kubelet[723]: E0916 11:16:38.965085     723 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 16 11:16:38 default-k8s-diff-port-006978 kubelet[723]: E0916 11:16:38.965265     723 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw8qk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-shznv_kube-system(a7a51241-b731-46a8-abc5-cdbd6bf2d41e): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" logger="UnhandledError"
	Sep 16 11:16:38 default-k8s-diff-port-006978 kubelet[723]: E0916 11:16:38.966457     723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-6867b74b74-shznv" podUID="a7a51241-b731-46a8-abc5-cdbd6bf2d41e"
	Sep 16 11:16:49 default-k8s-diff-port-006978 kubelet[723]: I0916 11:16:49.932398     723 scope.go:117] "RemoveContainer" containerID="f022c2d041f48c327f1b4afac56b9fd3d527e84f5a394f2045c105051dd69d24"
	Sep 16 11:16:49 default-k8s-diff-port-006978 kubelet[723]: E0916 11:16:49.932732     723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-9ck82_kubernetes-dashboard(00a03ef4-b23b-4678-84c1-775a17bac837)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9ck82" podUID="00a03ef4-b23b-4678-84c1-775a17bac837"
	Sep 16 11:16:50 default-k8s-diff-port-006978 kubelet[723]: E0916 11:16:50.933189     723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shznv" podUID="a7a51241-b731-46a8-abc5-cdbd6bf2d41e"
	Sep 16 11:17:03 default-k8s-diff-port-006978 kubelet[723]: I0916 11:17:03.932097     723 scope.go:117] "RemoveContainer" containerID="f022c2d041f48c327f1b4afac56b9fd3d527e84f5a394f2045c105051dd69d24"
	Sep 16 11:17:03 default-k8s-diff-port-006978 kubelet[723]: E0916 11:17:03.933778     723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shznv" podUID="a7a51241-b731-46a8-abc5-cdbd6bf2d41e"
	Sep 16 11:17:04 default-k8s-diff-port-006978 kubelet[723]: I0916 11:17:04.822818     723 scope.go:117] "RemoveContainer" containerID="f022c2d041f48c327f1b4afac56b9fd3d527e84f5a394f2045c105051dd69d24"
	Sep 16 11:17:04 default-k8s-diff-port-006978 kubelet[723]: I0916 11:17:04.823254     723 scope.go:117] "RemoveContainer" containerID="621c2035042771cecc1f28b081df131e0827c9a6451de57767c6299f7b3ea789"
	Sep 16 11:17:04 default-k8s-diff-port-006978 kubelet[723]: E0916 11:17:04.823471     723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-9ck82_kubernetes-dashboard(00a03ef4-b23b-4678-84c1-775a17bac837)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9ck82" podUID="00a03ef4-b23b-4678-84c1-775a17bac837"
	Sep 16 11:17:12 default-k8s-diff-port-006978 kubelet[723]: I0916 11:17:12.129992     723 scope.go:117] "RemoveContainer" containerID="621c2035042771cecc1f28b081df131e0827c9a6451de57767c6299f7b3ea789"
	Sep 16 11:17:12 default-k8s-diff-port-006978 kubelet[723]: E0916 11:17:12.130193     723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-9ck82_kubernetes-dashboard(00a03ef4-b23b-4678-84c1-775a17bac837)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9ck82" podUID="00a03ef4-b23b-4678-84c1-775a17bac837"
	Sep 16 11:17:17 default-k8s-diff-port-006978 kubelet[723]: E0916 11:17:17.933010     723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shznv" podUID="a7a51241-b731-46a8-abc5-cdbd6bf2d41e"
	Sep 16 11:17:22 default-k8s-diff-port-006978 kubelet[723]: I0916 11:17:22.932056     723 scope.go:117] "RemoveContainer" containerID="621c2035042771cecc1f28b081df131e0827c9a6451de57767c6299f7b3ea789"
	Sep 16 11:17:22 default-k8s-diff-port-006978 kubelet[723]: E0916 11:17:22.932863     723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-9ck82_kubernetes-dashboard(00a03ef4-b23b-4678-84c1-775a17bac837)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9ck82" podUID="00a03ef4-b23b-4678-84c1-775a17bac837"
	Sep 16 11:17:30 default-k8s-diff-port-006978 kubelet[723]: E0916 11:17:30.933358     723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shznv" podUID="a7a51241-b731-46a8-abc5-cdbd6bf2d41e"
	Sep 16 11:17:36 default-k8s-diff-port-006978 kubelet[723]: I0916 11:17:36.931937     723 scope.go:117] "RemoveContainer" containerID="621c2035042771cecc1f28b081df131e0827c9a6451de57767c6299f7b3ea789"
	Sep 16 11:17:36 default-k8s-diff-port-006978 kubelet[723]: E0916 11:17:36.932200     723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-9ck82_kubernetes-dashboard(00a03ef4-b23b-4678-84c1-775a17bac837)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9ck82" podUID="00a03ef4-b23b-4678-84c1-775a17bac837"
	Sep 16 11:17:43 default-k8s-diff-port-006978 kubelet[723]: E0916 11:17:43.932410     723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shznv" podUID="a7a51241-b731-46a8-abc5-cdbd6bf2d41e"
	Sep 16 11:17:49 default-k8s-diff-port-006978 kubelet[723]: I0916 11:17:49.932169     723 scope.go:117] "RemoveContainer" containerID="621c2035042771cecc1f28b081df131e0827c9a6451de57767c6299f7b3ea789"
	Sep 16 11:17:49 default-k8s-diff-port-006978 kubelet[723]: E0916 11:17:49.932395     723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-9ck82_kubernetes-dashboard(00a03ef4-b23b-4678-84c1-775a17bac837)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9ck82" podUID="00a03ef4-b23b-4678-84c1-775a17bac837"
	Sep 16 11:17:58 default-k8s-diff-port-006978 kubelet[723]: E0916 11:17:58.932514     723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-shznv" podUID="a7a51241-b731-46a8-abc5-cdbd6bf2d41e"
	Sep 16 11:18:04 default-k8s-diff-port-006978 kubelet[723]: I0916 11:18:04.932094     723 scope.go:117] "RemoveContainer" containerID="621c2035042771cecc1f28b081df131e0827c9a6451de57767c6299f7b3ea789"
	Sep 16 11:18:04 default-k8s-diff-port-006978 kubelet[723]: E0916 11:18:04.932383     723 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-7c96f5b85b-9ck82_kubernetes-dashboard(00a03ef4-b23b-4678-84c1-775a17bac837)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-7c96f5b85b-9ck82" podUID="00a03ef4-b23b-4678-84c1-775a17bac837"
	
	
	==> kubernetes-dashboard [e5d19169d238b57c06dbca10d4ee56c90fe06036b8057ecf0a14e458bc719cc4] <==
	2024/09/16 11:13:49 Using namespace: kubernetes-dashboard
	2024/09/16 11:13:49 Using in-cluster config to connect to apiserver
	2024/09/16 11:13:49 Using secret token for csrf signing
	2024/09/16 11:13:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/16 11:13:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/16 11:13:49 Successful initial request to the apiserver, version: v1.31.1
	2024/09/16 11:13:49 Generating JWE encryption key
	2024/09/16 11:13:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/16 11:13:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/16 11:13:49 Initializing JWE encryption key from synchronized object
	2024/09/16 11:13:49 Creating in-cluster Sidecar client
	2024/09/16 11:13:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:13:49 Serving insecurely on HTTP port: 9090
	2024/09/16 11:14:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:14:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:15:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:15:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:16:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:16:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:17:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:17:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/16 11:13:49 Starting overwatch
	
	
	==> storage-provisioner [60884be84b9066615bf9741565f86d3efb578c13196835196f3e9476d4c6a2a2] <==
	I0916 11:13:42.547192       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0916 11:14:12.553164       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a8afbcb8b3f5e9d33bb313f7bd0bf38677e0c1e5428abe31d56f8f6c5b153276] <==
	I0916 11:14:28.015986       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0916 11:14:28.022904       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0916 11:14:28.022951       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0916 11:14:45.421357       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0916 11:14:45.421574       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-006978_1aaf24ce-5280-4869-a0bb-cd3a4682f141!
	I0916 11:14:45.422074       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"48271e48-bb5a-477f-91cc-b9e1963cd811", APIVersion:"v1", ResourceVersion:"734", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-006978_1aaf24ce-5280-4869-a0bb-cd3a4682f141 became leader
	I0916 11:14:45.522575       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-006978_1aaf24ce-5280-4869-a0bb-cd3a4682f141!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-006978 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:261: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-006978 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error (577.783µs)
helpers_test.go:263: kubectl --context default-k8s-diff-port-006978 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running: fork/exec /usr/local/bin/kubectl: exec format error
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (7.34s)
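
Note on the failure mode: every kubectl invocation in this report exits in well under a millisecond with `fork/exec /usr/local/bin/kubectl: exec format error`. That error comes from the kernel (ENOEXEC) refusing to execute the binary at all, which typically points at a kubectl built for the wrong architecture, or a truncated/corrupt file, rather than anything cluster-side. A minimal triage sketch in Go (a hypothetical standalone helper, not part of this test suite; the path is taken from the failing commands):

// archcheck.go: compare the ELF machine type of the kubectl binary
// against the architecture this process runs on.
package main

import (
	"debug/elf"
	"fmt"
	"os"
	"runtime"
)

func main() {
	const path = "/usr/local/bin/kubectl" // path from the failing commands
	f, err := elf.Open(path)
	if err != nil {
		// A truncated or non-ELF file also produces "exec format error".
		fmt.Fprintf(os.Stderr, "%s is not a readable ELF binary: %v\n", path, err)
		os.Exit(1)
	}
	defer f.Close()
	fmt.Printf("binary machine=%v, host GOARCH=%s\n", f.Machine, runtime.GOARCH)
	// This job runs on linux/amd64, so anything other than EM_X86_64 here
	// would explain the kernel rejecting the exec.
	if runtime.GOARCH == "amd64" && f.Machine != elf.EM_X86_64 {
		fmt.Println("architecture mismatch: reinstall kubectl for linux/amd64")
	}
}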

TestNetworkPlugins/group/calico/NetCatPod (1800.29s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-771611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Non-zero exit: kubectl --context calico-771611 replace --force -f testdata/netcat-deployment.yaml: fork/exec /usr/local/bin/kubectl: exec format error (458.521µs)
net_test.go:151: failed to apply netcat manifest: fork/exec /usr/local/bin/kubectl: exec format error
E0916 11:19:11.541844   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:19:11.548225   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:19:11.559579   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:19:11.581086   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:19:11.622481   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:19:11.703944   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:19:11.865472   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:19:12.187149   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:19:12.829396   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:19:14.111088   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:160: failed waiting for netcat deployment to stabilize: timed out waiting for the condition
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0916 11:34:11.542050   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestNetworkPlugins/group/calico/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: client rate limiter Wait returned an error: context deadline exceeded
net_test.go:163: ***** TestNetworkPlugins/group/calico/NetCatPod: pod "app=netcat" failed to start within 15m0s: context deadline exceeded ****
net_test.go:163: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p calico-771611 -n calico-771611
net_test.go:163: TestNetworkPlugins/group/calico/NetCatPod: showing logs for failed pods as of 2024-09-16 11:48:30.568885115 +0000 UTC m=+5177.551142176
net_test.go:164: failed waiting for netcat pod: app=netcat within 15m0s: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/calico/NetCatPod (1800.29s)
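
The 15m wait above polls the apiserver for a Running pod labelled `app=netcat`; since the `kubectl replace` that should have created the deployment never executed, no such pod can ever appear and the wait runs to its deadline. A sketch of that kind of poll using client-go (illustrative only, not minikube's actual helpers_test implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig; the tests select a per-profile
	// context such as "calico-771611".
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 5s for up to 15m; on timeout this returns the same
	// "context deadline exceeded" reported by net_test.go above.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 15*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("default").List(ctx,
				metav1.ListOptions{LabelSelector: "app=netcat"})
			if err != nil {
				return false, nil // treat list errors as transient and retry
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	fmt.Println("wait result:", err)
}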

TestNetworkPlugins/group/enable-default-cni/NetCatPod (1800.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-771611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Non-zero exit: kubectl --context enable-default-cni-771611 replace --force -f testdata/netcat-deployment.yaml: fork/exec /usr/local/bin/kubectl: exec format error (471.093µs)
net_test.go:151: failed to apply netcat manifest: fork/exec /usr/local/bin/kubectl: exec format error
E0916 11:19:16.672976   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:19:21.795270   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:19:32.036967   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:19:52.518412   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:20:11.323171   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:20:33.480018   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:21:55.401338   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:03.923962   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:03.930403   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:03.941783   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:03.963177   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:04.004587   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:04.086037   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:04.247544   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:04.569326   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:05.211319   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:06.492728   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:08.256844   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:09.054132   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:14.175551   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:24.417435   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:29.777018   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:44.898887   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:52.829672   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:52.836080   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:52.847463   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:52.868882   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:52.910263   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:52.991657   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:53.153348   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:53.474973   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:54.116316   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:55.397995   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:22:57.959918   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:23:03.082082   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:23:13.324274   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:23:25.860967   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:23:33.805563   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:24:11.542424   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:24:14.767901   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:24:39.243061   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:24:47.783895   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:25:36.690292   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:27:03.924529   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:27:08.257180   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:27:29.777001   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:27:31.625373   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:27:52.829271   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:28:20.531926   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:29:11.541737   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:160: failed waiting for netcat deployment to stabilize: timed out waiting for the condition
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0916 11:35:34.604948   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:36:51.325456   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:03.924328   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:08.256486   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:29.776687   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:37:52.829679   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:38:26.987197   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:11.542258   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:39:15.893386   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:42:03.924760   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:42:08.256870   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:42:29.777070   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:42:52.829354   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:44:11.541573   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestNetworkPlugins/group/enable-default-cni/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: client rate limiter Wait returned an error: context deadline exceeded
net_test.go:163: ***** TestNetworkPlugins/group/enable-default-cni/NetCatPod: pod "app=netcat" failed to start within 15m0s: context deadline exceeded ****
net_test.go:163: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p enable-default-cni-771611 -n enable-default-cni-771611
net_test.go:163: TestNetworkPlugins/group/enable-default-cni/NetCatPod: showing logs for failed pods as of 2024-09-16 11:49:14.823110014 +0000 UTC m=+5221.805367090
net_test.go:164: failed waiting for netcat pod: app=netcat within 15m0s: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/enable-default-cni/NetCatPod (1800.29s)

TestNetworkPlugins/group/flannel/NetCatPod (1800.31s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-771611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Non-zero exit: kubectl --context flannel-771611 replace --force -f testdata/netcat-deployment.yaml: fork/exec /usr/local/bin/kubectl: exec format error (484.442µs)
net_test.go:151: failed to apply netcat manifest: fork/exec /usr/local/bin/kubectl: exec format error
E0916 11:47:08.257174   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:47:29.776601   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:160: failed waiting for netcat deployment to stabilize: timed out waiting for the condition
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0916 12:02:08.256928   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:02:29.776693   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:02:38.187666   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:02:52.830047   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:03:24.005504   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestNetworkPlugins/group/flannel/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: client rate limiter Wait returned an error: context deadline exceeded
net_test.go:163: ***** TestNetworkPlugins/group/flannel/NetCatPod: pod "app=netcat" failed to start within 15m0s: context deadline exceeded ****
net_test.go:163: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p flannel-771611 -n flannel-771611
net_test.go:163: TestNetworkPlugins/group/flannel/NetCatPod: showing logs for failed pods as of 2024-09-16 12:17:05.44648752 +0000 UTC m=+6892.428744586
net_test.go:164: failed waiting for netcat pod: app=netcat within 15m0s: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/flannel/NetCatPod (1800.31s)

TestNetworkPlugins/group/bridge/NetCatPod (1800.3s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-771611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Non-zero exit: kubectl --context bridge-771611 replace --force -f testdata/netcat-deployment.yaml: fork/exec /usr/local/bin/kubectl: exec format error (501.211µs)
net_test.go:151: failed to apply netcat manifest: fork/exec /usr/local/bin/kubectl: exec format error
E0916 11:49:11.542111   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:160: failed waiting for netcat deployment to stabilize: timed out waiting for the condition
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0916 12:04:11.541569   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:04:14.535289   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestNetworkPlugins/group/bridge/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: client rate limiter Wait returned an error: context deadline exceeded
net_test.go:163: ***** TestNetworkPlugins/group/bridge/NetCatPod: pod "app=netcat" failed to start within 15m0s: context deadline exceeded ****
net_test.go:163: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p bridge-771611 -n bridge-771611
net_test.go:163: TestNetworkPlugins/group/bridge/NetCatPod: showing logs for failed pods as of 2024-09-16 12:18:58.487164299 +0000 UTC m=+7005.469421380
net_test.go:164: failed waiting for netcat pod: app=netcat within 15m0s: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/bridge/NetCatPod (1800.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (1800.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-771611 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Non-zero exit: kubectl --context custom-flannel-771611 replace --force -f testdata/netcat-deployment.yaml: fork/exec /usr/local/bin/kubectl: exec format error (593.929µs)
net_test.go:151: failed to apply netcat manifest: fork/exec /usr/local/bin/kubectl: exec format error
net_test.go:160: failed waiting for netcat deployment to stabilize: timed out waiting for the condition
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0916 12:05:32.847273   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:06:01.049050   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:07:03.924196   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:07:08.257218   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:07:24.111342   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:07:29.777059   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:07:38.187693   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:07:52.829753   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:08:24.006100   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:08:54.608990   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:09:01.253365   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:09:11.542620   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:09:14.535344   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:09:47.068885   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:10:11.329121   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:10:37.598205   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:11:01.049044   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:11:46.991674   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:12:03.924235   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:12:08.256781   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:12:29.776599   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:12:35.897602   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:12:38.187906   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:12:52.829561   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:13:24.006312   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:14:11.542120   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:14:14.535032   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:16:01.048848   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:17:03.924695   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestNetworkPlugins/group/custom-flannel/NetCatPod: WARNING: pod list for "default" "app=netcat" returned: client rate limiter Wait returned an error: context deadline exceeded
net_test.go:163: ***** TestNetworkPlugins/group/custom-flannel/NetCatPod: pod "app=netcat" failed to start within 15m0s: context deadline exceeded ****
net_test.go:163: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p custom-flannel-771611 -n custom-flannel-771611
net_test.go:163: TestNetworkPlugins/group/custom-flannel/NetCatPod: showing logs for failed pods as of 2024-09-16 12:19:24.14387295 +0000 UTC m=+7031.126130011
net_test.go:164: failed waiting for netcat pod: app=netcat within 15m0s: context deadline exceeded
--- FAIL: TestNetworkPlugins/group/custom-flannel/NetCatPod (1800.29s)


Test pass (227/306)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 35.62
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.99
9 TestDownloadOnly/v1.20.0/DeleteAll 0.48
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.29
12 TestDownloadOnly/v1.31.1/json-events 22.07
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.05
21 TestBinaryMirror 0.74
22 TestOffline 45.35
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 254.8
35 TestAddons/parallel/InspektorGadget 10.67
40 TestAddons/parallel/Headlamp 16.93
41 TestAddons/parallel/CloudSpanner 5.89
43 TestAddons/parallel/NvidiaDevicePlugin 6.5
44 TestAddons/parallel/Yakd 11.81
45 TestAddons/StoppedEnableDisable 6.24
47 TestCertExpiration 211.84
49 TestForceSystemdFlag 31.85
50 TestForceSystemdEnv 34.05
51 TestDockerEnvContainerd 38.64
52 TestKVMDriverInstallOrUpdate 4.37
56 TestErrorSpam/setup 23.63
57 TestErrorSpam/start 0.56
58 TestErrorSpam/status 0.84
59 TestErrorSpam/pause 1.51
60 TestErrorSpam/unpause 1.74
61 TestErrorSpam/stop 1.36
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 40.6
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 5.1
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.22
73 TestFunctional/serial/CacheCmd/cache/add_local 1.93
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
78 TestFunctional/serial/CacheCmd/cache/delete 0.09
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 48.76
83 TestFunctional/serial/LogsCmd 1.33
84 TestFunctional/serial/LogsFileCmd 1.33
87 TestFunctional/parallel/ConfigCmd 0.37
89 TestFunctional/parallel/DryRun 0.42
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 1.14
96 TestFunctional/parallel/AddonsCmd 0.13
99 TestFunctional/parallel/SSHCmd 0.7
100 TestFunctional/parallel/CpCmd 1.79
102 TestFunctional/parallel/FileSync 0.28
103 TestFunctional/parallel/CertSync 1.56
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
111 TestFunctional/parallel/License 0.58
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
125 TestFunctional/parallel/ProfileCmd/profile_list 0.34
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
127 TestFunctional/parallel/Version/short 0.05
128 TestFunctional/parallel/Version/components 0.52
129 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
130 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
131 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
132 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
133 TestFunctional/parallel/ImageCommands/ImageBuild 4.18
134 TestFunctional/parallel/ImageCommands/Setup 1.74
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.29
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.7
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.11
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.45
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.61
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
143 TestFunctional/parallel/MountCmd/specific-port 1.64
144 TestFunctional/parallel/MountCmd/VerifyCleanup 1.71
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 98.23
159 TestMultiControlPlane/serial/DeployApp 30.17
160 TestMultiControlPlane/serial/PingHostFromPods 1.01
161 TestMultiControlPlane/serial/AddWorkerNode 21.01
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.65
164 TestMultiControlPlane/serial/CopyFile 15.63
165 TestMultiControlPlane/serial/StopSecondaryNode 12.5
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.48
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.68
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 106.82
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.47
172 TestMultiControlPlane/serial/StopCluster 35.67
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.46
175 TestMultiControlPlane/serial/AddSecondaryNode 34.24
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.65
180 TestJSONOutput/start/Command 49.07
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.68
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.59
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.66
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.2
205 TestKicCustomNetwork/create_custom_network 33.57
206 TestKicCustomNetwork/use_default_bridge_network 23.23
207 TestKicExistingNetwork 23.13
208 TestKicCustomSubnet 23.72
209 TestKicStaticIP 24.16
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 48.25
214 TestMountStart/serial/StartWithMountFirst 5.83
215 TestMountStart/serial/VerifyMountFirst 0.23
216 TestMountStart/serial/StartWithMountSecond 5.79
217 TestMountStart/serial/VerifyMountSecond 0.25
218 TestMountStart/serial/DeleteFirst 1.61
219 TestMountStart/serial/VerifyMountPostDelete 0.24
220 TestMountStart/serial/Stop 1.17
221 TestMountStart/serial/RestartStopped 7.49
222 TestMountStart/serial/VerifyMountPostStop 0.24
225 TestMultiNode/serial/FreshStart2Nodes 55.37
226 TestMultiNode/serial/DeployApp2Nodes 16.68
227 TestMultiNode/serial/PingHostFrom2Pods 0.69
228 TestMultiNode/serial/AddNode 13.4
230 TestMultiNode/serial/ProfileList 0.29
231 TestMultiNode/serial/CopyFile 8.91
232 TestMultiNode/serial/StopNode 2.09
234 TestMultiNode/serial/RestartKeepsNodes 89.35
236 TestMultiNode/serial/StopMultiNode 23.78
238 TestMultiNode/serial/ValidateNameConflict 22.07
243 TestPreload 116.67
245 TestScheduledStopUnix 96.34
248 TestInsufficientStorage 9.65
249 TestRunningBinaryUpgrade 100.91
252 TestMissingContainerUpgrade 166.35
253 TestStoppedBinaryUpgrade/Setup 2.45
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
256 TestNoKubernetes/serial/StartWithK8s 28.72
257 TestStoppedBinaryUpgrade/Upgrade 142.22
258 TestNoKubernetes/serial/StartWithStopK8s 23.56
259 TestNoKubernetes/serial/Start 4.71
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
261 TestNoKubernetes/serial/ProfileList 0.8
262 TestNoKubernetes/serial/Stop 1.3
263 TestNoKubernetes/serial/StartNoArgs 7.76
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
273 TestPause/serial/Start 39.57
274 TestPause/serial/SecondStartNoReconfiguration 5.26
275 TestPause/serial/Pause 0.68
276 TestPause/serial/VerifyStatus 0.28
277 TestPause/serial/Unpause 0.58
278 TestPause/serial/PauseAgain 0.81
279 TestPause/serial/DeletePaused 10.6
280 TestPause/serial/VerifyDeletedResources 0.79
281 TestStoppedBinaryUpgrade/MinikubeLogs 1.29
289 TestNetworkPlugins/group/false 4.04
294 TestStartStop/group/old-k8s-version/serial/FirstStart 134.32
296 TestStartStop/group/no-preload/serial/FirstStart 66.25
299 TestStartStop/group/no-preload/serial/Stop 5.72
300 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
301 TestStartStop/group/no-preload/serial/SecondStart 262.81
304 TestStartStop/group/old-k8s-version/serial/Stop 5.73
305 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
308 TestStartStop/group/embed-certs/serial/FirstStart 43.95
311 TestStartStop/group/embed-certs/serial/Stop 5.72
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
313 TestStartStop/group/embed-certs/serial/SecondStart 262.99
315 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.63
318 TestStartStop/group/default-k8s-diff-port/serial/Stop 5.73
319 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
320 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.08
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
323 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
324 TestStartStop/group/no-preload/serial/Pause 2.63
326 TestStartStop/group/newest-cni/serial/FirstStart 26.04
327 TestStartStop/group/newest-cni/serial/DeployApp 0
328 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.87
329 TestStartStop/group/newest-cni/serial/Stop 1.19
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
331 TestStartStop/group/newest-cni/serial/SecondStart 13.08
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
335 TestStartStop/group/newest-cni/serial/Pause 2.59
336 TestNetworkPlugins/group/auto/Start 44.34
337 TestNetworkPlugins/group/auto/KubeletFlags 0.26
339 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
341 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
342 TestStartStop/group/embed-certs/serial/Pause 2.64
343 TestNetworkPlugins/group/kindnet/Start 41.29
344 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
347 TestStartStop/group/old-k8s-version/serial/Pause 2.64
348 TestNetworkPlugins/group/calico/Start 61.27
349 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
350 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
352 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
354 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
355 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.63
356 TestNetworkPlugins/group/enable-default-cni/Start 62.62
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/calico/KubeletFlags 0.27
360 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
362 TestNetworkPlugins/group/flannel/Start 44.61
363 TestNetworkPlugins/group/flannel/ControllerPod 6.01
364 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
366 TestNetworkPlugins/group/bridge/Start 60.26
367 TestNetworkPlugins/group/custom-flannel/Start 40
368 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
TestDownloadOnly/v1.20.0/json-events (35.62s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-297488 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-297488 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (35.62106488s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (35.62s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.99s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-297488
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-297488: exit status 85 (991.672153ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-297488 | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |          |
	|         | -p download-only-297488        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:22:13
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:22:13.089539   11201 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:22:13.089662   11201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:13.089668   11201 out.go:358] Setting ErrFile to fd 2...
	I0916 10:22:13.089673   11201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:13.089843   11201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	W0916 10:22:13.089987   11201 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19651-3687/.minikube/config/config.json: open /home/jenkins/minikube-integration/19651-3687/.minikube/config/config.json: no such file or directory
	I0916 10:22:13.090550   11201 out.go:352] Setting JSON to true
	I0916 10:22:13.091526   11201 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":277,"bootTime":1726481856,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:22:13.091620   11201 start.go:139] virtualization: kvm guest
	I0916 10:22:13.094204   11201 out.go:97] [download-only-297488] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0916 10:22:13.094306   11201 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball: no such file or directory
	I0916 10:22:13.094313   11201 notify.go:220] Checking for updates...
	I0916 10:22:13.095890   11201 out.go:169] MINIKUBE_LOCATION=19651
	I0916 10:22:13.097423   11201 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:22:13.099066   11201 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:22:13.100646   11201 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:22:13.101961   11201 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 10:22:13.104899   11201 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 10:22:13.105088   11201 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:22:13.125607   11201 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:22:13.125671   11201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:22:13.502023   11201 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:22:13.493173749 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:22:13.502129   11201 docker.go:318] overlay module found
	I0916 10:22:13.503942   11201 out.go:97] Using the docker driver based on user configuration
	I0916 10:22:13.503973   11201 start.go:297] selected driver: docker
	I0916 10:22:13.503980   11201 start.go:901] validating driver "docker" against <nil>
	I0916 10:22:13.504088   11201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:22:13.549914   11201 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:22:13.541502137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:22:13.550136   11201 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:22:13.550886   11201 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0916 10:22:13.551102   11201 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:22:13.553150   11201 out.go:169] Using Docker driver with root privileges
	I0916 10:22:13.554415   11201 cni.go:84] Creating CNI manager for ""
	I0916 10:22:13.554467   11201 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:22:13.554479   11201 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:22:13.554545   11201 start.go:340] cluster config:
	{Name:download-only-297488 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-297488 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:13.555828   11201 out.go:97] Starting "download-only-297488" primary control-plane node in "download-only-297488" cluster
	I0916 10:22:13.555845   11201 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:22:13.557009   11201 out.go:97] Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:22:13.557030   11201 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0916 10:22:13.557136   11201 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:22:13.572759   11201 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:22:13.572939   11201 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:22:13.573025   11201 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:22:13.667852   11201 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0916 10:22:13.667884   11201 cache.go:56] Caching tarball of preloaded images
	I0916 10:22:13.668046   11201 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0916 10:22:13.670421   11201 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0916 10:22:13.670453   11201 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0916 10:22:13.771203   11201 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0916 10:22:34.849950   11201 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0916 10:22:34.850052   11201 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0916 10:22:35.773119   11201 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0916 10:22:35.773509   11201 profile.go:143] Saving config to /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/download-only-297488/config.json ...
	I0916 10:22:35.773548   11201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/download-only-297488/config.json: {Name:mkb641e095274eeb20743844016f81e47a039574 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0916 10:22:35.773762   11201 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0916 10:22:35.773993   11201 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-297488 host does not exist
	  To start a cluster, run: "minikube start -p download-only-297488"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.99s)

TestDownloadOnly/v1.20.0/DeleteAll (0.48s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.48s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.29s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-297488
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.29s)

TestDownloadOnly/v1.31.1/json-events (22.07s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-024449 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-024449 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (22.069178137s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (22.07s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-024449
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-024449: exit status 85 (59.599091ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-297488 | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-297488        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| delete  | -p download-only-297488        | download-only-297488 | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC | 16 Sep 24 10:22 UTC |
	| start   | -o=json --download-only        | download-only-024449 | jenkins | v1.34.0 | 16 Sep 24 10:22 UTC |                     |
	|         | -p download-only-024449        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/16 10:22:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0916 10:22:50.473314   11669 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:22:50.473426   11669 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:50.473437   11669 out.go:358] Setting ErrFile to fd 2...
	I0916 10:22:50.473442   11669 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:22:50.473648   11669 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:22:50.474203   11669 out.go:352] Setting JSON to true
	I0916 10:22:50.475127   11669 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":314,"bootTime":1726481856,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:22:50.475236   11669 start.go:139] virtualization: kvm guest
	I0916 10:22:50.478726   11669 out.go:97] [download-only-024449] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:22:50.478917   11669 notify.go:220] Checking for updates...
	I0916 10:22:50.480294   11669 out.go:169] MINIKUBE_LOCATION=19651
	I0916 10:22:50.481937   11669 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:22:50.483348   11669 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:22:50.484900   11669 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:22:50.486353   11669 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0916 10:22:50.488810   11669 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0916 10:22:50.489141   11669 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:22:50.510767   11669 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:22:50.510841   11669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:22:50.554521   11669 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:22:50.545989444 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:22:50.554634   11669 docker.go:318] overlay module found
	I0916 10:22:50.556581   11669 out.go:97] Using the docker driver based on user configuration
	I0916 10:22:50.556612   11669 start.go:297] selected driver: docker
	I0916 10:22:50.556619   11669 start.go:901] validating driver "docker" against <nil>
	I0916 10:22:50.556739   11669 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:22:50.604179   11669 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-16 10:22:50.595799954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:22:50.604328   11669 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0916 10:22:50.604819   11669 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0916 10:22:50.604969   11669 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0916 10:22:50.606786   11669 out.go:169] Using Docker driver with root privileges
	I0916 10:22:50.608040   11669 cni.go:84] Creating CNI manager for ""
	I0916 10:22:50.608093   11669 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0916 10:22:50.608102   11669 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0916 10:22:50.608170   11669 start.go:340] cluster config:
	{Name:download-only-024449 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-024449 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:22:50.609424   11669 out.go:97] Starting "download-only-024449" primary control-plane node in "download-only-024449" cluster
	I0916 10:22:50.609441   11669 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0916 10:22:50.610503   11669 out.go:97] Pulling base image v0.0.45-1726358845-19644 ...
	I0916 10:22:50.610524   11669 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:22:50.610627   11669 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0916 10:22:50.626589   11669 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0916 10:22:50.626681   11669 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0916 10:22:50.626696   11669 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0916 10:22:50.626700   11669 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0916 10:22:50.626708   11669 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0916 10:22:51.049886   11669 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	I0916 10:22:51.049915   11669 cache.go:56] Caching tarball of preloaded images
	I0916 10:22:51.050085   11669 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0916 10:22:51.051958   11669 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0916 10:22:51.051977   11669 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4 ...
	I0916 10:22:51.598455   11669 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:6356ceed7fe748d0ea8e34a3342d6f3c -> /home/jenkins/minikube-integration/19651-3687/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-024449 host does not exist
	  To start a cluster, run: "minikube start -p download-only-024449"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-024449
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.05s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-065822 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-065822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-065822
--- PASS: TestDownloadOnlyKic (1.05s)

TestBinaryMirror (0.74s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-727123 --alsologtostderr --binary-mirror http://127.0.0.1:34779 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-727123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-727123
--- PASS: TestBinaryMirror (0.74s)

TestOffline (45.35s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-294459 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-294459 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (43.093930001s)
helpers_test.go:175: Cleaning up "offline-containerd-294459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-294459
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-294459: (2.256936949s)
--- PASS: TestOffline (45.35s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-191972
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-191972: exit status 85 (50.12753ms)

-- stdout --
	* Profile "addons-191972" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-191972"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-191972
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-191972: exit status 85 (52.158262ms)

-- stdout --
	* Profile "addons-191972" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-191972"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (254.8s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-191972 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-191972 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (4m14.796389742s)
--- PASS: TestAddons/Setup (254.80s)

TestAddons/parallel/InspektorGadget (10.67s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rwwbs" [62b2176c-9dcb-4741-bd18-81ab2a2303f2] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004500681s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-191972
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-191972: (5.662613679s)
--- PASS: TestAddons/parallel/InspektorGadget (10.67s)

TestAddons/parallel/Headlamp (16.93s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-191972 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-191972 --alsologtostderr -v=1: (1.28292795s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-6zpfv" [59a7200b-607c-4d45-8e9f-2de2431e2196] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-6zpfv" [59a7200b-607c-4d45-8e9f-2de2431e2196] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-6zpfv" [59a7200b-607c-4d45-8e9f-2de2431e2196] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003496847s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-191972 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-191972 addons disable headlamp --alsologtostderr -v=1: (5.642149565s)
--- PASS: TestAddons/parallel/Headlamp (16.93s)

TestAddons/parallel/CloudSpanner (5.89s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-8tnxp" [3be99604-1ee4-4c70-96c9-466cd2d9349f] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.078870672s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-191972
--- PASS: TestAddons/parallel/CloudSpanner (5.89s)

TestAddons/parallel/NvidiaDevicePlugin (6.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vpb85" [14ca6c72-b73b-4254-910a-0b876ca73f90] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00342612s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-191972
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

TestAddons/parallel/Yakd (11.81s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-gsg67" [aa381c71-e508-46bc-afd6-1c593c0dc6f8] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002674735s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-191972 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-191972 addons disable yakd --alsologtostderr -v=1: (5.805443516s)
--- PASS: TestAddons/parallel/Yakd (11.81s)

TestAddons/StoppedEnableDisable (6.24s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-191972
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-191972: (6.001270761s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-191972
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-191972
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-191972
--- PASS: TestAddons/StoppedEnableDisable (6.24s)

TestCertExpiration (211.84s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-021107 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-021107 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (24.916256605s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-021107 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-021107 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (4.746061807s)
helpers_test.go:175: Cleaning up "cert-expiration-021107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-021107
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-021107: (2.174536908s)
--- PASS: TestCertExpiration (211.84s)

TestForceSystemdFlag (31.85s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-917705 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-917705 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (29.493711725s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-917705 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-917705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-917705
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-917705: (1.990356047s)
--- PASS: TestForceSystemdFlag (31.85s)

TestForceSystemdEnv (34.05s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-846070 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0916 11:07:29.777023   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-846070 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (31.748704522s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-846070 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-846070" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-846070
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-846070: (2.013039068s)
--- PASS: TestForceSystemdEnv (34.05s)

TestDockerEnvContainerd (38.64s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-042187 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-042187 --driver=docker  --container-runtime=containerd: (23.180196286s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-042187"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ctUs3FZ3HijK/agent.37817" SSH_AGENT_PID="37818" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ctUs3FZ3HijK/agent.37817" SSH_AGENT_PID="37818" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ctUs3FZ3HijK/agent.37817" SSH_AGENT_PID="37818" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.743704168s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ctUs3FZ3HijK/agent.37817" SSH_AGENT_PID="37818" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-042187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-042187
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-042187: (1.806549415s)
--- PASS: TestDockerEnvContainerd (38.64s)

TestKVMDriverInstallOrUpdate (4.37s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.37s)

TestErrorSpam/setup (23.63s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-421019 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-421019 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-421019 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-421019 --driver=docker  --container-runtime=containerd: (23.627432286s)
error_spam_test.go:91: acceptable stderr: "E0916 10:40:06.057391   38583 start.go:291] kubectl info: exec: fork/exec /usr/local/bin/kubectl: exec format error"
--- PASS: TestErrorSpam/setup (23.63s)

TestErrorSpam/start (0.56s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 start --dry-run
--- PASS: TestErrorSpam/start (0.56s)

TestErrorSpam/status (0.84s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 status
--- PASS: TestErrorSpam/status (0.84s)

TestErrorSpam/pause (1.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 pause
--- PASS: TestErrorSpam/pause (1.51s)

TestErrorSpam/unpause (1.74s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

TestErrorSpam/stop (1.36s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 stop: (1.183808218s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421019 --log_dir /tmp/nospam-421019 stop
--- PASS: TestErrorSpam/stop (1.36s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19651-3687/.minikube/files/etc/test/nested/copy/11189/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)
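
For reference, this exercises minikube's file-sync convention: a file staged under $MINIKUBE_HOME/files/<path> on the host is copied to /<path> inside the node when the cluster starts. A minimal hand-run sketch (the pid-derived directory 11189 and the file contents are specific to this run):

    # stage a file on the host; the path mirrors where it should land in the node
    mkdir -p ~/.minikube/files/etc/test/nested/copy/11189
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/11189/hosts
    # after the next start, verify from inside the node (this is what FileSync, further below, asserts)
    out/minikube-linux-amd64 -p functional-016570 ssh "sudo cat /etc/test/nested/copy/11189/hosts"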

TestFunctional/serial/StartWithProxy (40.6s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-016570 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-016570 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (40.603241743s)
--- PASS: TestFunctional/serial/StartWithProxy (40.60s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.1s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-016570 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-016570 --alsologtostderr -v=8: (5.099754787s)
functional_test.go:663: soft start took 5.100681994s for "functional-016570" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.10s)
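
A "soft start" is simply start re-run against a profile whose cluster is already up; minikube reuses the existing node and cluster state, which is why this finishes in about 5s versus about 40s for the cold start above. Sketch, assuming the profile is still running:

    out/minikube-linux-amd64 start -p functional-016570 --alsologtostderr -v=8   # no re-provisioning, just reconciliation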

TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 cache add registry.k8s.io/pause:3.1: (1.07522833s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 cache add registry.k8s.io/pause:3.3: (1.186640948s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)

TestFunctional/serial/CacheCmd/cache/add_local (1.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-016570 /tmp/TestFunctionalserialCacheCmdcacheadd_local2345375152/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 cache add minikube-local-cache-test:functional-016570
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 cache add minikube-local-cache-test:functional-016570: (1.585925923s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 cache delete minikube-local-cache-test:functional-016570
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-016570
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.93s)
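
The local-image path builds a throwaway image against the host's Docker daemon, loads it into the cluster through the cache, then removes it from both places. A hand-run sketch (the build-context directory under /tmp is generated per run; any context containing a Dockerfile works):

    docker build -t minikube-local-cache-test:functional-016570 <build-context>
    out/minikube-linux-amd64 -p functional-016570 cache add minikube-local-cache-test:functional-016570
    out/minikube-linux-amd64 -p functional-016570 cache delete minikube-local-cache-test:functional-016570
    docker rmi minikube-local-cache-test:functional-016570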

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016570 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (267.605359ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)
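
The reload sequence can be replayed by hand: remove a cached image inside the node, confirm crictl no longer finds it (the non-zero exit captured above), then let "cache reload" push every image in the cache back into the node:

    out/minikube-linux-amd64 -p functional-016570 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-016570 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    out/minikube-linux-amd64 -p functional-016570 cache reload
    out/minikube-linux-amd64 -p functional-016570 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again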

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 kubectl -- --context functional-016570 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-016570 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)
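
These two tests reach kubectl through both supported routes: the wrapper form, where minikube dispatches a version-matched kubectl with everything after "--" passed through, and the direct form, invoking the binary itself (out/kubectl is the CI build tree's copy; a normal install would call plain kubectl). Sketch:

    out/minikube-linux-amd64 -p functional-016570 kubectl -- --context functional-016570 get pods
    out/kubectl --context functional-016570 get pods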

TestFunctional/serial/ExtraConfig (48.76s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-016570 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-016570 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (48.762116579s)
functional_test.go:761: restart took 48.762227275s for "functional-016570" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (48.76s)
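
--extra-config takes the form component.key=value; here it enables the NamespaceAutoProvision admission plugin on the apiserver, and --wait=all blocks until every verified component reports healthy again, which accounts for most of the 48s. Sketch:

    out/minikube-linux-amd64 start -p functional-016570 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all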

TestFunctional/serial/LogsCmd (1.33s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 logs: (1.330471066s)
--- PASS: TestFunctional/serial/LogsCmd (1.33s)

TestFunctional/serial/LogsFileCmd (1.33s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 logs --file /tmp/TestFunctionalserialLogsFileCmd1564633368/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 logs --file /tmp/TestFunctionalserialLogsFileCmd1564633368/001/logs.txt: (1.328531768s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.33s)
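
Both log tests gather the same output; --file just redirects it to disk instead of stdout. Sketch (the destination path below is arbitrary; the test used a per-run temp directory):

    out/minikube-linux-amd64 -p functional-016570 logs                        # print to stdout
    out/minikube-linux-amd64 -p functional-016570 logs --file /tmp/logs.txt   # write to a file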

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016570 config get cpus: exit status 14 (75.75657ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016570 config get cpus: exit status 14 (61.210915ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
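
The cycle being asserted: "config get" on an unset key exits 14 with the error captured above, a set/get round-trips the value, and unset restores the exit-14 state. Sketch:

    out/minikube-linux-amd64 -p functional-016570 config get cpus     # exit 14 while unset
    out/minikube-linux-amd64 -p functional-016570 config set cpus 2
    out/minikube-linux-amd64 -p functional-016570 config get cpus     # prints 2
    out/minikube-linux-amd64 -p functional-016570 config unset cpus   # back to exit 14 on get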

TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-016570 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-016570 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (173.815528ms)
-- stdout --
	* [functional-016570] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0916 10:42:07.727134   54948 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:42:07.727242   54948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:07.727252   54948 out.go:358] Setting ErrFile to fd 2...
	I0916 10:42:07.727258   54948 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:07.727431   54948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:42:07.727977   54948 out.go:352] Setting JSON to false
	I0916 10:42:07.728898   54948 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1472,"bootTime":1726481856,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:42:07.728963   54948 start.go:139] virtualization: kvm guest
	I0916 10:42:07.731723   54948 out.go:177] * [functional-016570] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 10:42:07.733364   54948 notify.go:220] Checking for updates...
	I0916 10:42:07.733415   54948 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:42:07.734761   54948 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:42:07.735961   54948 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:42:07.737261   54948 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:42:07.738485   54948 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:42:07.739841   54948 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:42:07.741812   54948 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:42:07.742397   54948 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:42:07.772584   54948 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:42:07.772688   54948 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:42:07.845464   54948 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:42:07.829570845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:42:07.845624   54948 docker.go:318] overlay module found
	I0916 10:42:07.848475   54948 out.go:177] * Using the docker driver based on existing profile
	I0916 10:42:07.850260   54948 start.go:297] selected driver: docker
	I0916 10:42:07.850279   54948 start.go:901] validating driver "docker" against &{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:42:07.850395   54948 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:42:07.852839   54948 out.go:201] 
	W0916 10:42:07.854153   54948 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0916 10:42:07.855250   54948 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-016570 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.42s)
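
--dry-run runs full flag validation against the existing profile without touching the cluster, so an impossible request fails fast: 250MB is below the 1800MB usable minimum and exits 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the follow-up dry run without the memory flag succeeds. Sketch:

    out/minikube-linux-amd64 start -p functional-016570 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
    echo $?   # 23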

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-016570 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-016570 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (199.663511ms)
-- stdout --
	* [functional-016570] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0916 10:42:07.559528   54721 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:42:07.559657   54721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:07.559668   54721 out.go:358] Setting ErrFile to fd 2...
	I0916 10:42:07.559675   54721 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:42:07.560109   54721 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:42:07.560861   54721 out.go:352] Setting JSON to false
	I0916 10:42:07.562260   54721 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":1472,"bootTime":1726481856,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 10:42:07.562385   54721 start.go:139] virtualization: kvm guest
	I0916 10:42:07.565148   54721 out.go:177] * [functional-016570] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0916 10:42:07.566989   54721 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 10:42:07.567093   54721 notify.go:220] Checking for updates...
	I0916 10:42:07.570586   54721 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 10:42:07.572083   54721 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 10:42:07.573553   54721 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 10:42:07.575265   54721 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 10:42:07.577256   54721 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 10:42:07.580026   54721 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:42:07.580795   54721 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 10:42:07.613246   54721 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 10:42:07.613345   54721 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:42:07.672533   54721 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-16 10:42:07.663072094 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:42:07.672688   54721 docker.go:318] overlay module found
	I0916 10:42:07.675081   54721 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0916 10:42:07.676799   54721 start.go:297] selected driver: docker
	I0916 10:42:07.676820   54721 start.go:901] validating driver "docker" against &{Name:functional-016570 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-016570 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0916 10:42:07.676946   54721 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 10:42:07.679398   54721 out.go:201] 
	W0916 10:42:07.681145   54721 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0916 10:42:07.682764   54721 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)
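
status supports plain output, a Go-template selector via -f, and JSON via -o. In the template only the field names (.Host, .Kubelet, .APIServer, .Kubeconfig) are interpreted; the surrounding label text (including the test's "kublet" spelling above) is echoed verbatim. Sketch:

    out/minikube-linux-amd64 -p functional-016570 status
    out/minikube-linux-amd64 -p functional-016570 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
    out/minikube-linux-amd64 -p functional-016570 status -o json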

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (1.79s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh -n functional-016570 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 cp functional-016570:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2928196455/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh -n functional-016570 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh -n functional-016570 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.79s)
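
cp covers three directions, each verified with an ssh cat: host to node, node back to host, and host to an arbitrary node path that gets created on the fly. Sketch (the host-side destination below is arbitrary; the test used a per-run temp directory):

    out/minikube-linux-amd64 -p functional-016570 cp testdata/cp-test.txt /home/docker/cp-test.txt            # host -> node
    out/minikube-linux-amd64 -p functional-016570 cp functional-016570:/home/docker/cp-test.txt ./cp-test.txt # node -> host
    out/minikube-linux-amd64 -p functional-016570 ssh -n functional-016570 "sudo cat /home/docker/cp-test.txt"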

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/11189/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "sudo cat /etc/test/nested/copy/11189/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.56s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/11189.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "sudo cat /etc/ssl/certs/11189.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/11189.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "sudo cat /usr/share/ca-certificates/11189.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/111892.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "sudo cat /etc/ssl/certs/111892.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/111892.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "sudo cat /usr/share/ca-certificates/111892.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.56s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016570 ssh "sudo systemctl is-active docker": exit status 1 (271.306147ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016570 ssh "sudo systemctl is-active crio": exit status 1 (243.277426ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
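
With containerd as the active runtime, the docker and crio units must be stopped inside the node. systemctl is-active exits non-zero for an inactive unit (status 3, visible in the ssh stderr above), so the check is that each command fails while printing "inactive":

    out/minikube-linux-amd64 -p functional-016570 ssh "sudo systemctl is-active docker"   # prints "inactive", non-zero exit
    out/minikube-linux-amd64 -p functional-016570 ssh "sudo systemctl is-active crio"     # likewise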

TestFunctional/parallel/License (0.58s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.58s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-016570 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-016570 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-016570 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 56718: os: process already finished
helpers_test.go:508: unable to kill pid 56453: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-016570 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-016570 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "290.841382ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "50.668275ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "301.329692ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "46.453887ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)
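
profile list has a full mode that probes each cluster's status and light variants that skip those probes, which is presumably why the light runs return in roughly 50ms against roughly 300ms for the full ones above. Sketch:

    out/minikube-linux-amd64 profile list                    # full table, probes each cluster
    out/minikube-linux-amd64 profile list -l                 # light variant
    out/minikube-linux-amd64 profile list -o json            # machine-readable
    out/minikube-linux-amd64 profile list -o json --light    # machine-readable, no status probes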

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.52s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-016570 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-016570
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-016570
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-016570 image ls --format short --alsologtostderr:
I0916 10:42:25.014541   63931 out.go:345] Setting OutFile to fd 1 ...
I0916 10:42:25.014668   63931 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:25.014676   63931 out.go:358] Setting ErrFile to fd 2...
I0916 10:42:25.014680   63931 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:25.014902   63931 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
I0916 10:42:25.015516   63931 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 10:42:25.015644   63931 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 10:42:25.016115   63931 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
I0916 10:42:25.036066   63931 ssh_runner.go:195] Run: systemctl --version
I0916 10:42:25.036111   63931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
I0916 10:42:25.056369   63931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
I0916 10:42:25.151838   63931 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
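
image ls renders the same containerd image store four ways: short (above) is one image ref per line, table (next block) is the boxed summary, and json/yaml include digests and sizes. Sketch:

    out/minikube-linux-amd64 -p functional-016570 image ls --format short
    out/minikube-linux-amd64 -p functional-016570 image ls --format table
    out/minikube-linux-amd64 -p functional-016570 image ls --format json
    out/minikube-linux-amd64 -p functional-016570 image ls --format yaml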

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-016570 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-016570  | sha256:9056ab | 2.37MB |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:6bab77 | 28MB   |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:9aa1fa | 20.2MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:175ffd | 26.2MB |
| docker.io/library/minikube-local-cache-test | functional-016570  | sha256:fb7b86 | 991B   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:c69fa2 | 18.6MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:2e96e5 | 56.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:60c005 | 30.2MB |
| registry.k8s.io/pause                       | 3.10               | sha256:873ed7 | 320kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:129686 | 36.8MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-016570 image ls --format table --alsologtostderr:
I0916 10:42:25.683880   64334 out.go:345] Setting OutFile to fd 1 ...
I0916 10:42:25.684157   64334 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:25.684168   64334 out.go:358] Setting ErrFile to fd 2...
I0916 10:42:25.684174   64334 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:25.684387   64334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
I0916 10:42:25.685042   64334 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 10:42:25.685167   64334 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 10:42:25.685602   64334 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
I0916 10:42:25.703489   64334 ssh_runner.go:195] Run: systemctl --version
I0916 10:42:25.703548   64334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
I0916 10:42:25.721045   64334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
I0916 10:42:25.816177   64334 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-016570 image ls --format json --alsologtostderr:
[{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"56909194"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-016570"],"size":"2372971"},{"id":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"18562039"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"30211884"},{"id":"sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"20177215"},{"id":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"320368"},{"id":"sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"36793393"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:fb7b86e66be1365d029dc7b49692ea0498bad2c6a64a485dd6553014f565a99c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-016570"],"size":"991"},{"id":"sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"28047142"},{"id":"sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"26221554"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-016570 image ls --format json --alsologtostderr:
I0916 10:42:25.467076   64210 out.go:345] Setting OutFile to fd 1 ...
I0916 10:42:25.467228   64210 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:25.467239   64210 out.go:358] Setting ErrFile to fd 2...
I0916 10:42:25.467245   64210 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:25.467488   64210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
I0916 10:42:25.468189   64210 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 10:42:25.468317   64210 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 10:42:25.468749   64210 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
I0916 10:42:25.488406   64210 ssh_runner.go:195] Run: systemctl --version
I0916 10:42:25.488465   64210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
I0916 10:42:25.505770   64210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
I0916 10:42:25.600405   64210 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
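The JSON listing above pipes cleanly into jq for ad-hoc filtering (a sketch, assuming jq is installed on the host; a YAML-aware tool can do the same for the YAML listing in the next test):

$ out/minikube-linux-amd64 -p functional-016570 image ls --format json | jq -r '.[].repoTags[]?'
# prints one repo tag per line; images with an empty repoTags list are skipped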
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-016570 image ls --format yaml --alsologtostderr:
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-016570
size: "2372971"
- id: sha256:12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "36793393"
- id: sha256:fb7b86e66be1365d029dc7b49692ea0498bad2c6a64a485dd6553014f565a99c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-016570
size: "991"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "26221554"
- id: sha256:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "20177215"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "320368"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "18562039"
- id: sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "56909194"
- id: sha256:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "30211884"
- id: sha256:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "28047142"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-016570 image ls --format yaml --alsologtostderr:
I0916 10:42:25.235876   64034 out.go:345] Setting OutFile to fd 1 ...
I0916 10:42:25.236126   64034 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:25.236134   64034 out.go:358] Setting ErrFile to fd 2...
I0916 10:42:25.236139   64034 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:25.236327   64034 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
I0916 10:42:25.236933   64034 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 10:42:25.237025   64034 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 10:42:25.237469   64034 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
I0916 10:42:25.261941   64034 ssh_runner.go:195] Run: systemctl --version
I0916 10:42:25.262005   64034 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
I0916 10:42:25.280727   64034 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
I0916 10:42:25.372603   64034 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
TestFunctional/parallel/ImageCommands/ImageBuild (4.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016570 ssh pgrep buildkitd: exit status 1 (247.265962ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image build -t localhost/my-image:functional-016570 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 image build -t localhost/my-image:functional-016570 testdata/build --alsologtostderr: (3.727424806s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-016570 image build -t localhost/my-image:functional-016570 testdata/build --alsologtostderr:
I0916 10:42:25.603690   64276 out.go:345] Setting OutFile to fd 1 ...
I0916 10:42:25.603872   64276 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:25.603883   64276 out.go:358] Setting ErrFile to fd 2...
I0916 10:42:25.603887   64276 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0916 10:42:25.604053   64276 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
I0916 10:42:25.604650   64276 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 10:42:25.605206   64276 config.go:182] Loaded profile config "functional-016570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0916 10:42:25.605642   64276 cli_runner.go:164] Run: docker container inspect functional-016570 --format={{.State.Status}}
I0916 10:42:25.624932   64276 ssh_runner.go:195] Run: systemctl --version
I0916 10:42:25.624995   64276 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-016570
I0916 10:42:25.643121   64276 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/functional-016570/id_rsa Username:docker}
I0916 10:42:25.736016   64276 build_images.go:161] Building image from path: /tmp/build.2371873075.tar
I0916 10:42:25.736100   64276 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0916 10:42:25.745028   64276 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2371873075.tar
I0916 10:42:25.748258   64276 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2371873075.tar: stat -c "%s %y" /var/lib/minikube/build/build.2371873075.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2371873075.tar': No such file or directory
I0916 10:42:25.748287   64276 ssh_runner.go:362] scp /tmp/build.2371873075.tar --> /var/lib/minikube/build/build.2371873075.tar (3072 bytes)
I0916 10:42:25.772262   64276 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2371873075
I0916 10:42:25.780850   64276 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2371873075 -xf /var/lib/minikube/build/build.2371873075.tar
I0916 10:42:25.789750   64276 containerd.go:394] Building image: /var/lib/minikube/build/build.2371873075
I0916 10:42:25.789827   64276 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2371873075 --local dockerfile=/var/lib/minikube/build/build.2371873075 --output type=image,name=localhost/my-image:functional-016570
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.5s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.6s
#6 [2/3] RUN true
#6 DONE 0.9s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:09ee2cd516cf6573cf9b3a4dbabbf383cae4d5c4dd418e6453ddc0fd214d52c1 done
#8 exporting config sha256:9d0b23f97fa55a94d55a6e3c8e6fb368b23e3cf63ca877fc70708a413ac19b5e done
#8 naming to localhost/my-image:functional-016570 done
#8 DONE 0.1s
I0916 10:42:29.262165   64276 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2371873075 --local dockerfile=/var/lib/minikube/build/build.2371873075 --output type=image,name=localhost/my-image:functional-016570: (3.472302884s)
I0916 10:42:29.262244   64276 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2371873075
I0916 10:42:29.271459   64276 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2371873075.tar
I0916 10:42:29.279759   64276 build_images.go:217] Built localhost/my-image:functional-016570 from /tmp/build.2371873075.tar
I0916 10:42:29.279789   64276 build_images.go:133] succeeded building to: functional-016570
I0916 10:42:29.279793   64276 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image ls
E0916 10:42:29.777028   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:29.783825   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:29.795198   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:29.816631   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:29.858044   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:29.939461   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:30.100989   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:30.422759   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:31.064164   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:42:32.345796   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.18s)
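The buildkit steps #5-#7 above imply a three-instruction Dockerfile; a plausible reconstruction of testdata/build/Dockerfile (inferred from the log, not copied from the repo) is:

FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /

Rebuilding it by hand uses the same subcommand the test drives:

$ out/minikube-linux-amd64 -p functional-016570 image build -t localhost/my-image:functional-016570 testdata/build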
TestFunctional/parallel/ImageCommands/Setup (1.74s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.723668147s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-016570
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.74s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image load --daemon kicbase/echo-server:functional-016570 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.7s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image load --daemon kicbase/echo-server:functional-016570 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 image load --daemon kicbase/echo-server:functional-016570 --alsologtostderr: (1.324068842s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.70s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.11s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-016570
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image load --daemon kicbase/echo-server:functional-016570 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-amd64 -p functional-016570 image load --daemon kicbase/echo-server:functional-016570 --alsologtostderr: (1.086548957s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.11s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image save kicbase/echo-server:functional-016570 /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)
TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image rm kicbase/echo-server:functional-016570 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.45s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.61s)
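ImageSaveToFile and ImageLoadFromFile together exercise a tar round trip; a minimal manual equivalent (the /tmp path is illustrative, not the workspace path the test uses):

$ out/minikube-linux-amd64 -p functional-016570 image save kicbase/echo-server:functional-016570 /tmp/echo-server-save.tar
$ out/minikube-linux-amd64 -p functional-016570 image load /tmp/echo-server-save.tar
$ out/minikube-linux-amd64 -p functional-016570 image ls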
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-016570
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 image save --daemon kicbase/echo-server:functional-016570 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-016570
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
TestFunctional/parallel/MountCmd/specific-port (1.64s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-016570 /tmp/TestFunctionalparallelMountCmdspecific-port2011867580/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016570 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (268.530025ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-016570 /tmp/TestFunctionalparallelMountCmdspecific-port2011867580/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016570 ssh "sudo umount -f /mount-9p": exit status 1 (259.866157ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-016570 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-016570 /tmp/TestFunctionalparallelMountCmdspecific-port2011867580/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.64s)
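The first findmnt above exits non-zero only because the mount daemon had not finished coming up; the retry passes. The flow can be reproduced by hand (/tmp/src is a placeholder host directory):

$ out/minikube-linux-amd64 mount -p functional-016570 /tmp/src:/mount-9p --port 46464 &
$ out/minikube-linux-amd64 -p functional-016570 ssh "findmnt -T /mount-9p | grep 9p"    # verify the 9p mount
$ out/minikube-linux-amd64 -p functional-016570 ssh "sudo umount -f /mount-9p"          # tear it down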
TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-016570 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1743652396/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-016570 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1743652396/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-016570 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1743652396/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-016570 ssh "findmnt -T" /mount1: exit status 1 (351.666111ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-016570 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-016570 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1743652396/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-016570 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1743652396/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-016570 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1743652396/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.71s)
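When several mount daemons are left behind, as with the three mounts above, the kill switch the test itself uses cleans them all up at once:

$ out/minikube-linux-amd64 mount -p functional-016570 --kill=true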
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-016570 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-016570 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-016570
--- PASS: TestFunctional/delete_echo-server_images (0.04s)
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-016570
--- PASS: TestFunctional/delete_my-image_image (0.02s)
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-016570
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
TestMultiControlPlane/serial/StartCluster (98.23s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-770465 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0916 10:45:13.636385   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-770465 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m37.539877879s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (98.23s)
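The --ha flag requests a highly-available topology: per the status output later in this run, ha-770465 comes up with three control-plane nodes (ha-770465, -m02, -m03), and AddWorkerNode later appends the -m04 worker. The invocation, verbatim from the test:

$ out/minikube-linux-amd64 start -p ha-770465 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
$ out/minikube-linux-amd64 -p ha-770465 status -v=7 --alsologtostderr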
TestMultiControlPlane/serial/DeployApp (30.17s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-770465 -- rollout status deployment/busybox: (28.290970886s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- exec busybox-7dff88458-845rc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- exec busybox-7dff88458-dlndh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- exec busybox-7dff88458-klfw4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- exec busybox-7dff88458-845rc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- exec busybox-7dff88458-dlndh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- exec busybox-7dff88458-klfw4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- exec busybox-7dff88458-845rc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- exec busybox-7dff88458-dlndh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- exec busybox-7dff88458-klfw4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (30.17s)
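Each busybox replica is probed three ways, from the bare service name up to the fully qualified one; the per-pod check has this shape, with <pod> standing for one of the busybox-7dff88458-* names above:

$ out/minikube-linux-amd64 kubectl -p ha-770465 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local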
TestMultiControlPlane/serial/PingHostFromPods (1.01s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- exec busybox-7dff88458-845rc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- exec busybox-7dff88458-845rc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- exec busybox-7dff88458-dlndh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- exec busybox-7dff88458-dlndh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- exec busybox-7dff88458-klfw4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-770465 -- exec busybox-7dff88458-klfw4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.01s)
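The shell pipeline in this test is worth unpacking: it assumes busybox's nslookup prints the answer for host.minikube.internal on its fifth line, awk 'NR==5' selects that line, and cut -d' ' -f3 takes the third space-separated field, the host IP (192.168.49.1 here), which the follow-up ping then targets:

# run inside one of the busybox pods
$ nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3    # -> 192.168.49.1
$ ping -c 1 192.168.49.1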
TestMultiControlPlane/serial/AddWorkerNode (21.01s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-770465 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-770465 -v=7 --alsologtostderr: (20.170300236s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.01s)
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.65s)
TestMultiControlPlane/serial/CopyFile (15.63s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp testdata/cp-test.txt ha-770465:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1340522930/001/cp-test_ha-770465.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465:/home/docker/cp-test.txt ha-770465-m02:/home/docker/cp-test_ha-770465_ha-770465-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m02 "sudo cat /home/docker/cp-test_ha-770465_ha-770465-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465:/home/docker/cp-test.txt ha-770465-m03:/home/docker/cp-test_ha-770465_ha-770465-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m03 "sudo cat /home/docker/cp-test_ha-770465_ha-770465-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465:/home/docker/cp-test.txt ha-770465-m04:/home/docker/cp-test_ha-770465_ha-770465-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m04 "sudo cat /home/docker/cp-test_ha-770465_ha-770465-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp testdata/cp-test.txt ha-770465-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1340522930/001/cp-test_ha-770465-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465-m02:/home/docker/cp-test.txt ha-770465:/home/docker/cp-test_ha-770465-m02_ha-770465.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465 "sudo cat /home/docker/cp-test_ha-770465-m02_ha-770465.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465-m02:/home/docker/cp-test.txt ha-770465-m03:/home/docker/cp-test_ha-770465-m02_ha-770465-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m03 "sudo cat /home/docker/cp-test_ha-770465-m02_ha-770465-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465-m02:/home/docker/cp-test.txt ha-770465-m04:/home/docker/cp-test_ha-770465-m02_ha-770465-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m04 "sudo cat /home/docker/cp-test_ha-770465-m02_ha-770465-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp testdata/cp-test.txt ha-770465-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1340522930/001/cp-test_ha-770465-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465-m03:/home/docker/cp-test.txt ha-770465:/home/docker/cp-test_ha-770465-m03_ha-770465.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465 "sudo cat /home/docker/cp-test_ha-770465-m03_ha-770465.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465-m03:/home/docker/cp-test.txt ha-770465-m02:/home/docker/cp-test_ha-770465-m03_ha-770465-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m02 "sudo cat /home/docker/cp-test_ha-770465-m03_ha-770465-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465-m03:/home/docker/cp-test.txt ha-770465-m04:/home/docker/cp-test_ha-770465-m03_ha-770465-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m04 "sudo cat /home/docker/cp-test_ha-770465-m03_ha-770465-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp testdata/cp-test.txt ha-770465-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1340522930/001/cp-test_ha-770465-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt ha-770465:/home/docker/cp-test_ha-770465-m04_ha-770465.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465 "sudo cat /home/docker/cp-test_ha-770465-m04_ha-770465.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt ha-770465-m02:/home/docker/cp-test_ha-770465-m04_ha-770465-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m02 "sudo cat /home/docker/cp-test_ha-770465-m04_ha-770465-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 cp ha-770465-m04:/home/docker/cp-test.txt ha-770465-m03:/home/docker/cp-test_ha-770465-m04_ha-770465-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m03 "sudo cat /home/docker/cp-test_ha-770465-m04_ha-770465-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.63s)
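CopyFile drives minikube cp across every node pair; the basic push-and-verify pattern, verbatim from the helpers above, is:

$ out/minikube-linux-amd64 -p ha-770465 cp testdata/cp-test.txt ha-770465-m02:/home/docker/cp-test.txt
$ out/minikube-linux-amd64 -p ha-770465 ssh -n ha-770465-m02 "sudo cat /home/docker/cp-test.txt"    # verify contents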
TestMultiControlPlane/serial/StopSecondaryNode (12.5s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-770465 node stop m02 -v=7 --alsologtostderr: (11.834472407s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-770465 status -v=7 --alsologtostderr: exit status 7 (660.716922ms)
-- stdout --
	ha-770465
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-770465-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-770465-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-770465-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0916 10:46:57.767772   87610 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:46:57.767932   87610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:46:57.767942   87610 out.go:358] Setting ErrFile to fd 2...
	I0916 10:46:57.767946   87610 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:46:57.768190   87610 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:46:57.768371   87610 out.go:352] Setting JSON to false
	I0916 10:46:57.768400   87610 mustload.go:65] Loading cluster: ha-770465
	I0916 10:46:57.768460   87610 notify.go:220] Checking for updates...
	I0916 10:46:57.768786   87610 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:46:57.768799   87610 status.go:255] checking status of ha-770465 ...
	I0916 10:46:57.769260   87610 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:46:57.787316   87610 status.go:330] ha-770465 host status = "Running" (err=<nil>)
	I0916 10:46:57.787386   87610 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:46:57.787673   87610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465
	I0916 10:46:57.804888   87610 host.go:66] Checking if "ha-770465" exists ...
	I0916 10:46:57.805128   87610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:46:57.805160   87610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465
	I0916 10:46:57.823265   87610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465/id_rsa Username:docker}
	I0916 10:46:57.916826   87610 ssh_runner.go:195] Run: systemctl --version
	I0916 10:46:57.920743   87610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:46:57.931489   87610 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:46:57.984683   87610 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-16 10:46:57.972920488 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:46:57.985382   87610 kubeconfig.go:125] found "ha-770465" server: "https://192.168.49.254:8443"
	I0916 10:46:57.985423   87610 api_server.go:166] Checking apiserver status ...
	I0916 10:46:57.985466   87610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:57.998799   87610 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1536/cgroup
	I0916 10:46:58.007556   87610 api_server.go:182] apiserver freezer: "10:freezer:/docker/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/kubepods/burstable/pod9a026d6c06d8804b9e6c3008c60afbf8/535bd4e938e3aeb6ecfbd02d81bf8fc060b9bb649a67b3f28d6b43d2c199e4ba"
	I0916 10:46:58.007630   87610 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c7d04b23d2abf966ca171fd8833a886574de3fcaa485d5ebf8d16c3f7eff5dbf/kubepods/burstable/pod9a026d6c06d8804b9e6c3008c60afbf8/535bd4e938e3aeb6ecfbd02d81bf8fc060b9bb649a67b3f28d6b43d2c199e4ba/freezer.state
	I0916 10:46:58.015465   87610 api_server.go:204] freezer state: "THAWED"
	I0916 10:46:58.015490   87610 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0916 10:46:58.019151   87610 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0916 10:46:58.019174   87610 status.go:422] ha-770465 apiserver status = Running (err=<nil>)
	I0916 10:46:58.019186   87610 status.go:257] ha-770465 status: &{Name:ha-770465 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:46:58.019207   87610 status.go:255] checking status of ha-770465-m02 ...
	I0916 10:46:58.019443   87610 cli_runner.go:164] Run: docker container inspect ha-770465-m02 --format={{.State.Status}}
	I0916 10:46:58.037129   87610 status.go:330] ha-770465-m02 host status = "Stopped" (err=<nil>)
	I0916 10:46:58.037152   87610 status.go:343] host is not running, skipping remaining checks
	I0916 10:46:58.037160   87610 status.go:257] ha-770465-m02 status: &{Name:ha-770465-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:46:58.037183   87610 status.go:255] checking status of ha-770465-m03 ...
	I0916 10:46:58.037428   87610 cli_runner.go:164] Run: docker container inspect ha-770465-m03 --format={{.State.Status}}
	I0916 10:46:58.055887   87610 status.go:330] ha-770465-m03 host status = "Running" (err=<nil>)
	I0916 10:46:58.055912   87610 host.go:66] Checking if "ha-770465-m03" exists ...
	I0916 10:46:58.056174   87610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m03
	I0916 10:46:58.076393   87610 host.go:66] Checking if "ha-770465-m03" exists ...
	I0916 10:46:58.076724   87610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:46:58.076779   87610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m03
	I0916 10:46:58.094731   87610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m03/id_rsa Username:docker}
	I0916 10:46:58.184581   87610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:46:58.195235   87610 kubeconfig.go:125] found "ha-770465" server: "https://192.168.49.254:8443"
	I0916 10:46:58.195261   87610 api_server.go:166] Checking apiserver status ...
	I0916 10:46:58.195290   87610 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:46:58.204920   87610 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1386/cgroup
	I0916 10:46:58.213066   87610 api_server.go:182] apiserver freezer: "10:freezer:/docker/9a7e3df4a104273e3fc3c64cd6c987ff54162402338c998d52aa1195edd57add/kubepods/burstable/podd01045e3887d6ee007ca690494f2504e/44f23393c12f27d7eeeb1aa2f942af5b0951c3892ac185a1c707a30070533605"
	I0916 10:46:58.213119   87610 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9a7e3df4a104273e3fc3c64cd6c987ff54162402338c998d52aa1195edd57add/kubepods/burstable/podd01045e3887d6ee007ca690494f2504e/44f23393c12f27d7eeeb1aa2f942af5b0951c3892ac185a1c707a30070533605/freezer.state
	I0916 10:46:58.220679   87610 api_server.go:204] freezer state: "THAWED"
	I0916 10:46:58.220709   87610 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0916 10:46:58.224262   87610 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0916 10:46:58.224281   87610 status.go:422] ha-770465-m03 apiserver status = Running (err=<nil>)
	I0916 10:46:58.224288   87610 status.go:257] ha-770465-m03 status: &{Name:ha-770465-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:46:58.224302   87610 status.go:255] checking status of ha-770465-m04 ...
	I0916 10:46:58.224527   87610 cli_runner.go:164] Run: docker container inspect ha-770465-m04 --format={{.State.Status}}
	I0916 10:46:58.241400   87610 status.go:330] ha-770465-m04 host status = "Running" (err=<nil>)
	I0916 10:46:58.241424   87610 host.go:66] Checking if "ha-770465-m04" exists ...
	I0916 10:46:58.241665   87610 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-770465-m04
	I0916 10:46:58.257686   87610 host.go:66] Checking if "ha-770465-m04" exists ...
	I0916 10:46:58.257923   87610 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:46:58.257968   87610 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-770465-m04
	I0916 10:46:58.275847   87610 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/ha-770465-m04/id_rsa Username:docker}
	I0916 10:46:58.372682   87610 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:46:58.383239   87610 status.go:257] ha-770465-m04 status: &{Name:ha-770465-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.50s)
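The status probe in the stderr block above is a three-step apiserver check: locate the kube-apiserver process, confirm its freezer cgroup is THAWED (i.e. the container is not paused), then hit /healthz. A minimal shell sketch of the same checks, run from inside a node (e.g. via "minikube -p ha-770465 ssh"); the VIP 192.168.49.254:8443 is the one reported in the log:

    # newest process whose full command line matches the apiserver pattern
    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
    # its freezer cgroup path; "THAWED" means the container is not paused
    CG=$(sudo egrep '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)
    sudo cat /sys/fs/cgroup/freezer${CG}/freezer.state
    # health endpoint; -k because the serving cert is cluster-internal
    curl -sk https://192.168.49.254:8443/healthz   # expect: ok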

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.48s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.68s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.68s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.82s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-770465 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-770465 -v=7 --alsologtostderr
E0916 10:47:18.510637   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:47:28.752084   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:47:29.777062   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-770465 -v=7 --alsologtostderr: (25.899334035s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-770465 --wait=true -v=7 --alsologtostderr
E0916 10:47:49.233362   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:47:57.478615   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:48:30.194871   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-770465 --wait=true -v=7 --alsologtostderr: (1m20.821081109s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-770465
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (106.82s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.47s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.47s)

TestMultiControlPlane/serial/StopCluster (35.67s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-770465 stop -v=7 --alsologtostderr: (35.573822819s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-770465 status -v=7 --alsologtostderr: exit status 7 (99.021696ms)
-- stdout --
	ha-770465
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-770465-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-770465-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0916 10:49:51.242622  104853 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:49:51.242896  104853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:49:51.242906  104853 out.go:358] Setting ErrFile to fd 2...
	I0916 10:49:51.242910  104853 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:49:51.243123  104853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:49:51.243329  104853 out.go:352] Setting JSON to false
	I0916 10:49:51.243362  104853 mustload.go:65] Loading cluster: ha-770465
	I0916 10:49:51.243464  104853 notify.go:220] Checking for updates...
	I0916 10:49:51.243946  104853 config.go:182] Loaded profile config "ha-770465": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:49:51.243967  104853 status.go:255] checking status of ha-770465 ...
	I0916 10:49:51.244422  104853 cli_runner.go:164] Run: docker container inspect ha-770465 --format={{.State.Status}}
	I0916 10:49:51.263064  104853 status.go:330] ha-770465 host status = "Stopped" (err=<nil>)
	I0916 10:49:51.263088  104853 status.go:343] host is not running, skipping remaining checks
	I0916 10:49:51.263095  104853 status.go:257] ha-770465 status: &{Name:ha-770465 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:49:51.263124  104853 status.go:255] checking status of ha-770465-m02 ...
	I0916 10:49:51.263408  104853 cli_runner.go:164] Run: docker container inspect ha-770465-m02 --format={{.State.Status}}
	I0916 10:49:51.281344  104853 status.go:330] ha-770465-m02 host status = "Stopped" (err=<nil>)
	I0916 10:49:51.281388  104853 status.go:343] host is not running, skipping remaining checks
	I0916 10:49:51.281399  104853 status.go:257] ha-770465-m02 status: &{Name:ha-770465-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:49:51.281423  104853 status.go:255] checking status of ha-770465-m04 ...
	I0916 10:49:51.281742  104853 cli_runner.go:164] Run: docker container inspect ha-770465-m04 --format={{.State.Status}}
	I0916 10:49:51.299826  104853 status.go:330] ha-770465-m04 host status = "Stopped" (err=<nil>)
	I0916 10:49:51.299849  104853 status.go:343] host is not running, skipping remaining checks
	I0916 10:49:51.299855  104853 status.go:257] ha-770465-m04 status: &{Name:ha-770465-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.67s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.46s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.46s)

TestMultiControlPlane/serial/AddSecondaryNode (34.24s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-770465 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-770465 --control-plane -v=7 --alsologtostderr: (33.418667944s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-770465 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (34.24s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.65s)

TestJSONOutput/start/Command (49.07s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-047569 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0916 10:52:08.256545   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 10:52:29.776892   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-047569 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (49.07353013s)
--- PASS: TestJSONOutput/start/Command (49.07s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-047569 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.59s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-047569 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.66s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-047569 --output=json --user=testUser
E0916 10:52:35.959103   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-047569 --output=json --user=testUser: (5.663747208s)
--- PASS: TestJSONOutput/stop/Command (5.66s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-762540 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-762540 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.618503ms)
-- stdout --
	{"specversion":"1.0","id":"97649d09-1cc0-4335-abc9-345f1bccb36c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-762540] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a71413c3-5043-4d08-ae31-165158fb164e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19651"}}
	{"specversion":"1.0","id":"0fb023c0-1690-4532-a711-db89b3e5c0d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"40f9358c-c2e9-408b-b7eb-fcf9900aa02e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig"}}
	{"specversion":"1.0","id":"9bed2bc9-12f2-4df7-9875-8c315d341fab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube"}}
	{"specversion":"1.0","id":"d127dfd2-865a-4d55-94ad-88d47cbfdd1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"711d60de-02ec-400e-8f1f-6b18172d8bcd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9fca4d43-765c-41fd-940f-bf9376a0da39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-762540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-762540
--- PASS: TestErrorJSONOutput (0.20s)
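Every stdout line above is a CloudEvents-style JSON object whose type field distinguishes steps, info, and errors, so the stream can be filtered mechanically. A hedged sketch, assuming jq is installed and using a throwaway profile name; --driver=fail provokes the same DRV_UNSUPPORTED_OS error shown above:

    # keep only the error events from minikube's JSON event stream
    minikube start -p json-error-demo --output=json --driver=fail \
      | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'
    minikube delete -p json-error-demo   # clean up the failed profile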

TestKicCustomNetwork/create_custom_network (33.57s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-034993 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-034993 --network=: (31.505719792s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-034993" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-034993
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-034993: (2.041204682s)
--- PASS: TestKicCustomNetwork/create_custom_network (33.57s)

TestKicCustomNetwork/use_default_bridge_network (23.23s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-190795 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-190795 --network=bridge: (21.321826131s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-190795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-190795
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-190795: (1.893570008s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.23s)

TestKicExistingNetwork (23.13s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-472822 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-472822 --network=existing-network: (21.117077931s)
helpers_test.go:175: Cleaning up "existing-network-472822" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-472822
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-472822: (1.87150582s)
--- PASS: TestKicExistingNetwork (23.13s)

TestKicCustomSubnet (23.72s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-652543 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-652543 --subnet=192.168.60.0/24: (21.668369366s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-652543 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-652543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-652543
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-652543: (2.035159946s)
--- PASS: TestKicCustomSubnet (23.72s)
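With the Docker (KIC) driver, minikube creates a Docker network named after the profile, which is what the inspect call above reads back. A minimal sketch of the same round-trip, under a hypothetical profile name:

    minikube start -p subnet-demo --subnet=192.168.60.0/24
    docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
    # expect: 192.168.60.0/24
    minikube delete -p subnet-demo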

TestKicStaticIP (24.16s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-229674 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-229674 --static-ip=192.168.200.200: (21.951193757s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-229674 ip
helpers_test.go:175: Cleaning up "static-ip-229674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-229674
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-229674: (2.086466933s)
--- PASS: TestKicStaticIP (24.16s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (48.25s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-175976 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-175976 --driver=docker  --container-runtime=containerd: (20.344373466s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-187691 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-187691 --driver=docker  --container-runtime=containerd: (23.185269817s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-175976
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-187691
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-187691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-187691
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-187691: (1.820060058s)
helpers_test.go:175: Cleaning up "first-175976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-175976
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-175976: (1.847698236s)
--- PASS: TestMinikubeProfile (48.25s)

TestMountStart/serial/StartWithMountFirst (5.83s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-595986 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-595986 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.832427285s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.83s)

TestMountStart/serial/VerifyMountFirst (0.23s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-595986 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)

TestMountStart/serial/StartWithMountSecond (5.79s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-609600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-609600 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.791854049s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.79s)

TestMountStart/serial/VerifyMountSecond (0.25s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-609600 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.61s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-595986 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-595986 --alsologtostderr -v=5: (1.608107111s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-609600 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-609600
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-609600: (1.171531305s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.49s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-609600
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-609600: (6.489739214s)
--- PASS: TestMountStart/serial/RestartStopped (7.49s)

TestMountStart/serial/VerifyMountPostStop (0.24s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-609600 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (55.37s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-079070 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-079070 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (54.927001092s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (55.37s)
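The fresh-start flow above reduces to two commands: bring up a two-node cluster, then ask every node for status. A minimal sketch with a hypothetical profile name, mirroring the flags the test passes:

    minikube start -p multinode-demo --nodes=2 --memory=2200 --wait=true \
      --driver=docker --container-runtime=containerd
    minikube -p multinode-demo status --alsologtostderr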

TestMultiNode/serial/DeployApp2Nodes (16.68s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079070 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079070 -- rollout status deployment/busybox
E0916 10:57:08.258431   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-079070 -- rollout status deployment/busybox: (15.401366313s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079070 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079070 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079070 -- exec busybox-7dff88458-pjlvx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079070 -- exec busybox-7dff88458-x6h7b -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079070 -- exec busybox-7dff88458-pjlvx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079070 -- exec busybox-7dff88458-x6h7b -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079070 -- exec busybox-7dff88458-pjlvx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079070 -- exec busybox-7dff88458-x6h7b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (16.68s)
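The DNS assertions above exec nslookup inside each busybox replica against an external name (kubernetes.io), the in-cluster service short name, and its fully qualified form. A minimal sketch with plain kubectl (the test shells out through minikube's bundled kubectl); the app=busybox label selector is an assumption about the test manifest, and pod names differ per run:

    # resolve three name forms from every replica of the test deployment
    for POD in $(kubectl get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
      kubectl exec "$POD" -- nslookup kubernetes.io
      kubectl exec "$POD" -- nslookup kubernetes.default
      kubectl exec "$POD" -- nslookup kubernetes.default.svc.cluster.local
    done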

TestMultiNode/serial/PingHostFrom2Pods (0.69s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079070 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079070 -- exec busybox-7dff88458-pjlvx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079070 -- exec busybox-7dff88458-pjlvx -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079070 -- exec busybox-7dff88458-x6h7b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-079070 -- exec busybox-7dff88458-x6h7b -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.69s)
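The host-reachability check above scrapes busybox's nslookup output: line 5 (awk 'NR==5') carries the resolved answer, its third space-separated field is the address, and the pod then pings that address once. The same pipeline for a single pod, with a placeholder pod name:

    POD=busybox-7dff88458-pjlvx   # placeholder; take a real name from get pods
    HOST_IP=$(kubectl exec "$POD" -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl exec "$POD" -- sh -c "ping -c 1 $HOST_IP"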

TestMultiNode/serial/AddNode (13.4s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-079070 -v 3 --alsologtostderr
E0916 10:57:29.777027   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-079070 -v 3 --alsologtostderr: (12.797346792s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (13.40s)

TestMultiNode/serial/ProfileList (0.29s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.29s)

TestMultiNode/serial/CopyFile (8.91s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 cp testdata/cp-test.txt multinode-079070:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 cp multinode-079070:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile475858721/001/cp-test_multinode-079070.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 cp multinode-079070:/home/docker/cp-test.txt multinode-079070-m02:/home/docker/cp-test_multinode-079070_multinode-079070-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070-m02 "sudo cat /home/docker/cp-test_multinode-079070_multinode-079070-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 cp multinode-079070:/home/docker/cp-test.txt multinode-079070-m03:/home/docker/cp-test_multinode-079070_multinode-079070-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070-m03 "sudo cat /home/docker/cp-test_multinode-079070_multinode-079070-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 cp testdata/cp-test.txt multinode-079070-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 cp multinode-079070-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile475858721/001/cp-test_multinode-079070-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 cp multinode-079070-m02:/home/docker/cp-test.txt multinode-079070:/home/docker/cp-test_multinode-079070-m02_multinode-079070.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070 "sudo cat /home/docker/cp-test_multinode-079070-m02_multinode-079070.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 cp multinode-079070-m02:/home/docker/cp-test.txt multinode-079070-m03:/home/docker/cp-test_multinode-079070-m02_multinode-079070-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070-m03 "sudo cat /home/docker/cp-test_multinode-079070-m02_multinode-079070-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 cp testdata/cp-test.txt multinode-079070-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 cp multinode-079070-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile475858721/001/cp-test_multinode-079070-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 cp multinode-079070-m03:/home/docker/cp-test.txt multinode-079070:/home/docker/cp-test_multinode-079070-m03_multinode-079070.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070 "sudo cat /home/docker/cp-test_multinode-079070-m03_multinode-079070.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 cp multinode-079070-m03:/home/docker/cp-test.txt multinode-079070-m02:/home/docker/cp-test_multinode-079070-m03_multinode-079070-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 ssh -n multinode-079070-m02 "sudo cat /home/docker/cp-test_multinode-079070-m03_multinode-079070-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.91s)
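Every hop in the copy matrix above follows one pattern: minikube cp to a target path, then ssh into the holding node and cat the file to confirm the contents landed; -n selects the node within the profile. One hop as a sketch, using commands taken verbatim from the log:

    minikube -p multinode-079070 cp testdata/cp-test.txt \
      multinode-079070-m02:/home/docker/cp-test.txt
    minikube -p multinode-079070 ssh -n multinode-079070-m02 \
      "sudo cat /home/docker/cp-test.txt"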

TestMultiNode/serial/StopNode (2.09s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-079070 node stop m03: (1.169205413s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-079070 status: exit status 7 (460.508836ms)
-- stdout --
	multinode-079070
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-079070-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-079070-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-079070 status --alsologtostderr: exit status 7 (461.967944ms)
-- stdout --
	multinode-079070
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-079070-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-079070-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0916 10:57:45.833900  170762 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:57:45.834045  170762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:57:45.834056  170762 out.go:358] Setting ErrFile to fd 2...
	I0916 10:57:45.834063  170762 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:57:45.834265  170762 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:57:45.834452  170762 out.go:352] Setting JSON to false
	I0916 10:57:45.834489  170762 mustload.go:65] Loading cluster: multinode-079070
	I0916 10:57:45.834576  170762 notify.go:220] Checking for updates...
	I0916 10:57:45.835010  170762 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:57:45.835028  170762 status.go:255] checking status of multinode-079070 ...
	I0916 10:57:45.835490  170762 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:57:45.855299  170762 status.go:330] multinode-079070 host status = "Running" (err=<nil>)
	I0916 10:57:45.855328  170762 host.go:66] Checking if "multinode-079070" exists ...
	I0916 10:57:45.855604  170762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070
	I0916 10:57:45.873149  170762 host.go:66] Checking if "multinode-079070" exists ...
	I0916 10:57:45.873463  170762 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:57:45.873507  170762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070
	I0916 10:57:45.890279  170762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070/id_rsa Username:docker}
	I0916 10:57:45.980981  170762 ssh_runner.go:195] Run: systemctl --version
	I0916 10:57:45.984872  170762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:57:45.995448  170762 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 10:57:46.046543  170762 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-16 10:57:46.036691997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 10:57:46.047444  170762 kubeconfig.go:125] found "multinode-079070" server: "https://192.168.67.2:8443"
	I0916 10:57:46.047482  170762 api_server.go:166] Checking apiserver status ...
	I0916 10:57:46.047536  170762 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0916 10:57:46.058432  170762 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1464/cgroup
	I0916 10:57:46.067265  170762 api_server.go:182] apiserver freezer: "10:freezer:/docker/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/kubepods/burstable/podcec1645cdab3aa5089df4900af238464/411c657184dfd15c5a637bda842998291203948392b41c07d2e8b35719214e87"
	I0916 10:57:46.067327  170762 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1f3af6522540e0cf7a676c3192a1437f3ee712969b647e0c5ff8f6e5d39943b2/kubepods/burstable/podcec1645cdab3aa5089df4900af238464/411c657184dfd15c5a637bda842998291203948392b41c07d2e8b35719214e87/freezer.state
	I0916 10:57:46.075143  170762 api_server.go:204] freezer state: "THAWED"
	I0916 10:57:46.075174  170762 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0916 10:57:46.079867  170762 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0916 10:57:46.079892  170762 status.go:422] multinode-079070 apiserver status = Running (err=<nil>)
	I0916 10:57:46.079904  170762 status.go:257] multinode-079070 status: &{Name:multinode-079070 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:57:46.079925  170762 status.go:255] checking status of multinode-079070-m02 ...
	I0916 10:57:46.080172  170762 cli_runner.go:164] Run: docker container inspect multinode-079070-m02 --format={{.State.Status}}
	I0916 10:57:46.097751  170762 status.go:330] multinode-079070-m02 host status = "Running" (err=<nil>)
	I0916 10:57:46.097777  170762 host.go:66] Checking if "multinode-079070-m02" exists ...
	I0916 10:57:46.098011  170762 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-079070-m02
	I0916 10:57:46.114431  170762 host.go:66] Checking if "multinode-079070-m02" exists ...
	I0916 10:57:46.114667  170762 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0916 10:57:46.114705  170762 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-079070-m02
	I0916 10:57:46.131347  170762 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19651-3687/.minikube/machines/multinode-079070-m02/id_rsa Username:docker}
	I0916 10:57:46.225243  170762 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0916 10:57:46.235358  170762 status.go:257] multinode-079070-m02 status: &{Name:multinode-079070-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:57:46.235399  170762 status.go:255] checking status of multinode-079070-m03 ...
	I0916 10:57:46.235675  170762 cli_runner.go:164] Run: docker container inspect multinode-079070-m03 --format={{.State.Status}}
	I0916 10:57:46.252242  170762 status.go:330] multinode-079070-m03 host status = "Stopped" (err=<nil>)
	I0916 10:57:46.252272  170762 status.go:343] host is not running, skipping remaining checks
	I0916 10:57:46.252287  170762 status.go:257] multinode-079070-m03 status: &{Name:multinode-079070-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.09s)
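
The stderr trace above shows how minikube decides an apiserver is healthy: find the kube-apiserver process, resolve its freezer cgroup, confirm the cgroup is THAWED (i.e. not paused), then probe /healthz. A minimal bash sketch of the same sequence, run inside the node; the endpoint IP and cgroup paths are taken from this particular run and will differ elsewhere:

	# locate the newest kube-apiserver process (exact, full-command-line match)
	PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')
	# resolve its freezer cgroup path from /proc
	CG=$(sudo grep -E '^[0-9]+:freezer:' "/proc/${PID}/cgroup" | cut -d: -f3)
	# a paused node reports FROZEN here; a healthy one reports THAWED
	sudo cat "/sys/fs/cgroup/freezer${CG}/freezer.state"
	# finally, probe the apiserver directly (-k: the cert is cluster-internal)
	curl -sk https://192.168.67.2:8443/healthz   # expect: ok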

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (89.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-079070
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-079070
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-079070: (24.732823166s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-079070 --wait=true -v=8 --alsologtostderr
E0916 10:58:52.840277   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-079070 --wait=true -v=8 --alsologtostderr: (1m4.522280377s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-079070
--- PASS: TestMultiNode/serial/RestartKeepsNodes (89.35s)
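
The invariant this test checks can be stated directly: the node list before a full stop/start cycle should match the list after it. A hedged sketch using the same commands the test drives:

	before=$(minikube node list -p multinode-079070)
	minikube stop -p multinode-079070
	minikube start -p multinode-079070 --wait=true
	after=$(minikube node list -p multinode-079070)
	[ "$before" = "$after" ] && echo "all nodes restored" || echo "node set changed"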

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-079070 stop: (23.612293508s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-079070 status: exit status 7 (85.071783ms)

                                                
                                                
-- stdout --
	multinode-079070
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-079070-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-079070 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-079070 status --alsologtostderr: exit status 7 (80.583352ms)

                                                
                                                
-- stdout --
	multinode-079070
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-079070-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 10:59:57.673401  182176 out.go:345] Setting OutFile to fd 1 ...
	I0916 10:59:57.673675  182176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:59:57.673685  182176 out.go:358] Setting ErrFile to fd 2...
	I0916 10:59:57.673691  182176 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 10:59:57.673919  182176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 10:59:57.674100  182176 out.go:352] Setting JSON to false
	I0916 10:59:57.674133  182176 mustload.go:65] Loading cluster: multinode-079070
	I0916 10:59:57.674236  182176 notify.go:220] Checking for updates...
	I0916 10:59:57.674637  182176 config.go:182] Loaded profile config "multinode-079070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 10:59:57.674656  182176 status.go:255] checking status of multinode-079070 ...
	I0916 10:59:57.675173  182176 cli_runner.go:164] Run: docker container inspect multinode-079070 --format={{.State.Status}}
	I0916 10:59:57.692722  182176 status.go:330] multinode-079070 host status = "Stopped" (err=<nil>)
	I0916 10:59:57.692748  182176 status.go:343] host is not running, skipping remaining checks
	I0916 10:59:57.692756  182176 status.go:257] multinode-079070 status: &{Name:multinode-079070 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0916 10:59:57.692791  182176 status.go:255] checking status of multinode-079070-m02 ...
	I0916 10:59:57.693032  182176 cli_runner.go:164] Run: docker container inspect multinode-079070-m02 --format={{.State.Status}}
	I0916 10:59:57.710228  182176 status.go:330] multinode-079070-m02 host status = "Stopped" (err=<nil>)
	I0916 10:59:57.710271  182176 status.go:343] host is not running, skipping remaining checks
	I0916 10:59:57.710281  182176 status.go:257] multinode-079070-m02 status: &{Name:multinode-079070-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.78s)
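
Note that `minikube status` reports state through its exit code as well as its text: exit 0 for a running cluster, and exit 7 in the stopped runs above. A hedged sketch for scripting against the code rather than parsing stdout:

	out/minikube-linux-amd64 -p multinode-079070 status >/dev/null
	rc=$?
	# 0 = everything running; 7 = stopped component(s), as in the run above
	echo "status exit code: ${rc}"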

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-079070
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-079070-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-079070-m02 --driver=docker  --container-runtime=containerd: exit status 14 (63.225118ms)

                                                
                                                
-- stdout --
	* [multinode-079070-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-079070-m02' is duplicated with machine name 'multinode-079070-m02' in profile 'multinode-079070'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-079070-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-079070-m03 --driver=docker  --container-runtime=containerd: (19.883638492s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-079070
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-079070: exit status 80 (256.953778ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-079070 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-079070-m03 already exists in multinode-079070-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-079070-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-079070-m03: (1.821854337s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.07s)
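
The two failures above encode minikube's naming rule: secondary machines in a profile are named <profile>-m02, -m03, and so on, so a standalone profile may not reuse such a name, and `node add` refuses a node name an existing profile already owns. A hedged illustration using this run's names and exit codes:

	minikube start -p multinode-079070-m02 --driver=docker --container-runtime=containerd   # exit 14: collides with node 2 of multinode-079070
	minikube start -p multinode-079070-m03 --driver=docker --container-runtime=containerd   # succeeds as its own profile
	minikube node add -p multinode-079070                                                   # exit 80: m03 is now taken by that profile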

                                                
                                    
TestPreload (116.67s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-900519 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0916 11:02:08.257007   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:02:29.776974   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-900519 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m14.272452453s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-900519 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-900519 image pull gcr.io/k8s-minikube/busybox: (2.177778239s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-900519
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-900519: (11.884793705s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-900519 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-900519 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (25.729107757s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-900519 image list
helpers_test.go:175: Cleaning up "test-preload-900519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-900519
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-900519: (2.379047188s)
--- PASS: TestPreload (116.67s)
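
The preload check above reduces to one hedged sequence: create the cluster without a preloaded tarball, add an image, restart, and confirm the image survives in the runtime's image list:

	minikube start -p test-preload --memory=2200 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=containerd
	minikube -p test-preload image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload
	minikube start -p test-preload --memory=2200 --driver=docker --container-runtime=containerd
	minikube -p test-preload image list | grep busybox   # the pulled image should persist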

                                                
                                    
TestScheduledStopUnix (96.34s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-580714 --memory=2048 --driver=docker  --container-runtime=containerd
E0916 11:03:31.321737   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-580714 --memory=2048 --driver=docker  --container-runtime=containerd: (21.159501142s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-580714 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-580714 -n scheduled-stop-580714
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-580714 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-580714 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-580714 -n scheduled-stop-580714
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-580714
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-580714 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-580714
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-580714: exit status 7 (63.241982ms)

                                                
                                                
-- stdout --
	scheduled-stop-580714
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-580714 -n scheduled-stop-580714
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-580714 -n scheduled-stop-580714: exit status 7 (58.991507ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-580714" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-580714
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-580714: (3.927514507s)
--- PASS: TestScheduledStopUnix (96.34s)
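
The scheduled-stop flags exercised above, gathered into one example; `TimeToStop` is the status field the test polls:

	minikube stop -p scheduled-stop-580714 --schedule 5m                        # arm a stop 5 minutes out
	minikube status --format='{{.TimeToStop}}' -p scheduled-stop-580714         # inspect the countdown
	minikube stop -p scheduled-stop-580714 --cancel-scheduled                   # disarm it
	minikube stop -p scheduled-stop-580714 --schedule 15s                       # re-arm; the stop fires about 15s later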

                                                
                                    
TestInsufficientStorage (9.65s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-347392 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-347392 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.350104203s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1a623cc7-101c-466d-9157-3d2cfe766d4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-347392] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9ac16e0b-b000-470d-a511-08536401ea8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19651"}}
	{"specversion":"1.0","id":"4aa01043-0fb4-425c-84ff-61222d87fe82","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7086d2be-77a8-41a5-ad0a-d201aff6dc20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig"}}
	{"specversion":"1.0","id":"e322f842-b43b-4abf-9f19-0b0c235dce35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube"}}
	{"specversion":"1.0","id":"f54bd994-8ed8-400a-97ce-1606225933a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"5e381e08-042e-40fd-8705-9e20280f32b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a8787d29-377c-4147-8714-e3191816f3be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"486511a3-261b-4e31-a4a2-82f9f853ba43","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"785e8679-92ce-46b8-b6b0-deda8dede599","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6a8179db-a1e2-4f65-acc0-ea2a9532e29d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1147e3f3-f946-467c-9024-8b69d7952873","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-347392\" primary control-plane node in \"insufficient-storage-347392\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c9254f1c-cb08-4652-9f1d-c7acd7296936","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726358845-19644 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f4d8c4d9-e766-4bf1-91fe-82a0a10607e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e9317c90-0a44-44b9-8960-89d08d8f2681","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-347392 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-347392 --output=json --layout=cluster: exit status 7 (255.255131ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-347392","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-347392","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 11:04:59.915767  206339 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-347392" does not appear in /home/jenkins/minikube-integration/19651-3687/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-347392 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-347392 --output=json --layout=cluster: exit status 7 (257.94763ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-347392","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-347392","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0916 11:05:00.174169  206439 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-347392" does not appear in /home/jenkins/minikube-integration/19651-3687/kubeconfig
	E0916 11:05:00.183982  206439 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/insufficient-storage-347392/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-347392" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-347392
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-347392: (1.784565449s)
--- PASS: TestInsufficientStorage (9.65s)
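
With --output=json, every line minikube emits is a CloudEvents envelope like the ones captured above. A hedged jq sketch that turns the step events into a progress trace (the profile name is illustrative):

	minikube start -p demo --output=json 2>/dev/null \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step")
	           | "\(.data.currentstep)/\(.data.totalsteps) \(.data.message)"'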

                                                
                                    
TestRunningBinaryUpgrade (100.91s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2281018962 start -p running-upgrade-470841 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2281018962 start -p running-upgrade-470841 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (54.067707966s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-470841 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-470841 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.259746496s)
helpers_test.go:175: Cleaning up "running-upgrade-470841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-470841
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-470841: (6.276570015s)
--- PASS: TestRunningBinaryUpgrade (100.91s)

                                                
                                    
TestMissingContainerUpgrade (166.35s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3781928206 start -p missing-upgrade-327796 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3781928206 start -p missing-upgrade-327796 --memory=2200 --driver=docker  --container-runtime=containerd: (1m29.018365885s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-327796
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-327796: (10.336098327s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-327796
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-327796 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-327796 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.179527066s)
helpers_test.go:175: Cleaning up "missing-upgrade-327796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-327796
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-327796: (2.455617062s)
--- PASS: TestMissingContainerUpgrade (166.35s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.45s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-295903 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-295903 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (67.915254ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-295903] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (28.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-295903 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-295903 --driver=docker  --container-runtime=containerd: (28.270758843s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-295903 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (28.72s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (142.22s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1436395686 start -p stopped-upgrade-310953 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1436395686 start -p stopped-upgrade-310953 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m38.764184888s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1436395686 -p stopped-upgrade-310953 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1436395686 -p stopped-upgrade-310953 stop: (1.244489122s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-310953 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-310953 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.209774458s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (142.22s)
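
All three binary-upgrade tests in this section follow the same pattern: build the cluster with an old released binary (downloaded to a per-run temp path), stop or remove it, then let the binary under test adopt the same profile. A hedged outline using this run's names; the temp filename changes every run:

	/tmp/minikube-v1.26.0.1436395686 start -p stopped-upgrade-310953 --memory=2200 --vm-driver=docker --container-runtime=containerd
	/tmp/minikube-v1.26.0.1436395686 -p stopped-upgrade-310953 stop
	out/minikube-linux-amd64 start -p stopped-upgrade-310953 --memory=2200 --driver=docker --container-runtime=containerd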

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (23.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-295903 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-295903 --no-kubernetes --driver=docker  --container-runtime=containerd: (21.498046706s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-295903 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-295903 status -o json: exit status 2 (255.45968ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-295903","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-295903
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-295903: (1.808596441s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (23.56s)

                                                
                                    
TestNoKubernetes/serial/Start (4.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-295903 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-295903 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.70493396s)
--- PASS: TestNoKubernetes/serial/Start (4.71s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-295903 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-295903 "sudo systemctl is-active --quiet service kubelet": exit status 1 (242.188857ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
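
`systemctl is-active --quiet` exits 0 only when the unit is active (the status-3 exit above means "inactive"), which makes the probe easy to script. A hedged sketch:

	if minikube ssh -p NoKubernetes-295903 "sudo systemctl is-active --quiet service kubelet"; then
	  echo "kubelet is running (unexpected for --no-kubernetes)"
	else
	  echo "kubelet is not active, as intended"
	fi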

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.8s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.80s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-295903
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-295903: (1.295380285s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-295903 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-295903 --driver=docker  --container-runtime=containerd: (7.757988779s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.76s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-295903 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-295903 "sudo systemctl is-active --quiet service kubelet": exit status 1 (245.727928ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                    
TestPause/serial/Start (39.57s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-613346 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-613346 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (39.564960294s)
--- PASS: TestPause/serial/Start (39.57s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (5.26s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-613346 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-613346 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.251043341s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.26s)

                                                
                                    
TestPause/serial/Pause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-613346 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.68s)

                                                
                                    
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-613346 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-613346 --output=json --layout=cluster: exit status 2 (278.546246ms)

                                                
                                                
-- stdout --
	{"Name":"pause-613346","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-613346","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
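
The --layout=cluster JSON above encodes state as HTTP-like codes: 200 OK, 405 Stopped, 418 Paused (and 507 InsufficientStorage in the storage test earlier). A hedged jq sketch that flattens it into one line per node:

	minikube status -p pause-613346 --output=json --layout=cluster \
	  | jq -r '.Nodes[] | .Name + ": " + (.Components | to_entries
	           | map("\(.key)=\(.value.StatusName)") | join(", "))'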

                                                
                                    
TestPause/serial/Unpause (0.58s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-613346 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.58s)

                                                
                                    
TestPause/serial/PauseAgain (0.81s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-613346 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.81s)

                                                
                                    
TestPause/serial/DeletePaused (10.6s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-613346 --alsologtostderr -v=5
E0916 11:07:08.256955   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-613346 --alsologtostderr -v=5: (10.601658385s)
--- PASS: TestPause/serial/DeletePaused (10.60s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.79s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-613346
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-613346: exit status 1 (20.047367ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-613346: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.79s)
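
The teardown verification above leans on exit codes and filtered listings; a hedged equivalent:

	docker volume inspect pause-613346 >/dev/null 2>&1 \
	  && echo "volume still present" \
	  || echo "volume gone (inspect exits non-zero, as above)"
	docker network ls --filter name=pause-613346 --format '{{.Name}}'   # expect no output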

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-310953
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-310953: (1.287432544s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                    
TestNetworkPlugins/group/false (4.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-771611 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-771611 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (172.623426ms)

                                                
                                                
-- stdout --
	* [false-771611] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19651
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0916 11:07:36.781501  244717 out.go:345] Setting OutFile to fd 1 ...
	I0916 11:07:36.781619  244717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:07:36.781628  244717 out.go:358] Setting ErrFile to fd 2...
	I0916 11:07:36.781632  244717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0916 11:07:36.781827  244717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19651-3687/.minikube/bin
	I0916 11:07:36.782461  244717 out.go:352] Setting JSON to false
	I0916 11:07:36.783565  244717 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3001,"bootTime":1726481856,"procs":278,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0916 11:07:36.783670  244717 start.go:139] virtualization: kvm guest
	I0916 11:07:36.786944  244717 out.go:177] * [false-771611] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0916 11:07:36.788543  244717 out.go:177]   - MINIKUBE_LOCATION=19651
	I0916 11:07:36.788562  244717 notify.go:220] Checking for updates...
	I0916 11:07:36.791396  244717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0916 11:07:36.793118  244717 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19651-3687/kubeconfig
	I0916 11:07:36.794440  244717 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19651-3687/.minikube
	I0916 11:07:36.795928  244717 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0916 11:07:36.797661  244717 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0916 11:07:36.799877  244717 config.go:182] Loaded profile config "force-systemd-env-846070": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0916 11:07:36.800045  244717 config.go:182] Loaded profile config "kubernetes-upgrade-311911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0916 11:07:36.800176  244717 config.go:182] Loaded profile config "missing-upgrade-327796": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0916 11:07:36.800327  244717 driver.go:394] Setting default libvirt URI to qemu:///system
	I0916 11:07:36.827851  244717 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0916 11:07:36.828010  244717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0916 11:07:36.887365  244717 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:89 SystemTime:2024-09-16 11:07:36.87748496 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0916 11:07:36.887496  244717 docker.go:318] overlay module found
	I0916 11:07:36.892779  244717 out.go:177] * Using the docker driver based on user configuration
	I0916 11:07:36.894504  244717 start.go:297] selected driver: docker
	I0916 11:07:36.894522  244717 start.go:901] validating driver "docker" against <nil>
	I0916 11:07:36.894540  244717 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0916 11:07:36.897244  244717 out.go:201] 
	W0916 11:07:36.898638  244717 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0916 11:07:36.900105  244717 out.go:201] 

                                                
                                                
** /stderr **
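
The exit-14 failure above is by design: the containerd runtime requires a CNI, so --cni=false is rejected up front. A hedged sketch of starts that satisfy the rule (the profile name is illustrative):

	minikube start -p cni-demo --container-runtime=containerd                # minikube selects a CNI
	minikube start -p cni-demo --container-runtime=containerd --cni=bridge   # or name one explicitly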
net_test.go:88: 
----------------------- debugLogs start: false-771611 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:

>>> netcat: nslookup debug kubernetes.default a-records:

>>> netcat: dig search kubernetes.default:

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:

>>> netcat: nc 10.96.0.10 udp/53:

>>> netcat: nc 10.96.0.10 tcp/53:

>>> netcat: /etc/nsswitch.conf:

>>> netcat: /etc/hosts:

>>> netcat: /etc/resolv.conf:

>>> host: /etc/nsswitch.conf:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: /etc/hosts:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: /etc/resolv.conf:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :

>>> host: crictl pods:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: crictl containers:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> k8s: describe netcat deployment:

>>> k8s: describe netcat pod(s):

>>> k8s: netcat logs:

>>> k8s: describe coredns deployment:

>>> k8s: describe coredns pods:

>>> k8s: coredns logs:

>>> k8s: describe api server pod(s):

>>> k8s: api server logs:

>>> host: /etc/cni:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: ip a s:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: ip r s:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: iptables-save:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: iptables table nat:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> k8s: describe kube-proxy daemon set:

>>> k8s: describe kube-proxy pod(s):

>>> k8s: kube-proxy logs:

>>> host: kubelet daemon status:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: kubelet daemon config:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> k8s: kubelet logs:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> k8s: kubectl config:

>>> k8s: cms:

>>> host: docker daemon status:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: docker daemon config:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: /etc/docker/daemon.json:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: docker system info:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: cri-docker daemon status:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: cri-docker daemon config:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: cri-dockerd version:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: containerd daemon status:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: containerd daemon config:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: /etc/containerd/config.toml:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: containerd config dump:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: crio daemon status:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: crio daemon config:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: /etc/crio:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

>>> host: crio config:
* Profile "false-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771611"

----------------------- debugLogs end: false-771611 [took: 3.717923306s] --------------------------------
helpers_test.go:175: Cleaning up "false-771611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-771611
--- PASS: TestNetworkPlugins/group/false (4.04s)
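
A note on the expected failure above: the "false" profile intentionally starts containerd with CNI disabled, and minikube aborts with MK_USAGE because containerd provides no built-in pod network. A minimal sketch of a start invocation that satisfies the requirement (the concrete CNI value here is illustrative; minikube's --cni flag accepts auto, bridge, calico, cilium, flannel, kindnet, or a path to a CNI manifest):

    # containerd delegates pod networking to a CNI plugin, so select one explicitly
    minikube start -p false-771611 --driver=docker \
      --container-runtime=containerd --cni=bridge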

TestStartStop/group/old-k8s-version/serial/FirstStart (134.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-371039 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-371039 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m14.32120312s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (134.32s)

TestStartStop/group/no-preload/serial/FirstStart (66.25s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-349453 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-349453 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m6.244883389s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.25s)

TestStartStop/group/no-preload/serial/Stop (5.72s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-349453 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-349453 --alsologtostderr -v=3: (5.718379705s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (5.72s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-349453 -n no-preload-349453
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-349453 -n no-preload-349453: exit status 7 (62.293729ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-349453 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)
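
The "exit status 7 (may be ok)" handling above relies on minikube's status exit code being a bitmask rather than a plain error: in recent minikube releases bit 0 signals the host is down, bit 1 the cluster, and bit 2 Kubernetes, so 7 (1|2|4) is the signature of a cleanly stopped profile. A small sketch of probing a profile the same way the test does:

    # {{.Host}} is a Go template over minikube's status output;
    # a stopped profile prints "Stopped" and exits with code 7
    minikube status --format='{{.Host}}' -p no-preload-349453
    echo "exit code: $?"    # 0 = running, 7 = host/cluster/k8s all stopped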

TestStartStop/group/no-preload/serial/SecondStart (262.81s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-349453 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-349453 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m22.501550233s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-349453 -n no-preload-349453
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.81s)

TestStartStop/group/old-k8s-version/serial/Stop (5.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-371039 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-371039 --alsologtostderr -v=3: (5.729687982s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.73s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-371039 -n old-k8s-version-371039
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-371039 -n old-k8s-version-371039: exit status 7 (66.937497ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-371039 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/FirstStart (43.95s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-679624 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-679624 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (43.944994713s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.95s)

TestStartStop/group/embed-certs/serial/Stop (5.72s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-679624 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-679624 --alsologtostderr -v=3: (5.720258799s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (5.72s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-679624 -n embed-certs-679624
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-679624 -n embed-certs-679624: exit status 7 (64.946138ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-679624 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (262.99s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-679624 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-679624 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m22.677650939s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-679624 -n embed-certs-679624
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.99s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-006978 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-006978 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (44.626834937s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.63s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (5.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-006978 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-006978 --alsologtostderr -v=3: (5.726561714s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (5.73s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978: exit status 7 (62.97265ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-006978 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-006978 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-006978 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m22.74238279s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.08s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lmdbc" [6d63e6f7-5f9b-45ff-b20e-561f691403c2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004156359s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-349453 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
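
The VerifyKubernetesImages step works by parsing the JSON emitted by "image list --format=json". A sketch of inspecting that output by hand, assuming the schema of recent minikube versions in which each entry carries a repoTags array:

    # list every image tag the profile's container runtime has pulled
    out/minikube-linux-amd64 -p no-preload-349453 image list --format=json \
      | jq -r '.[].repoTags[]'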

TestStartStop/group/no-preload/serial/Pause (2.63s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-349453 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-349453 -n no-preload-349453
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-349453 -n no-preload-349453: exit status 2 (296.194764ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-349453 -n no-preload-349453
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-349453 -n no-preload-349453: exit status 2 (287.312724ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-349453 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-349453 -n no-preload-349453
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-349453 -n no-preload-349453
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.63s)

TestStartStop/group/newest-cni/serial/FirstStart (26.04s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-802652 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-802652 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (26.043962232s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.04s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-802652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.87s)
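
The recurring "cni mode requires additional setup before pods can schedule" warning explains why the newest-cni group passes its DeployApp and app-liveness subtests as no-ops: --network-plugin=cni with pod-network-cidr=10.42.0.0/16 brings the control plane up but installs no CNI, so any user pod would sit Pending. A sketch of the manual follow-up the warning alludes to, assuming flannel and its upstream manifest URL, whose default network of 10.244.0.0/16 must be rewritten to match this cluster's CIDR:

    # fetch the flannel manifest, align its pod network with the cluster, apply it
    curl -LO https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
    sed -i 's|10.244.0.0/16|10.42.0.0/16|' kube-flannel.yml
    kubectl --context newest-cni-802652 apply -f kube-flannel.yml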

TestStartStop/group/newest-cni/serial/Stop (1.19s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-802652 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-802652 --alsologtostderr -v=3: (1.19157197s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.19s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-802652 -n newest-cni-802652
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-802652 -n newest-cni-802652: exit status 7 (60.91838ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-802652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/newest-cni/serial/SecondStart (13.08s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-802652 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-802652 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (12.772516635s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-802652 -n newest-cni-802652
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.08s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-802652 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/Pause (2.59s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-802652 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-802652 -n newest-cni-802652
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-802652 -n newest-cni-802652: exit status 2 (285.179113ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-802652 -n newest-cni-802652
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-802652 -n newest-cni-802652: exit status 2 (286.226411ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-802652 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-802652 -n newest-cni-802652
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-802652 -n newest-cni-802652
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.59s)

TestNetworkPlugins/group/auto/Start (44.34s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-771611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0916 11:15:32.842333   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-771611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (44.335557054s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.34s)

TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-771611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.26s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-tzbgn" [71da0a2f-2db3-4f64-8f1b-090efc2a5371] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004414039s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-679624 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.64s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-679624 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-679624 -n embed-certs-679624
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-679624 -n embed-certs-679624: exit status 2 (296.087319ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-679624 -n embed-certs-679624
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-679624 -n embed-certs-679624: exit status 2 (288.413315ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-679624 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-679624 -n embed-certs-679624
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-679624 -n embed-certs-679624
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.64s)

TestNetworkPlugins/group/kindnet/Start (41.29s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-771611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-771611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (41.292918577s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.29s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-9sr9v" [261ef398-46a5-41c5-bf4d-763c5bc263c3] Running
E0916 11:17:08.257176   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004125365s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-371039 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-371039 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-371039 -n old-k8s-version-371039
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-371039 -n old-k8s-version-371039: exit status 2 (315.703513ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-371039 -n old-k8s-version-371039
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-371039 -n old-k8s-version-371039: exit status 2 (302.690801ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-371039 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-371039 -n old-k8s-version-371039
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-371039 -n old-k8s-version-371039
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

TestNetworkPlugins/group/calico/Start (61.27s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-771611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0916 11:17:29.777293   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-771611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m1.266431692s)
--- PASS: TestNetworkPlugins/group/calico/Start (61.27s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gn59w" [286df0a6-9ecb-4f78-bcac-8b4ce2c556e5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004124621s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-771611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-hrmv2" [4ae00ae7-ba15-40b6-9f23-61722bbfb09a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003534013s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-006978 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-006978 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978: exit status 2 (292.962164ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978: exit status 2 (290.114008ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-006978 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-006978 -n default-k8s-diff-port-006978
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.63s)
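
The pause/unpause round trip exercised above can be replayed by hand; a sketch with a released minikube binary, status templates verbatim from the log (exit status 2 from "status" while components are paused or stopped is expected, as the test itself notes):

    $ minikube pause -p default-k8s-diff-port-006978
    $ minikube status --format={{.APIServer}} -p default-k8s-diff-port-006978   # "Paused", exit 2
    $ minikube status --format={{.Kubelet}} -p default-k8s-diff-port-006978     # "Stopped", exit 2
    $ minikube unpause -p default-k8s-diff-port-006978
    $ minikube status --format={{.APIServer}} -p default-k8s-diff-port-006978   # expected to report Running again
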
E0916 11:51:01.048989   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:51:01.055389   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:51:01.066790   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:51:01.088157   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:51:01.129545   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:51:01.210952   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:51:01.372427   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:51:01.694062   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:51:02.335901   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:51:03.617551   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:51:06.178977   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:51:11.300272   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:51:21.541538   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:51:42.023516   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:03.924629   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:08.257283   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:14.607272   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:22.985549   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:29.777006   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:38.188896   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:38.195223   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:38.206607   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:38.227963   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:38.269448   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:38.350893   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:38.512388   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:38.834279   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:39.476413   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:40.758043   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:43.320326   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:48.442599   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:52.830100   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:52:58.684927   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:19.166493   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:24.005331   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:24.011723   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:24.023141   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:24.044520   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:24.085923   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:24.167360   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:24.328870   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:24.650738   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:25.292749   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:26.574700   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:29.136438   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:31.327266   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:34.258286   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:44.499573   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:53:44.907333   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:00.128393   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:04.980946   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:11.541720   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:14.534831   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:14.541185   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:14.552583   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:14.573941   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:14.615373   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:14.696813   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:14.858302   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:15.179951   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:15.821997   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:17.103583   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:19.665451   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:24.786840   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:35.029151   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:45.943038   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:54:55.510623   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:55:06.989814   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:55:22.050059   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:55:36.471976   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:55:55.895295   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:56:01.048948   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:56:07.864980   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:56:28.749338   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:56:58.394181   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:57:03.924116   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:57:08.257350   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/functional-016570/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:57:29.777065   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:57:38.187359   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:57:52.829624   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/default-k8s-diff-port-006978/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:58:05.891865   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/kindnet-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:58:24.005729   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:58:51.706669   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/calico-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:59:11.542322   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/no-preload-349453/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:59:14.535273   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 11:59:42.236408   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/enable-default-cni-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:01:01.049034   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt: no such file or directory" logger="UnhandledError"
E0916 12:02:03.924418   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
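
The E0916 cert_rotation entries above, which recur for the remainder of the run, appear to come from client-go's certificate-rotation watcher still polling client certificates of profiles that were deleted earlier in the run, so the referenced files no longer exist. A quick sanity check on the runner, path copied from the log:

    $ ls -l /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/auto-771611/client.crt
      # expected to fail with "No such file or directory", matching the errors above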

TestNetworkPlugins/group/enable-default-cni/Start (62.62s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-771611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-771611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m2.61832842s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (62.62s)
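
The flags recorded above map directly onto a released minikube binary; a sketch of the same bring-up:

    $ minikube start -p enable-default-cni-771611 --memory=3072 --alsologtostderr \
        --wait=true --wait-timeout=15m --enable-default-cni=true \
        --driver=docker --container-runtime=containerd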

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-zl7lw" [2097b6dc-740c-4073-b340-99fdc41bb11a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004372396s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-771611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-771611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/Start (44.61s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-771611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-771611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (44.608158273s)
--- PASS: TestNetworkPlugins/group/flannel/Start (44.61s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-x8tzt" [7ddcc199-fdea-4c48-a0e7-dc456dbe3163] Running
E0916 11:47:03.924027   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/old-k8s-version-371039/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004604913s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-771611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/bridge/Start (60.26s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-771611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-771611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m0.258349025s)
--- PASS: TestNetworkPlugins/group/bridge/Start (60.26s)

TestNetworkPlugins/group/custom-flannel/Start (40s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-771611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0916 11:48:52.845752   11189 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19651-3687/.minikube/profiles/addons-191972/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-771611 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (39.996302718s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (40.00s)
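
Unlike the built-in --cni presets used elsewhere in this run, this variant loads a CNI manifest from disk (the test passes its bundled testdata/kube-flannel.yaml); a sketch, substituting whatever manifest path you have locally:

    $ minikube start -p custom-flannel-771611 --memory=3072 \
        --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd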

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-771611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-771611 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

Test skip (23/306)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-852440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-852440
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (2.46s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-771611 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
>>> netcat: nslookup debug kubernetes.default a-records:
>>> netcat: dig search kubernetes.default:
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
>>> netcat: nc 10.96.0.10 udp/53:
>>> netcat: nc 10.96.0.10 tcp/53:
>>> netcat: /etc/nsswitch.conf:
>>> netcat: /etc/hosts:
>>> netcat: /etc/resolv.conf:
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: /etc/hosts:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: /etc/resolv.conf:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
>>> host: crictl pods:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: crictl containers:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> k8s: describe netcat deployment:
>>> k8s: describe netcat pod(s):
>>> k8s: netcat logs:
>>> k8s: describe coredns deployment:
>>> k8s: describe coredns pods:
>>> k8s: coredns logs:
>>> k8s: describe api server pod(s):
>>> k8s: api server logs:
>>> host: /etc/cni:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: ip a s:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: ip r s:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: iptables-save:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: iptables table nat:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> k8s: describe kube-proxy daemon set:
>>> k8s: describe kube-proxy pod(s):
>>> k8s: kube-proxy logs:
>>> host: kubelet daemon status:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: kubelet daemon config:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> k8s: kubelet logs:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> k8s: kubectl config:
>>> k8s: cms:
>>> host: docker daemon status:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: docker daemon config:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: docker system info:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: cri-docker daemon status:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: cri-docker daemon config:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: cri-dockerd version:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: containerd daemon status:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: containerd daemon config:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: containerd config dump:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: crio daemon status:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: crio daemon config:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: /etc/crio:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
>>> host: crio config:
* Profile "kubenet-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771611"
----------------------- debugLogs end: kubenet-771611 [took: 2.284375406s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-771611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-771611
--- SKIP: TestNetworkPlugins/group/kubenet (2.46s)
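Note: every "Profile not found" response above is expected. debugLogs probes the host for a "kubenet-771611" profile that was never started, because the test itself was skipped. A minimal sketch of the same checks run by hand, using only the commands the output above already hints at:

    minikube profile list                 # list existing profiles; kubenet-771611 will be absent
    minikube start -p kubenet-771611      # would create the profile the probes expect
    minikube delete -p kubenet-771611     # cleanup, mirroring the delete the harness runs above
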

TestNetworkPlugins/group/cilium (1.83s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-771611 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:

>>> netcat: nslookup debug kubernetes.default a-records:

>>> netcat: dig search kubernetes.default:

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:

>>> netcat: nc 10.96.0.10 udp/53:

>>> netcat: nc 10.96.0.10 tcp/53:

>>> netcat: /etc/nsswitch.conf:

>>> netcat: /etc/hosts:

>>> netcat: /etc/resolv.conf:

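The ten netcat probes above are the harness's in-pod DNS and connectivity checks; their output is empty because no cluster or netcat pod existed when debugLogs ran. A hedged sketch of equivalent manual probes, assuming a running cluster and a deployment named "netcat" (the name is inferred from the "describe netcat deployment" header below, not confirmed by this log; 10.96.0.10 is the cluster DNS address the headers use):

    kubectl exec deploy/netcat -- nslookup kubernetes.default
    kubectl exec deploy/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local
    kubectl exec deploy/netcat -- dig +tcp @10.96.0.10 kubernetes.default.svc.cluster.local
    kubectl exec deploy/netcat -- nc -z -u -w 3 10.96.0.10 53    # udp/53
    kubectl exec deploy/netcat -- nc -z -w 3 10.96.0.10 53       # tcp/53
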
>>> host: /etc/nsswitch.conf:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: /etc/hosts:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: /etc/resolv.conf:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:

>>> host: crictl pods:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: crictl containers:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> k8s: describe netcat deployment:

>>> k8s: describe netcat pod(s):

>>> k8s: netcat logs:

>>> k8s: describe coredns deployment:

>>> k8s: describe coredns pods:

>>> k8s: coredns logs:

>>> k8s: describe api server pod(s):

>>> k8s: api server logs:

>>> host: /etc/cni:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: ip a s:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: ip r s:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: iptables-save:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: iptables table nat:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

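The host-side network probes above (ip a s, ip r s, iptables-save, the nat table) would normally run inside the minikube node. A sketch of the same inspection via minikube ssh, assuming the cilium-771611 profile had actually been started:

    minikube ssh -p cilium-771611 -- ip a s
    minikube ssh -p cilium-771611 -- ip r s
    minikube ssh -p cilium-771611 -- sudo iptables-save
    minikube ssh -p cilium-771611 -- sudo iptables -t nat -L -n
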
>>> k8s: describe cilium daemon set:

>>> k8s: describe cilium daemon set pod(s):

>>> k8s: cilium daemon set container(s) logs (current):

>>> k8s: cilium daemon set container(s) logs (previous):

>>> k8s: describe cilium deployment:

>>> k8s: describe cilium deployment pod(s):

>>> k8s: cilium deployment container(s) logs (current):

>>> k8s: cilium deployment container(s) logs (previous):

>>> k8s: describe kube-proxy daemon set:

>>> k8s: describe kube-proxy pod(s):

>>> k8s: kube-proxy logs:

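The empty k8s sections above correspond to ordinary kubectl queries against the cilium and kube-proxy workloads. A sketch of the equivalents, assuming a running cluster; the k8s-app label selectors are the conventional ones for these components, not something this log confirms:

    kubectl -n kube-system describe ds cilium
    kubectl -n kube-system logs -l k8s-app=cilium --tail=100
    kubectl -n kube-system logs -l k8s-app=cilium --previous
    kubectl -n kube-system describe ds kube-proxy
    kubectl -n kube-system logs -l k8s-app=kube-proxy
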
>>> host: kubelet daemon status:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: kubelet daemon config:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> k8s: kubelet logs:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

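The kubelet probes above map onto a daemon status check plus two config files on the node. A sketch via minikube ssh, assuming a started profile (the file paths are the ones the headers above name):

    minikube ssh -p cilium-771611 -- sudo systemctl status kubelet
    minikube ssh -p cilium-771611 -- sudo cat /etc/kubernetes/kubelet.conf
    minikube ssh -p cilium-771611 -- sudo cat /var/lib/kubelet/config.yaml
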
>>> k8s: kubectl config:

>>> k8s: cms:

>>> host: docker daemon status:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: docker daemon config:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: docker system info:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: cri-docker daemon status:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: cri-docker daemon config:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: cri-dockerd version:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: containerd daemon status:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: containerd daemon config:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: containerd config dump:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: crio daemon status:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: crio daemon config:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: /etc/crio:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

>>> host: crio config:
* Profile "cilium-771611" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771611"

----------------------- debugLogs end: cilium-771611 [took: 1.690106782s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-771611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-771611
--- SKIP: TestNetworkPlugins/group/cilium (1.83s)
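For reference, the container-runtime probes in this section (docker, cri-docker, containerd, crio) reduce to systemd status checks and config-file dumps on the node. A sketch for the containerd case this job targets, assuming a started profile:

    minikube ssh -p cilium-771611 -- sudo systemctl status containerd
    minikube ssh -p cilium-771611 -- sudo cat /etc/containerd/config.toml
    minikube ssh -p cilium-771611 -- sudo crictl ps -a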